**OBSOLETE**, superseded by [[openpower/transcendentals]]
2
3 # Zftrans - transcendental operations
4
5 Summary:
6
*This proposal extends RISC-V scalar floating-point operations to add IEEE754 transcendental functions (pow, log, etc.) and trigonometric functions (sin, cos, etc.). These functions are also 98% shared with the Khronos Group OpenCL Extended Instruction Set.*
8
9 With thanks to:
10
11 * Jacob Lifshay
12 * Dan Petroski
13 * Mitch Alsup
14 * Allen Baum
15 * Andrew Waterman
16 * Luis Vitorio Cargnini
17
18 [[!toc levels=2]]
19
20 See:
21
22 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
23 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
24 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
25 * [[rv_major_opcode_1010011]] for opcode listing.
26 * [[zfpacc_proposal]] for accuracy settings proposal
27
28 Extension subsets:
29
30 * **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi variants: sinpi, cospi, tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin, cos, tan
* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
37 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
38 acosh, atanh (can be synthesised - see below)
39 * **ZftransAdv**: much more complex to implement in hardware
40 * **Zfrsqrt**: Reciprocal square-root.
41
42 Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
43 Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.
44
45 Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi, Zftrans, with Ztrigpi as an augmentation.
46
47 # TODO:
48
49 * Decision on accuracy, moved to [[zfpacc_proposal]]
50 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
51 * Errors **MUST** be repeatable.
52 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
53 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
54 Accuracy requirements for dual (triple) purpose implementations must
55 meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable in its own right to other implementors. This is to be evaluated.
58
59 # Requirements <a name="requirements"></a>
60
This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors across all of these domains to benefit from the tools and hardware
cost reductions associated with common standards adoption in RISC-V (primarily IEEE754 and Vulkan).
64
**There are *four* different, disparate platforms' needs (two new)**:
66
67 * 3D Embedded Platform (new)
68 * Embedded Platform
69 * 3D UNIX Platform (new)
70 * UNIX Platform
71
72 **The use-cases are**:
73
74 * 3D GPUs
75 * Numerical Computation
76 * (Potentially) A.I. / Machine-learning (1)
77
(1) Although approximations suffice in this field, that makes it more likely
to be served by a custom extension. High-end ML would definitely
be excluded.
81
82 **The power and die-area requirements vary from**:
83
84 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
85 * Mobile-Embedded (good performance with high efficiency for battery life)
86 * Desktop Computing
87 * Server / HPC (2)
88
89 (2) Supercomputing is left out of the requirements as it is traditionally
90 covered by Supercomputer Vectorisation Standards (such as RVV).
91
92 **The software requirements are**:
93
94 * Full public integration into GNU math libraries (libm)
95 * Full public integration into well-known Numerical Computation systems (numpy)
96 * Full public integration into upstream GNU and LLVM Compiler toolchains
97 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
98 seeking public Certification and Endorsement from the Khronos Group
99 under their Trademarked Certification Programme.
100
101 **The "contra"-requirements are**:
102
103 * NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
104 Ultra Low Power Embedded platforms (smart watches) are sufficiently
105 resource constrained that Vectorisation (of any kind) is likely to be
106 unnecessary and inappropriate.
107 * The requirements are **not** for the purposes of developing a full custom
108 proprietary GPU with proprietary firmware driven by *hardware* centric
109 optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, such companies typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal, which means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.
115
116 # Requirements Analysis <a name="requirements_analysis"></a>
117
118 **Platforms**:
119
120 3D Embedded will require significantly less accuracy and will need to make
121 power budget and die area compromises that other platforms (including Embedded)
122 will not need to make.
123
3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable there. Conversely, in the UNIX Platform,
IEEE754 compliance is a hard requirement, one that would compromise power
and efficiency if imposed on a 3D UNIX Platform.
128
Even in the Embedded platform, IEEE754 interoperability is beneficial,
whereas if it were a hard requirement the 3D Embedded platform would be severely
compromised in its ability to meet the demanding power budgets of that market.
132
133 Thus, learning from the lessons of
134 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
135 this proposal works in conjunction with the [[zfpacc_proposal]], so as
136 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
137
138 **Use-cases**:
139
140 There really is little else in the way of suitable markets. 3D GPUs
141 have extremely competitive power-efficiency and power-budget requirements
142 that are completely at odds with the other market at the other end of
143 the spectrum: Numerical Computation.
144
145 Interoperability in Numerical Computation is absolutely critical: it
implies (correlates directly with) IEEE754 compliance. However, full
IEEE754 compliance automatically and inherently penalises a GPU on
performance and die area, where that accuracy is simply not necessary.
149
150 To meet the needs of both markets, the two new platforms have to be created,
151 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
152 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
153 compliance *and* 3D performance in a single ASIC.
154
155 **Power and die-area requirements**:
156
157 This is where the conflicts really start to hit home.
158
159 A "Numerical High performance only" proposal (suitable for Server / HPC
160 only) would customise and target the Extension based on a quantitative
161 analysis of the value of certain opcodes *for HPC only*. It would
162 conclude, reasonably and rationally, that it is worthwhile adding opcodes
163 to RVV as parallel Vector operations, and that further discussion of
164 the matter is pointless.
165
166 A "Proprietary GPU effort" (even one that was intended for publication
167 of its API through, for example, a public libre-licensed Vulkan SPIR-V
168 Compiler) would conclude, reasonably and rationally, that, likewise, the
169 opcodes were best suited to be added to RVV, and, further, that their
170 requirements conflict with the HPC world, due to the reduced accuracy.
171 This on the basis that the silicon die area required for IEEE754 is far
172 greater than that needed for reduced-accuracy, and thus their product
173 would be completely unacceptable in the market if it had to meet IEEE754,
174 unnecessarily.
175
176 An "Embedded 3D" GPU has radically different performance, power
177 and die-area requirements (and may even target SoftCores in FPGA).
178 Sharing of the silicon to cover multi-function uses (CORDIC for example)
179 is absolutely essential in order to keep cost and power down, and high
180 performance simply is not. Multi-cycle FSMs instead of pipelines may
181 be considered acceptable, and so on. Subsets of functionality are
182 also essential.
183
184 An "Embedded Numerical" platform has requirements that are separate and
185 distinct from all of the above!
186
187 Mobile Computing needs (tablets, smartphones) again pull in a different
188 direction: high performance, reasonable accuracy, but efficiency is
189 critical. Screen sizes are not at the 4K range: they are within the
190 800x600 range at the low end (320x240 at the extreme budget end), and
191 only the high-performance smartphones and tablets provide 1080p (1920x1080).
192 With lower resolution, accuracy compromises are possible which the Desktop
193 market (4k and soon to be above) would find unacceptable.
194
195 Meeting these disparate markets may be achieved, again, through
196 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
197 to that, subdividing the extension into subsets that best suit the different
198 market areas.
199
200 **Software requirements**:
201
202 A "custom" extension is developed in near-complete isolation from the
203 rest of the RISC-V Community. Cost savings to the Corporation are
204 large, with no direct beneficial feedback to (or impact on) the rest
205 of the RISC-V ecosystem.
206
207 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
208 OpenCL - users have much more influence than first appears. Compliance
209 with these standards is critical as the userbase (Games writers,
210 scientific applications) expects not to have to rewrite extremely large
211 and costly codebases to conform with *non-standards-compliant* hardware.
212
213 Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
214 is paramount, and compliance with Trademarked Standards is critical.
Any deviation from Trademarked Standards means that an implementation
may not be sold whilst claiming to be, for example, "Vulkan
compatible".
218
219 For 3D, this in turn reinforces and makes a hard requirement a need for public
220 compliance with such standards, over-and-above what would otherwise be
221 set by a RISC-V Standards Development Process, including both the
222 software compliance and the knock-on implications that has for hardware.
223
224 For libraries such as libm and numpy, accuracy is paramount, for software interoperability across multiple platforms. Some algorithms critically rely on correct IEEE754, for example.
225 The conflicting accuracy requirements can be met through the zfpacc extension.
226
227 **Collaboration**:
228
229 The case for collaboration on any Extension is already well-known.
230 In this particular case, the precedent for inclusion of Transcendentals
231 in other ISAs, both from Graphics and High-performance Computing, has
232 these primitives well-established in high-profile software libraries and
233 compilers in both GPU and HPC Computer Science divisions. Collaboration
234 and shared public compliance with those standards brooks no argument.
235
236 The combined requirements of collaboration and multi accuracy requirements
237 mean that *overall this proposal is categorically and wholly unsuited
238 to relegation of "custom" status*.
239
240 # Quantitative Analysis <a name="analysis"></a>
241
242 This is extremely challenging. Normally, an Extension would require full,
243 comprehensive and detailed analysis of every single instruction, for every
244 single possible use-case, in every single market. The amount of silicon
245 area required would be balanced against the benefits of introducing extra
246 opcodes, as well as a full market analysis performed to see which divisions
247 of Computer Science benefit from the introduction of the instruction,
248 in each and every case.
249
250 With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, over 136 separate and distinct
analyses would be required: not a practical proposition.
253
254 A little more intelligence has to be applied to the problem space,
255 to reduce it down to manageable levels.
256
257 Fortunately, the subdivision by Platform, in combination with the
258 identification of only two primary markets (Numerical Computation and
259 3D), means that the logical reasoning applies *uniformly* and broadly
260 across *groups* of instructions rather than individually, making it a primarily
261 hardware-centric and accuracy-centric decision-making process.
262
263 In addition, hardware algorithms such as CORDIC can cover such a wide
264 range of operations (simply by changing the input parameters) that the
265 normal argument of compromising and excluding certain opcodes because they
266 would significantly increase the silicon area is knocked down.
267
268 However, CORDIC, whilst space-efficient, and thus well-suited to
269 Embedded, is an old iterative algorithm not well-suited to High-Performance
270 Computing or Mid to High-end GPUs, where commercially-competitive
271 FP32 pipeline lengths are only around 5 stages.
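As an illustration of that breadth, here is a minimal software sketch of circular CORDIC in rotation mode (Python; a toy model for exposition, not a hardware description). One shift-and-add loop produces sin and cos simultaneously, and the same loop in its other modes and variants yields atan, vector magnitude and the hyperbolic equivalents:

```python
import math

def cordic_sincos(theta, iterations=32):
    """Circular CORDIC, rotation mode: returns (sin(theta), cos(theta)).

    Valid for |theta| < ~1.74 radians (the sum of the rotation angles);
    wider ranges need a quadrant reduction first.
    """
    # the per-iteration rotation angles atan(2^-i) and the cumulative
    # gain are loop-invariant constants (ROM tables in hardware)
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0 / gain, 0.0, theta   # pre-scale x by 1/gain
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0  # rotate towards z == 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x
```

The angle table and gain correspond to the ROM constants mentioned later; in hardware the multiplies by 2^-i are plain shifts, which is why the unit is so area-efficient yet iterative.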
272
273 Not only that, but some operations such as LOG1P, which would normally
274 be excluded from one market (due to there being an alternative macro-op
275 fused sequence replacing it) are required for other markets due to
276 the higher accuracy obtainable at the lower range of input values when
277 compared to LOG(1+P).
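The LOG1P accuracy argument can be seen directly in software (Python sketch, with `math.log1p` standing in for a hardware FLOG1P):

```python
import math

p = 1e-10

# forming 1.0 + p first rounds away most of p's low-order bits,
# so log() of the rounded sum carries that error forward
naive = math.log(1.0 + p)

# log1p receives the small argument intact
fused = math.log1p(p)

# log(1 + p) = p - p*p/2 + ..., so p itself is the true value
# to roughly 10 significant digits at this magnitude
rel_err_naive = abs(naive - p) / p
rel_err_fused = abs(fused - p) / p
```

At p = 1e-10 the naive form is wrong in around the 8th significant digit, while log1p is accurate to nearly full double precision.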
278
279 (Thus we start to see why "proprietary" markets are excluded from this
280 proposal, because "proprietary" markets would make *hardware*-driven
281 optimisation decisions that would be completely inappropriate for a
282 common standard).
283
ATAN and ATAN2 are another example area in which one market's needs
conflict directly with another's: the only viable solution, without compromising
286 one market to the detriment of the other, is to provide both opcodes
287 and let implementors make the call as to which (or both) to optimise,
288 at the *hardware* level.
289
290 Likewise it is well-known that loops involving "0 to 2 times pi", often
291 done in subdivisions of powers of two, are costly to do because they
292 involve floating-point multiplication by PI in each and every loop.
293 3D GPUs solved this by providing SINPI variants which range from 0 to 1
and perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it is a
loop-invariant magic constant).
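A sketch of the FSINPI semantics and the loop shape it serves (Python; `sinpi` here is an illustrative software model, not the proposed hardware):

```python
import math

def sinpi(x):
    # FSINPI semantics: the multiply by pi happens *inside* the
    # operation, where hardware can fold it into its constants
    return math.sin(math.pi * x)

# a 0..2*pi loop in a power-of-two number of subdivisions: the program
# only ever generates the exact fractions 2*i/N, never pi itself
N = 8
samples = [sinpi(2.0 * i / N) for i in range(N)]
```

A hardware FSINPI would return exactly 0.0 for sinpi(1.0); the software model above leaves a ~1e-16 residue from the rounded pi multiply, which is precisely the error the opcode avoids.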
297
298 However, some markets may not wish to *use* CORDIC, for reasons mentioned
299 above, and, again, one market would be penalised if SINPI was prioritised
300 over SIN, or vice-versa.
301
302 In essence, then, even when only the two primary markets (3D and
303 Numerical Computation) have been identified, this still leaves two
304 (three) diametrically-opposed *accuracy* sub-markets as the prime
305 conflict drivers:
306
307 * Embedded Ultra Low Power
308 * IEEE754 compliance
309 * Khronos Vulkan compliance
310
311 Thus the best that can be done is to use Quantitative Analysis to work
312 out which "subsets" - sub-Extensions - to include, provide an additional
313 "accuracy" extension, be as "inclusive" as possible, and thus allow
314 implementors to decide what to add to their implementation, and how best
315 to optimise them.
316
317 This approach *only* works due to the uniformity of the function space,
318 and is **not** an appropriate methodology for use in other Extensions
319 with huge (non-uniform) market diversity even with similarly large
320 numbers of potential opcodes. BitManip is the perfect counter-example.
321
322 # Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>
323
324 This list shows the (direct) equivalence between proposed opcodes,
325 their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
326 98% of the opcodes in this proposal that are in the IEEE754-2019 standard
327 are present in the Khronos Extended Instruction Set.
328
For RISC-V opcode encodings, see
330 [[rv_major_opcode_1010011]]
331
332 See
333 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
334 and <https://ieeexplore.ieee.org/document/8766229>
335
336 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
337 use of the "fmt" field that is already present in the RISC-V Specification.
338 * "Native" opcodes are *not* being proposed: implementors will be expected
339 to use the (equivalent) proposed opcode covering the same function.
340 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
341 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
342 vectors (or can be done as scalar operations using other RISC-V instructions).
343
344 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
345 Deviation from conformance with the Khronos Specification - including the
346 Khronos Specification accuracy requirements - is not an option, as it
347 results in non-compliance, and the vendor may not use the Trademarked words
348 "Vulkan" etc. in conjunction with their product.
349
IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions missing when compared to OpenCL are
compound, exp2m1, exp10m1, log2p1 and log10p1.
353
354 [[!table data="""
355 opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
356 FSIN | sin | half\_sin | native\_sin | NONE | sin |
357 FCOS | cos | half\_cos | native\_cos | NONE | cos |
358 FTAN | tan | half\_tan | native\_tan | NONE | tan |
359 NONE (1) | sincos | NONE | NONE | NONE | NONE |
360 FASIN | asin | NONE | NONE | NONE | asin |
361 FACOS | acos | NONE | NONE | NONE | acos |
362 FATAN | atan | NONE | NONE | NONE | atan |
363 FSINPI | sinpi | NONE | NONE | NONE | sinPi |
364 FCOSPI | cospi | NONE | NONE | NONE | cosPi |
365 FTANPI | tanpi | NONE | NONE | NONE | tanPi |
366 FASINPI | asinpi | NONE | NONE | NONE | asinPi |
367 FACOSPI | acospi | NONE | NONE | NONE | acosPi |
368 FATANPI | atanpi | NONE | NONE | NONE | atanPi |
369 FSINH | sinh | NONE | NONE | NONE | sinh |
370 FCOSH | cosh | NONE | NONE | NONE | cosh |
371 FTANH | tanh | NONE | NONE | NONE | tanh |
372 FASINH | asinh | NONE | NONE | NONE | asinh |
373 FACOSH | acosh | NONE | NONE | NONE | acosh |
374 FATANH | atanh | NONE | NONE | NONE | atanh |
375 FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
FATAN2PI | atan2pi | NONE | NONE | NONE | atan2Pi |
377 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
378 FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
379 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
380 FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
381 FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
382 FLOG1P | log1p | NONE | NONE | NONE | logp1 |
383 FEXP | exp | half\_exp | native\_exp | NONE | exp |
384 FLOG | log | half\_log | native\_log | NONE | log |
385 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
386 FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
387 FPOW | pow | NONE | NONE | NONE | pow |
388 FPOWN | pown | NONE | NONE | NONE | pown |
389 FPOWR | powr | half\_powr | native\_powr | NONE | powr |
390 FROOTN | rootn | NONE | NONE | NONE | rootn |
391 FHYPOT | hypot | NONE | NONE | NONE | hypot |
392 FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
393 NONE | NONE | NONE | NONE | NONE | compound |
394 NONE | NONE | NONE | NONE | NONE | exp2m1 |
395 NONE | NONE | NONE | NONE | NONE | exp10m1 |
396 NONE | NONE | NONE | NONE | NONE | log2p1 |
397 NONE | NONE | NONE | NONE | NONE | log10p1 |
398 """]]
399
400 Note (1) FSINCOS is macro-op fused (see below).
401
Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"
403
404 Note (3) synthesised in IEEE754-2019 using "1.0 / x"
405
406 ## List of 2-arg opcodes
407
408 [[!table data="""
409 opcode | Description | pseudocode | Extension |
410 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
411 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
412 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
413 FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
415 FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
416 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
417 """]]
418
419 ## List of 1-arg transcendental opcodes
420
421 [[!table data="""
422 opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
428 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
429 FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
430 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
431 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
432 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
433 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
434 """]]
435
436 ## List of 1-arg trigonometric opcodes
437
438 [[!table data="""
439 opcode | Description | pseudo-code | Extension |
440 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
441 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
442 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
443 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
444 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
445 FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
446 FSINPI | sin times pi | rd = sin(pi * rs1) | Ztrigpi |
447 FCOSPI | cos times pi | rd = cos(pi * rs1) | Ztrigpi |
448 FTANPI | tan times pi | rd = tan(pi * rs1) | Ztrigpi |
449 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
450 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
451 FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
452 FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
453 FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
454 FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
455 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
456 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
457 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
458 """]]
459
460 # Subsets
461
462 The full set is based on the Khronos OpenCL opcodes. If implemented
463 entirely it would be too much for both Embedded and also 3D.
464
The subsets are organised by hardware complexity and need (3D, HPC); however,
because synthesis produces inaccurate results at the range limits,
the less common subsets are still required for IEEE754 HPC.
468
MALI Midgard, an embedded/mobile 3D GPU, for example has only the
following opcodes:
471
472 E8 - fatan_pt2
473 F0 - frcp (reciprocal)
474 F2 - frsqrt (inverse square root, 1/sqrt(x))
475 F3 - fsqrt (square root)
476 F4 - fexp2 (2^x)
477 F5 - flog2
478 F6 - fsin1pi
479 F7 - fcos1pi
480 F9 - fatan_pt1
481
These are in FP32 and FP16 only: no FP64 hardware, at all.
483
484 Vivante Embedded/Mobile 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has the following:
485
486 sin, cos2pi
487 cos, sin2pi
488 log2, exp
489 sqrt and rsqrt
490 recip.
491
492 It also has fast variants of some of these, as a CSR Mode.
493
494 AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
495 RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:
496
497 COS2PI (appx)
498 EXP2
499 LOG (IEEE754)
500 RECIP
501 RSQRT
502 SQRT
503 SIN2PI (appx)
504
505 AMD RDNA has F16 and F32 variants of all the above, and also has F64
506 variants of SQRT, RSQRT and RECIP. It is interesting that even the
507 modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
508 does.
509
As a general point: customised, optimised hardware targeting
FP32 3D with less accuracy simply cannot be used either for IEEE754 or
for FP64 (except as a starting point for hardware- or software-driven
Newton-Raphson or other iterative methods).
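The Newton-Raphson refinement route can be sketched as follows (Python; the seed accuracy is a hypothetical figure for a low-accuracy 3D unit, the iteration itself is the standard one for f(y) = 1/y² − x):

```python
import math

def rsqrt_refine(x, y0, steps=2):
    # Newton-Raphson on f(y) = 1/y**2 - x converges to 1/sqrt(x):
    #   y' = y * (1.5 - 0.5 * x * y * y)
    # each step roughly doubles the number of correct bits, so a
    # low-accuracy FP32 3D result can seed a high-accuracy answer
    y = y0
    for _ in range(steps):
        y = y * (1.5 - 0.5 * x * y * y)
    return y

# crude seed: 1/sqrt(2) correct to about 4 decimal digits
refined = rsqrt_refine(2.0, 0.7071)
```

The quadratic convergence is the reason a reduced-accuracy unit is still a useful *starting point*, even though it cannot serve as the final IEEE754 or FP64 result on its own.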
514
Also, in cost/area-sensitive applications, even the extra ROM lookup tables
required by certain algorithms may be too costly.
517
518 These wildly differing and incompatible driving factors lead to the
519 subset subdivisions, below.
520
521 ## Transcendental Subsets
522
523 ### Zftrans
524
525 LOG2 EXP2 RECIP RSQRT
526
Zftrans contains the minimum standard transcendentals best suited to
3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic trigonometric functions sinh and so on.
530
531 They are therefore considered "base" (essential) transcendentals.
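A sketch of that synthesis (Python; the constants are loop-invariant and could be materialised once by a compiler — this is exactly the route whose accuracy degrades towards the range limits):

```python
import math

# ZftransExt-style operations built from the Zftrans base (LOG2/EXP2)
LOG2_E = math.log2(math.e)
LOG2_10 = math.log2(10.0)

def log_from_log2(x):
    return math.log2(x) / LOG2_E

def log10_from_log2(x):
    return math.log2(x) / LOG2_10

def exp_from_exp2(x):
    return 2.0 ** (x * LOG2_E)

def exp10_from_exp2(x):
    return 2.0 ** (x * LOG2_10)
```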
532
533 ### ZftransExt
534
LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
536
537 These are extra transcendental functions that are useful, not generally
538 needed for 3D, however for Numerical Computation they may be useful.
539
Although they can be synthesised using Zftrans (LOG2 multiplied
541 by a constant), there is both a performance penalty as well as an
542 accuracy penalty towards the limits, which for IEEE754 compliance is
543 unacceptable. In particular, LOG(1+rs1) in hardware may give much better
544 accuracy at the lower end (very small rs1) than LOG(rs1).
545
546 Their forced inclusion would be inappropriate as it would penalise
547 embedded systems with tight power and area budgets. However if they
548 were completely excluded the HPC applications would be penalised on
549 performance and accuracy.
550
551 Therefore they are their own subset extension.
552
553 ### Zfhyp
554
555 SINH, COSH, TANH, ASINH, ACOSH, ATANH
556
557 These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D is limited.
558
559 They can all be synthesised using LOG, SQRT and so on, so depend
560 on Zftrans. However, once again, at the limits of the range, IEEE754
561 compliance becomes impossible, and thus a hardware implementation may
562 be required.
563
564 HPC and high-end GPUs are likely markets for these.
565
566 ### ZftransAdv
567
568 CBRT, POW, POWN, POWR, ROOTN
569
570 These are simply much more complex to implement in hardware, and typically
571 will only be put into HPC applications.
572
**Zfrsqrt** (reciprocal square-root) is deliberately kept as its own separate subset.
574
575 ## Trigonometric subsets
576
577 ### Ztrigpi vs Ztrignpi
578
579 * **Ztrigpi**: SINPI COSPI TANPI
580 * **Ztrignpi**: SIN COS TAN
581
582 Ztrignpi are the basic trigonometric functions through which all others
583 could be synthesised, and they are typically the base trigonometrics
584 provided by GPUs for 3D, warranting their own subset.
585
In the case of the Ztrigpi subset, these are commonly used in for-loops
with a power-of-two number of subdivisions, and the cost of multiplying
588 by PI inside each loop (or cumulative addition, resulting in cumulative
589 errors) is not acceptable.
590
591 In for example CORDIC the multiplication by PI may be moved outside of
592 the hardware algorithm as a loop invariant, with no power or area penalty.
593
Again, therefore, if SINPI (etc.) were excluded, programmers would be penalised by being forced to divide by PI in some circumstances. Likewise if SIN were excluded, programmers would be penalised by being forced to *multiply* by PI in some circumstances.
595
596 Thus again, a slightly different application of the same general argument applies to give Ztrignpi and
597 Ztrigpi as subsets. 3D GPUs will almost certainly provide both.
598
599 ### Zarctrigpi and Zarctrignpi
600
601 * **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
602 * **Zarctrignpi**: ATAN2 ACOS ASIN
603
These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs, particularly embedded and mobile class
GPUs, they are not so common and so are typically synthesised there.
607
608 Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
609 once again, both a performance penalty as well as an accuracy penalty
610 towards the limits, which for IEEE754 compliance is unacceptable, yet
611 is acceptable for 3D.
612
613 Therefore they are their own subset extensions.
614
615 # Synthesis, Pseudo-code ops and macro-ops
616
617 The pseudo-ops are best left up to the compiler rather than being actual
618 pseudo-ops, by allocating one scalar FP register for use as a constant
619 (loop invariant) set to "1.0" at the beginning of a function or other
620 suitable code block.
621
622 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
623 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
624
625 FATANPI example pseudo-code:
626
    lui        t0, 0x3F800      # upper 20 bits of f32 1.0 (0x3F800000)
    fmv.w.x    ft0, t0          # move the bit-pattern into an FP register
    fatan2pi.s rd, rs1, ft0     # atanpi(rs1) == atan2pi(rs1, 1.0)
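In semantic terms the sequence computes the following (Python model; the function name is illustrative only):

```python
import math

def fatanpi(x):
    # the sequence above: materialise 1.0 in an FP register, then
    # use FATAN2PI, since atan(x) == atan2(x, 1.0)
    return math.atan2(x, 1.0) / math.pi
```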
630
631 Hyperbolic function example (obviates need for Zfhyp except for
632 high-performance or correctly-rounding):
633
634 ASINH( x ) = ln( x + SQRT(x**2+1))
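Checked numerically (Python sketch; the synthesised form matches the library function for moderate x, with accuracy degrading for large-magnitude negative x, which is the range-limit problem mentioned above):

```python
import math

def asinh_synth(x):
    # ASINH(x) = ln(x + sqrt(x**2 + 1)); for x << 0 the addition
    # cancels catastrophically, which is why a Zfhyp hardware opcode
    # (or a reformulated expression) is needed for full accuracy
    return math.log(x + math.sqrt(x * x + 1.0))
```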
635
636 # Evaluation and commentary
637
638 This section will move later to discussion.
639
640 ## Reciprocal
641
FRECIP used to be an alias. Some implementors may wish to implement divide as
y times recip(x).
644
645 Others may have shared hardware for recip and divide, others may not.
646
647 To avoid penalising one implementor over another, recip stays.
648
649 ## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
650
The RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
654 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
655
656 > > correctly-rounded LOG will return different results than LOGP1 and ADD.
657 > > Likewise for EXP and EXPM1
658
659 > ok, they stay in as real opcodes, then.
660
661 ## ATAN / ATAN2 commentary
662
663 Discussion starts here:
664 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
665
666 from Mitch Alsup:
667
668 would like to point out that the general implementations of ATAN2 do a
669 bunch of special case checks and then simply call ATAN.
670
671 double ATAN2( double y, double x )
672 { // IEEE 754-2008 quality ATAN2
673
674 // deal with NANs
675 if( ISNAN( x ) ) return x;
676 if( ISNAN( y ) ) return y;
677
678 // deal with infinities
679 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
680 if( x == +∞ ) return copysign( 0.0, y );
681 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
682 if( x == -∞ ) return copysign( π, y );
683 if( |y|== +∞ ) return copysign( π/2, y );
684
685 // deal with signed zeros
686 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
687 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
688 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
689
690 // calculate ATAN2 textbook style
691 if( x > 0.0 ) return ATAN( |y / x| );
692 if( x < 0.0 ) return π - ATAN( |y / x| );
693 }
694
695
696 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
697 a constant and then call/use ATAN2.
698
699 When one considers an implementation of ATAN, one must consider several
700 ranges of evaluation::
701
    x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
703 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
704 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
705
706 I should point out that the add/sub of π/2 can not lose significance
707 since the result of ATAN(1/x) is bounded 0..π/2
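As an editorial cross-check, the range reduction holds numerically (Python; the x ≤ -1.0 branch written as a subtraction, consistent with the identity atan(x) + atan(1/x) = -π/2 for negative x):

```python
import math

def atan_reduced(x):
    # range reduction so the core ATAN only ever sees |x| <= 1
    if x >= 1.0:
        return math.pi / 2 - math.atan(1.0 / x)
    if x <= -1.0:
        return -math.pi / 2 - math.atan(1.0 / x)
    return math.atan(x)
```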
708
The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
712
713 --------------------------------------------------------
714
We therefore, I think, have a case for bringing back ATAN and including ATAN2.
716
717 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
718
719 (that is the hypothesis, to be evaluated for correctness. feedback requested).
720
This is because we cannot compromise or prioritise one platform's
speed/accuracy over another: it is not reasonable or desirable to
penalise one implementor over another.
724
Thus, to keep interoperability, all implementors must have both
opcodes, and may choose, at the architectural and routing level, which
one to implement in terms of the other.
728
729 Allowing implementors to choose to add either opcode and let traps sort it
730 out leaves an uncertainty in the software developer's mind: they cannot
731 trust the hardware, available from many vendors, to be performant right
732 across the board.
733
734 Standards are a pig.
735
736 ---
737
738 I might suggest that if there were a way for a calculation to be performed
739 and the result of that calculation chained to a subsequent calculation
740 such that the precision of the result-becomes-operand is wider than
741 what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:
745
746 z = x / y
747
748 can be calculated as::
749
750 z = x * (1/y)
751
752 Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while
x/y cannot be (at reasonable area). It
756 is also not "that inaccurate" displaying 0.625-to-0.52 ULP.
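The speed/accuracy trade described here can be reproduced in software (Python sketch, rounding every intermediate through binary32 with `struct` to mimic an FP32 datapath):

```python
import struct

def f32(x):
    # round a Python double to the nearest IEEE754 binary32 value
    return struct.unpack('f', struct.pack('f', x))[0]

def div_direct(x, y):
    # divide computed in double then rounded to f32: a close model
    # of a correctly-rounded FP32 divide
    return f32(f32(x) / f32(y))

def div_recip(x, y):
    # GPU style: pipelined reciprocal then multiply; two roundings,
    # so the result may differ from the divide by about 1 ulp
    return f32(f32(x) * f32(1.0 / f32(y)))
```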
757
758 Given that one has the ability to carry (and process) more fraction bits,
759 one can then do high precision multiplies of π or other transcendental
760 radixes.
761
762 And GPUs have been doing this almost since the dawn of 3D.
763
764 // calculate ATAN2 high performance style
765 // Note: at this point x != y
766 //
767 if( x > 0.0 )
768 {
769 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
770 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
771 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
772 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
773 }
774 if( x < 0.0 )
775 {
776 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
777 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
778 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
779 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
780 }
781
782 This way the adds and subtracts from the constant are not in a precision
783 precarious position.