1 # Zftrans - transcendental operations
2
3 *This proposal extends RISC-V scalar floating point operations to add IEEE754 transcendental functions (pow, log etc) and trigonometric functions (sin, cos etc). These functions are also 98% shared with the Khronos Group OpenCL Extended Instruction Set.*
4
5 With thanks to:
6
7 * Jacob Lifshay
8 * Dan Petroski
9 * Mitch Alsup
10 * Allen Baum
11 * Andrew Waterman
12 * Luis Vitorio Cargnini
13
14 [[!toc levels=2]]
15
16 See:
17
18 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
19 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
20 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
21 * [[rv_major_opcode_1010011]] for opcode listing.
22 * [[zfpacc_proposal]] for accuracy settings proposal
23
24 Extension subsets:
25
26 * **Zftrans**: standard transcendentals (best suited to 3D)
27 * **ZftransExt**: extra functions (useful, not generally needed for 3D,
28   can be synthesised using Zftrans)
29 * **Ztrigpi**: trig. xxx-pi variants: sinpi, cospi, tanpi
30 * **Ztrignpi**: trig. non-xxx-pi variants: sin, cos, tan
31 * **Zarctrigpi**: arc-trig. a-xxx-pi variants: atan2pi, asinpi, acospi
32 * **Zarctrignpi**: arc-trig. non-a-xxx-pi variants: atan2, asin, acos
33 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
34 acosh, atanh (can be synthesised - see below)
35 * **ZftransAdv**: much more complex to implement in hardware
36 * **Zfrsqrt**: Reciprocal square-root.
37
38 Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
39 Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.
40
41 Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi, Zftrans, with Ztrigpi as an augmentation.
42
43 # TODO:
44
45 * Decision on accuracy, moved to [[zfpacc_proposal]]
46 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
47 * Errors **MUST** be repeatable.
48 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
49 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
50 Accuracy requirements for dual (triple) purpose implementations must
51 meet the higher standard.
52 * Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
53   it is desirable on its own to other implementors. This is to be evaluated.
54
55 # Requirements <a name="requirements"></a>
56
57 This proposal is designed to meet a wide range of extremely diverse needs,
58 allowing implementors from all of them to benefit from the tools and hardware
59 cost reductions associated with common standards adoption.
60
61 **There are *four* different, disparate platforms' needs (two new)**:
62
63 * 3D Embedded Platform (new)
64 * Embedded Platform
65 * 3D UNIX Platform (new)
66 * UNIX Platform
67
68 **The use-cases are**:
69
70 * 3D GPUs
71 * Numerical Computation
72 * (Potentially) A.I. / Machine-learning (1)
73
74 (1) Although approximations suffice in this field, that makes it more
75 likely to be served by a custom extension. High-end ML would inherently
76 be excluded.
77
78 **The power and die-area requirements vary from**:
79
80 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
81 * Mobile-Embedded (good performance with high efficiency for battery life)
82 * Desktop Computing
83 * Server / HPC (2)
84
85 (2) Supercomputing is left out of the requirements as it is traditionally
86 covered by Supercomputer Vectorisation Standards (such as RVV).
87
88 **The software requirements are**:
89
90 * Full public integration into GNU math libraries (libm)
91 * Full public integration into well-known Numerical Computation systems (numpy)
92 * Full public integration into upstream GNU and LLVM Compiler toolchains
93 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
94 seeking public Certification and Endorsement from the Khronos Group
95 under their Trademarked Certification Programme.
96
97 **The "contra"-requirements are**:
98
99 * NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
100 Ultra Low Power Embedded platforms (smart watches) are sufficiently
101 resource constrained that Vectorisation (of any kind) is likely to be
102 unnecessary and inappropriate.
103 * The requirements are **not** for the purposes of developing a full custom
104 proprietary GPU with proprietary firmware driven by *hardware* centric
105 optimised design decisions as a priority over collaboration.
106 * A full custom proprietary GPU ASIC Manufacturer *may* benefit from
107   this proposal; however, since they typically develop proprietary
108   software that is not shared with the rest of the community likely to
109   use this proposal, they have completely different needs.
110 * This proposal is for *sharing* of effort in reducing development costs.
111
112 # Requirements Analysis <a name="requirements_analysis"></a>
113
114 **Platforms**:
115
116 3D Embedded will require significantly less accuracy and will need to make
117 power budget and die area compromises that other platforms (including Embedded)
118 will not need to make.
119
120 3D UNIX Platform has to be performance-price-competitive: subtly-reduced
121 accuracy in FP32 is acceptable where, conversely, in the UNIX Platform,
122 IEEE754 compliance is a hard requirement that would compromise power
123 and efficiency on a 3D UNIX Platform.
124
125 Even in the Embedded platform, IEEE754 interoperability is beneficial,
126 where if it was a hard requirement the 3D Embedded platform would be severely
127 compromised in its ability to meet the demanding power budgets of that market.
128
129 Thus, learning from the lessons of
130 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
131 this proposal works in conjunction with the [[zfpacc_proposal]], so as
132 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
133
134 **Use-cases**:
135
136 There really is little else in the way of suitable markets. 3D GPUs
137 have extremely competitive power-efficiency and power-budget requirements
138 that are completely at odds with the other market at the other end of
139 the spectrum: Numerical Computation.
140
141 Interoperability in Numerical Computation is absolutely critical: it
142 implies (correlates directly with) IEEE754 compliance. However full
143 IEEE754 compliance automatically and inherently penalises a GPU on
144 performance and die area, where such accuracy is simply not necessary.
145
146 To meet the needs of both markets, the two new platforms have to be created,
147 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
148 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
149 compliance *and* 3D performance in a single ASIC.
150
151 **Power and die-area requirements**:
152
153 This is where the conflicts really start to hit home.
154
155 A "Numerical High performance only" proposal (suitable for Server / HPC
156 only) would customise and target the Extension based on a quantitative
157 analysis of the value of certain opcodes *for HPC only*. It would
158 conclude, reasonably and rationally, that it is worthwhile adding opcodes
159 to RVV as parallel Vector operations, and that further discussion of
160 the matter is pointless.
161
162 A "Proprietary GPU effort" (even one that was intended for publication
163 of its API through, for example, a public libre-licensed Vulkan SPIR-V
164 Compiler) would conclude, reasonably and rationally, that, likewise, the
165 opcodes were best suited to be added to RVV, and, further, that their
166 requirements conflict with the HPC world, due to the reduced accuracy.
167 This on the basis that the silicon die area required for IEEE754 is far
168 greater than that needed for reduced-accuracy, and thus their product
169 would be completely unacceptable in the market if it had to meet IEEE754,
170 unnecessarily.
171
172 An "Embedded 3D" GPU has radically different performance, power
173 and die-area requirements (and may even target SoftCores in FPGA).
174 Sharing of the silicon to cover multi-function uses (CORDIC for example)
175 is absolutely essential in order to keep cost and power down, and high
176 performance simply is not. Multi-cycle FSMs instead of pipelines may
177 be considered acceptable, and so on. Subsets of functionality are
178 also essential.
179
180 An "Embedded Numerical" platform has requirements that are separate and
181 distinct from all of the above!
182
183 Mobile Computing needs (tablets, smartphones) again pull in a different
184 direction: high performance, reasonable accuracy, but efficiency is
185 critical. Screen sizes are not at the 4K range: they are within the
186 800x600 range at the low end (320x240 at the extreme budget end), and
187 only the high-performance smartphones and tablets provide 1080p (1920x1080).
188 With lower resolution, accuracy compromises are possible which the Desktop
189 market (4K and soon above) would find unacceptable.
190
191 Meeting these disparate markets may be achieved, again, through
192 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
193 to that, subdividing the extension into subsets that best suit the different
194 market areas.
195
196 **Software requirements**:
197
198 A "custom" extension is developed in near-complete isolation from the
199 rest of the RISC-V Community. Cost savings to the Corporation are
200 large, with no direct beneficial feedback to (or impact on) the rest
201 of the RISC-V ecosystem.
202
203 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
204 OpenCL - users have much more influence than first appears. Compliance
205 with these standards is critical as the userbase (Games writers,
206 scientific applications) expects not to have to rewrite extremely large
207 and costly codebases to conform with *non-standards-compliant* hardware.
208
209 Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
210 is paramount, and compliance with Trademarked Standards is critical.
211 Any deviation from Trademarked Standards means that an implementation
212 may not be sold whilst claiming to be, for example, "Vulkan
213 compatible".
214
215 For 3D, this in turn reinforces and makes a hard requirement a need for public
216 compliance with such standards, over-and-above what would otherwise be
217 set by a RISC-V Standards Development Process, including both the
218 software compliance and the knock-on implications that has for hardware.
219
220 For libraries such as libm and numpy, accuracy is paramount for software interoperability across multiple platforms: some algorithms critically rely on correct IEEE754 behaviour, for example.
221 The conflicting accuracy requirements can be met through the zfpacc extension.
222
223 **Collaboration**:
224
225 The case for collaboration on any Extension is already well-known.
226 In this particular case, the precedent for inclusion of Transcendentals
227 in other ISAs, both from Graphics and High-performance Computing, has
228 these primitives well-established in high-profile software libraries and
229 compilers in both GPU and HPC Computer Science divisions. Collaboration
230 and shared public compliance with those standards brooks no argument.
231
232 The combined requirements of collaboration and multiple accuracy levels
233 mean that *overall this proposal is categorically and wholly unsuited
234 to relegation to "custom" status*.
235
236 # Quantitative Analysis <a name="analysis"></a>
237
238 This is extremely challenging. Normally, an Extension would require full,
239 comprehensive and detailed analysis of every single instruction, for every
240 single possible use-case, in every single market. The amount of silicon
241 area required would be balanced against the benefits of introducing extra
242 opcodes, as well as a full market analysis performed to see which divisions
243 of Computer Science benefit from the introduction of the instruction,
244 in each and every case.
245
246 With 34 instructions, four possible Platforms, and sub-categories of
247 implementations even within each Platform, over 136 separate and distinct
248 analyses are not a practical proposition.
249
250 A little more intelligence has to be applied to the problem space,
251 to reduce it down to manageable levels.
252
253 Fortunately, the subdivision by Platform, in combination with the
254 identification of only two primary markets (Numerical Computation and
255 3D), means that the logical reasoning applies *uniformly* and broadly
256 across *groups* of instructions rather than individually, making it a primarily
257 hardware-centric and accuracy-centric decision-making process.
258
259 In addition, hardware algorithms such as CORDIC can cover such a wide
260 range of operations (simply by changing the input parameters) that the
261 normal argument of compromising and excluding certain opcodes because they
262 would significantly increase the silicon area is knocked down.
263
264 However, CORDIC, whilst space-efficient, and thus well-suited to
265 Embedded, is an old iterative algorithm not well-suited to High-Performance
266 Computing or Mid to High-end GPUs, where commercially-competitive
267 FP32 pipeline lengths are only around 5 stages.
268
269 Not only that, but some operations such as LOG1P, which would normally
270 be excluded from one market (due to there being an alternative macro-op
271 fused sequence replacing it) are required for other markets due to
272 the higher accuracy obtainable at the lower range of input values when
273 compared to LOG(1+P).
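
The LOG1P accuracy argument is easy to demonstrate in software. A minimal Python sketch (illustrative only; `math.log1p` stands in for a hardware FLOG1P):

```python
import math

# For very small p, forming 1+p first discards most of p's significand,
# which is exactly why a dedicated LOG1P operation exists.
p = 1e-12

naive    = math.log(1.0 + p)   # rounds 1+p *before* taking the log
accurate = math.log1p(p)       # computes log(1+p) without forming 1+p

# log(1+p) ~ p for tiny p, so p itself is a good reference value
rel_err_naive    = abs(naive - p) / p
rel_err_accurate = abs(accurate - p) / p
```

The naive route loses roughly half the available precision at this input, while the dedicated operation stays accurate to near machine epsilon.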
274
275 (Thus we start to see why "proprietary" markets are excluded from this
276 proposal, because "proprietary" markets would make *hardware*-driven
277 optimisation decisions that would be completely inappropriate for a
278 common standard).
279
280 ATAN and ATAN2 is another example area in which one market's needs
281 conflict directly with another: the only viable solution, without compromising
282 one market to the detriment of the other, is to provide both opcodes
283 and let implementors make the call as to which (or both) to optimise,
284 at the *hardware* level.
285
286 Likewise it is well-known that loops involving "0 to 2 times pi", often
287 done in subdivisions of powers of two, are costly to do because they
288 involve floating-point multiplication by PI in each and every loop.
289 3D GPUs solved this by providing SINPI variants which range from 0 to 1
290 and perform the multiply *inside* the hardware itself. In the case of
291 CORDIC, it turns out that the multiply by PI is not even needed (is a
292 loop invariant magic constant).
293
294 However, some markets may not wish to *use* CORDIC, for reasons mentioned
295 above, and, again, one market would be penalised if SINPI was prioritised
296 over SIN, or vice-versa.
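
The benefit of the xxx-PI variants can be sketched in software. A minimal, illustrative Python model (not an IEEE754-complete implementation; `math.sin` stands in for the hardware primitive):

```python
import math

def sinpi(x):
    # Illustrative sinpi: the argument is in units of half-turns, so
    # reduction by fmod(x, 2.0) is *exact* in floating point: no multiply
    # by an inexact PI happens before range reduction.
    r = math.fmod(x, 2.0)
    if r == math.floor(r):
        return 0.0              # exactly zero at every integer input
    return math.sin(math.pi * r)

# Plain SIN must multiply by an inexact PI first, so sin(pi * n) is a
# small nonzero value rather than zero: math.sin(math.pi) != 0.0
```

This is why loops over power-of-two subdivisions of a full turn prefer SINPI: the scaling by PI moves inside the operation, where (as with CORDIC) it can be absorbed at no cost.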
297
298 In essence, then, even when only the two primary markets (3D and
299 Numerical Computation) have been identified, this still leaves three
300 diametrically-opposed *accuracy* sub-markets as the prime
301 conflict drivers:
302
303 * Embedded Ultra Low Power
304 * IEEE754 compliance
305 * Khronos Vulkan compliance
306
307 Thus the best that can be done is to use Quantitative Analysis to work
308 out which "subsets" - sub-Extensions - to include, provide an additional
309 "accuracy" extension, be as "inclusive" as possible, and thus allow
310 implementors to decide what to add to their implementation, and how best
311 to optimise them.
312
313 This approach *only* works due to the uniformity of the function space,
314 and is **not** an appropriate methodology for use in other Extensions
315 with huge (non-uniform) market diversity even with similarly large
316 numbers of potential opcodes. BitManip is the perfect counter-example.
317
318 # Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>
319
320 This list shows the (direct) equivalence between proposed opcodes,
321 their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
322 98% of the opcodes in this proposal that are in the IEEE754-2019 standard
323 are present in the Khronos Extended Instruction Set.
324
325 For RISCV opcode encodings see
326 [[rv_major_opcode_1010011]]
327
328 See
329 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
330 and <https://ieeexplore.ieee.org/document/8766229>
331
332 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
333 use of the "fmt" field that is already present in the RISC-V Specification.
334 * "Native" opcodes are *not* being proposed: implementors will be expected
335 to use the (equivalent) proposed opcode covering the same function.
336 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
337 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
338 vectors (or can be done as scalar operations using other RISC-V instructions).
339
340 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
341 Deviation from conformance with the Khronos Specification - including the
342 Khronos Specification accuracy requirements - is not an option, as it
343 results in non-compliance, and the vendor may not use the Trademarked words
344 "Vulkan" etc. in conjunction with their product.
345
346 IEEE754-2019 Table 9.1 lists "additional mathematical operations".
347 Interestingly, the only functions missing when compared to OpenCL are
348 compound, exp2m1, exp10m1, log2p1 and log10p1.
349
350 [[!table data="""
351 opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
352 FSIN | sin | half\_sin | native\_sin | NONE | sin |
353 FCOS | cos | half\_cos | native\_cos | NONE | cos |
354 FTAN | tan | half\_tan | native\_tan | NONE | tan |
355 NONE (1) | sincos | NONE | NONE | NONE | NONE |
356 FASIN | asin | NONE | NONE | NONE | asin |
357 FACOS | acos | NONE | NONE | NONE | acos |
358 FATAN | atan | NONE | NONE | NONE | atan |
359 FSINPI | sinpi | NONE | NONE | NONE | sinPi |
360 FCOSPI | cospi | NONE | NONE | NONE | cosPi |
361 FTANPI | tanpi | NONE | NONE | NONE | tanPi |
362 FASINPI | asinpi | NONE | NONE | NONE | asinPi |
363 FACOSPI | acospi | NONE | NONE | NONE | acosPi |
364 FATANPI | atanpi | NONE | NONE | NONE | atanPi |
365 FSINH | sinh | NONE | NONE | NONE | sinh |
366 FCOSH | cosh | NONE | NONE | NONE | cosh |
367 FTANH | tanh | NONE | NONE | NONE | tanh |
368 FASINH | asinh | NONE | NONE | NONE | asinh |
369 FACOSH | acosh | NONE | NONE | NONE | acosh |
370 FATANH | atanh | NONE | NONE | NONE | atanh |
371 FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
372 FATAN2PI | atan2pi | NONE | NONE | NONE | atan2Pi |
373 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
374 FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
375 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
376 FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
377 FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
378 FLOG1P | log1p | NONE | NONE | NONE | logp1 |
379 FEXP | exp | half\_exp | native\_exp | NONE | exp |
380 FLOG | log | half\_log | native\_log | NONE | log |
381 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
382 FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
383 FPOW | pow | NONE | NONE | NONE | pow |
384 FPOWN | pown | NONE | NONE | NONE | pown |
385 FPOWR | powr | NONE | NONE | NONE | powr |
386 FROOTN | rootn | NONE | NONE | NONE | rootn |
387 FHYPOT | hypot | NONE | NONE | NONE | hypot |
388 FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
389 NONE | NONE | NONE | NONE | NONE | compound |
390 NONE | NONE | NONE | NONE | NONE | exp2m1 |
391 NONE | NONE | NONE | NONE | NONE | exp10m1 |
392 NONE | NONE | NONE | NONE | NONE | log2p1 |
393 NONE | NONE | NONE | NONE | NONE | log10p1 |
394 """]]
395
396 Note (1) FSINCOS is macro-op fused (see below).
397
398 Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"
399
400 Note (3) synthesised in IEEE754-2019 using "1.0 / x"
401
402 ## List of 2-arg opcodes
403
404 [[!table data="""
405 opcode | Description | pseudocode | Extension |
406 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
407 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
408 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
409 FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
410 FPOWR | x power of y (x +ve) | rd = exp(rs1 log(rs2)) | ZftransAdv |
411 FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
412 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
413 """]]
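
Note that the FHYPOT pseudocode is illustrative only: a real implementation must scale internally to avoid intermediate overflow/underflow. A hedged Python sketch of the difference:

```python
import math

def fhypot_naive(x, y):
    # Direct transliteration of the table's pseudocode: rs1^2 or rs2^2
    # can overflow the format's range even though the true hypotenuse
    # is perfectly representable.
    return math.sqrt(x * x + y * y)

big = 1e200
naive  = fhypot_naive(big, big)   # x*x overflows to +inf
scaled = math.hypot(big, big)     # library hypot scales internally
```

The naive form returns infinity here, while the scaled computation returns the correct value of about 1.414e200.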
414
415 ## List of 1-arg transcendental opcodes
416
417 [[!table data="""
418 opcode | Description | pseudocode | Extension |
419 FRSQRT  | Reciprocal Square-root | rd = 1.0 / sqrt(rs1)   | Zfrsqrt  |
420 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
421 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
422 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
423 FLOG2   | log2                   | rd = log(2, rs1)       | Zftrans  |
424 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
425 FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
426 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
427 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
428 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
429 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
430 """]]
431
432 ## List of 1-arg trigonometric opcodes
433
434 [[!table data="""
435 opcode | Description | pseudo-code | Extension |
436 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
437 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
438 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
439 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
440 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
441 FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
442 FSINPI | sin times pi | rd = sin(pi * rs1) | Ztrigpi |
443 FCOSPI | cos times pi | rd = cos(pi * rs1) | Ztrigpi |
444 FTANPI | tan times pi | rd = tan(pi * rs1) | Ztrigpi |
445 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
446 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
447 FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
448 FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
449 FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
450 FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
451 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
452 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
453 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
454 """]]
455
456 # Subsets
457
458 The full set is based on the Khronos OpenCL opcodes. Implemented in
459 its entirety, it would be too much for both Embedded and 3D.
460
461 The subsets are organised by hardware complexity and need (3D, HPC).
462 However, because synthesis produces inaccurate results at the range
463 limits, the less common subsets are still required for IEEE754 HPC.
464
465 MALI Midgard, an embedded / mobile 3D GPU, for example has only the
466 following opcodes:
467
468 E8 - fatan_pt2
469 F0 - frcp (reciprocal)
470 F2 - frsqrt (inverse square root, 1/sqrt(x))
471 F3 - fsqrt (square root)
472 F4 - fexp2 (2^x)
473 F5 - flog2
474 F6 - fsin1pi
475 F7 - fcos1pi
476 F9 - fatan_pt1
477
478 These are in FP32 and FP16 only: no FP64 hardware, at all.
479
480 Vivante Embedded/Mobile 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has the following:
481
482 sin, cos2pi
483 cos, sin2pi
484 log2, exp
485 sqrt and rsqrt
486 recip.
487
488 It also has fast variants of some of these, as a CSR Mode.
489
490 AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
491 RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:
492
493 COS2PI (appx)
494 EXP2
495 LOG (IEEE754)
496 RECIP
497 RSQRT
498 SQRT
499 SIN2PI (appx)
500
501 AMD RDNA has F16 and F32 variants of all the above, and also has F64
502 variants of SQRT, RSQRT and RECIP. It is interesting that even the
503 modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
504 does.
505
506 Also a general point, that customised optimised hardware targetting
507 FP32 3D with less accuracy simply can neither be used for IEEE754 nor
508 for FP64 (except as a starting point for hardware or software driven
509 Newton Raphson or other iterative method).
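
For example, a single Newton-Raphson step for reciprocal square-root roughly doubles the number of correct bits, which is how a low-accuracy 3D FP32 result can serve as a seed for a higher-precision result. A hedged Python sketch:

```python
import math

def rsqrt_step(x, y):
    # One Newton-Raphson iteration for y ~ 1/sqrt(x):
    #     y' = y * (1.5 - 0.5 * x * y * y)
    # The relative error is squared each step, so the number of
    # correct bits roughly doubles per iteration.
    return y * (1.5 - 0.5 * x * y * y)

x = 2.0
y = 0.7               # crude low-accuracy seed for 1/sqrt(2) ~ 0.7071
for _ in range(3):    # three refinement steps reach near double precision
    y = rsqrt_step(x, y)
```

This is the pattern hinted at above: a fast reduced-accuracy RSQRT estimate, refined by hardware- or software-driven iteration only where full accuracy is actually required.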
510
511 Also in cost/area sensitive applications even the extra ROM lookup tables
512 for certain algorithms may be too costly.
513
514 These wildly differing and incompatible driving factors lead to the
515 subset subdivisions, below.
516
517 ## Transcendental Subsets
518
519 ### Zftrans
520
521 LOG2 EXP2 RECIP RSQRT
522
523 Zftrans contains the minimum standard transcendentals best suited to
524 3D. They are also the minimum subset for synthesising log10, exp10,
525 expm1, log1p, the hyperbolic trigonometric functions (sinh and so on).
526
527 They are therefore considered "base" (essential) transcendentals.
528
529 ### ZftransExt
530
531 LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
532
533 These are extra transcendental functions that are useful, not generally
534 needed for 3D, however for Numerical Computation they may be useful.
535
536 Although they can be synthesised using Zftrans (LOG2 multiplied
537 by a constant), there is both a performance penalty as well as an
538 accuracy penalty towards the limits, which for IEEE754 compliance is
539 unacceptable. In particular, LOG(1+rs1) in hardware may give much better
540 accuracy at the lower end (very small rs1) than LOG(rs1).
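
For illustration, a hedged Python sketch of the Zftrans-only synthesis route (`math.log2` stands in for a hardware FLOG2):

```python
import math

LN2 = math.log(2.0)         # loop-invariant constants for the synthesis
LOG10_2 = math.log10(2.0)

def log_synth(x):
    # Natural log from LOG2 plus one multiply (the Zftrans-only route)
    return math.log2(x) * LN2

def log10_synth(x):
    # Base-10 log the same way; fine in the mid-range, but the extra
    # rounding step costs final-ulp accuracy and a dependent multiply,
    # which is why ZftransExt provides the direct operations.
    return math.log2(x) * LOG10_2
```

Mid-range results agree with a correctly-rounded library to near machine epsilon; the IEEE754-compliance problem arises towards the range limits and in the accumulated final-ulp error.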
541
542 Their forced inclusion would be inappropriate as it would penalise
543 embedded systems with tight power and area budgets. However if they
544 were completely excluded the HPC applications would be penalised on
545 performance and accuracy.
546
547 Therefore they are their own subset extension.
548
549 ### Zfhyp
550
551 SINH, COSH, TANH, ASINH, ACOSH, ATANH
552
553 These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D is limited.
554
555 They can all be synthesised using LOG, SQRT and so on, so depend
556 on Zftrans. However, once again, at the limits of the range, IEEE754
557 compliance becomes impossible, and thus a hardware implementation may
558 be required.
559
560 HPC and high-end GPUs are likely markets for these.
561
562 ### ZftransAdv
563
564 CBRT, POW, POWN, POWR, ROOTN
565
566 These are simply much more complex to implement in hardware, and typically
567 will only be put into HPC applications.
568
569 ### Zfrsqrt
570
571 RSQRT: Reciprocal Square-root, in its own subset as it is desirable
572 on its own to other implementors.
570
571 ## Trigonometric subsets
572
573 ### Ztrigpi vs Ztrignpi
574
575 * **Ztrigpi**: SINPI COSPI TANPI
576 * **Ztrignpi**: SIN COS TAN
577
578 Ztrignpi are the basic trigonometric functions through which all others
579 could be synthesised, and they are typically the base trigonometrics
580 provided by GPUs for 3D, warranting their own subset.
581
582 In the case of the Ztrigpi subset, these are commonly used in for loops
583 with a power of two number of subdivisions, and the cost of multiplying
584 by PI inside each loop (or cumulative addition, resulting in cumulative
585 errors) is not acceptable.
586
587 In for example CORDIC the multiplication by PI may be moved outside of
588 the hardware algorithm as a loop invariant, with no power or area penalty.
589
590 Again, therefore, if SINPI (etc.) were excluded, programmers would be penalised by being forced to divide by PI in some circumstances. Likewise, if SIN were excluded, programmers would be penalised by being forced to *multiply* by PI in some circumstances.
591
592 Thus again, a slightly different application of the same general argument applies to give Ztrignpi and
593 Ztrigpi as subsets. 3D GPUs will almost certainly provide both.
594
595 ### Zarctrigpi and Zarctrignpi
596
597 * **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
598 * **Zarctrignpi**: ATAN2 ACOS ASIN
599
600 These are extra trigonometric functions that are useful in some
601 applications, but even for 3D GPUs (particularly embedded and mobile
602 class GPUs) they are not so common, and so are typically synthesised there.
603
604 Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
605 once again, both a performance penalty as well as an accuracy penalty
606 towards the limits, which for IEEE754 compliance is unacceptable, yet
607 is acceptable for 3D.
608
609 Therefore they are their own subset extensions.
610
611 # Synthesis, Pseudo-code ops and macro-ops
612
613 The pseudo-ops are best left up to the compiler rather than being actual
614 pseudo-ops, by allocating one scalar FP register for use as a constant
615 (loop invariant) set to "1.0" at the beginning of a function or other
616 suitable code block.
617
618 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
619 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
620
621 FATANPI example pseudo-code:
622
623 lui t0, 0x3F800 // upper bits of f32 1.0
624     fmv.w.x ft0, t0    // integer-to-FP move (formerly fmv.s.x)
625 fatan2pi.s rd, rs1, ft0
626
627 Hyperbolic function example (obviates the need for Zfhyp except for
628 high-performance or correctly-rounded implementations):
629
630 ASINH( x ) = ln( x + SQRT(x**2+1))
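
A hedged Python sketch of that synthesis, showing that it matches a correctly-rounded library asinh at moderate arguments but drifts for very small ones (the range-limit problem noted under Zfhyp):

```python
import math

def asinh_synth(x):
    # Direct transliteration of the identity above:
    #     ASINH(x) = ln(x + sqrt(x**2 + 1))
    # For very small x, sqrt(x*x + 1.0) collapses to 1.0 and the
    # subsequent log(1 + x) loses low-order bits of x.
    return math.log(x + math.sqrt(x * x + 1.0))
```

At x = 2.0 the synthesis agrees with `math.asinh` to near machine precision; at x = 1e-10 the relative error is orders of magnitude above 1 ulp, which is why IEEE754-targeting implementations may need dedicated Zfhyp hardware.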
631
632 # Evaluation and commentary
633
634 This section will move later to discussion.
635
636 ## Reciprocal
637
638 Used to be an alias. Some implementors may wish to implement divide as
639 y times recip(x).
640
641 Others may have shared hardware for recip and divide, others may not.
642
643 To avoid penalising one implementor over another, recip stays.
644
645 ## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
646
647 RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
648 Research is needed to ensure that implementors are not compromised by such
649 a decision:
650 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
651
652 > > correctly-rounded LOG will return different results than LOGP1 and ADD.
653 > > Likewise for EXP and EXPM1
654
655 > ok, they stay in as real opcodes, then.
656
657 ## ATAN / ATAN2 commentary
658
659 Discussion starts here:
660 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
661
662 from Mitch Alsup:
663
664 would like to point out that the general implementations of ATAN2 do a
665 bunch of special case checks and then simply call ATAN.
666
667 double ATAN2( double y, double x )
668 { // IEEE 754-2008 quality ATAN2
669
670 // deal with NANs
671 if( ISNAN( x ) ) return x;
672 if( ISNAN( y ) ) return y;
673
674 // deal with infinities
675 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
676 if( x == +∞ ) return copysign( 0.0, y );
677 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
678 if( x == -∞ ) return copysign( π, y );
679 if( |y|== +∞ ) return copysign( π/2, y );
680
681 // deal with signed zeros
682 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
683 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
684 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
685
686 // calculate ATAN2 textbook style
687 if( x > 0.0 ) return ATAN( |y / x| );
688 if( x < 0.0 ) return π - ATAN( |y / x| );
689 }
690
691
692 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
693 a constant and then call/use ATAN2.
694
695 When one considers an implementation of ATAN, one must consider several
696 ranges of evaluation::
697
698     x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
699 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
700 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
701
702 I should point out that the add/sub of π/2 can not lose significance
703 since the result of ATAN(1/x) is bounded 0..π/2
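
These reductions can be checked numerically. A hedged Python sketch (illustrative only; `math.atan` stands in for a primitive that need only cover the bounded range):

```python
import math

def atan_by_reduction(x):
    # atan evaluated only on [-1, 1), with the reciprocal identities
    # handling the outer ranges:
    #   x <= -1 : atan(x) = -pi/2 - atan(1/x)
    #   x >= +1 : atan(x) = +pi/2 - atan(1/x)
    # The pi/2 add/sub cannot lose significance, since atan(1/x) is
    # bounded by pi/4 in magnitude on these ranges.
    if x <= -1.0:
        return -math.pi / 2.0 - math.atan(1.0 / x)
    if x >= 1.0:
        return math.pi / 2.0 - math.atan(1.0 / x)
    return math.atan(x)
```

Evaluating across all three ranges reproduces the full-range atan to within a couple of ulps, confirming that a bounded-range primitive plus this reduction suffices.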
704
705 The bottom line is that I think you are choosing to make too many of
706 these into OpCodes, making the hardware function/calculation unit (and
707 sequencer) more complicated than necessary.
708
709 --------------------------------------------------------
710
711 I think we therefore have a case for bringing back ATAN and including ATAN2.
712
713 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
714
715 (that is the hypothesis, to be evaluated for correctness. feedback requested).
716
717 This is because we cannot compromise or prioritise one platform's
718 speed/accuracy over another. That is not reasonable or desirable: it
719 would penalise one implementor over another.
720
721 Thus, all implementors, to keep interoperability, must have both
722 opcodes, and may choose, at the architectural and routing level, which
723 one to implement in terms of the other.
724
725 Allowing implementors to choose to add either opcode and let traps sort it
726 out leaves an uncertainty in the software developer's mind: they cannot
727 trust the hardware, available from many vendors, to be performant right
728 across the board.
729
730 Standards are a pig.
731
732 ---
733
734 I might suggest that if there were a way for a calculation to be performed
735 and the result of that calculation chained to a subsequent calculation
736 such that the precision of the result-becomes-operand is wider than
737 what will fit in a register, then you can dramatically reduce the count
738 of instructions in this category while retaining
739
740 acceptable accuracy:
741
742 z = x / y
743
744 can be calculated as::
745
746 z = x * (1/y)
747
748 Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
749 accurate, but GPUs want speed and
750
751 1/y is fully pipelined (F32) while x/y cannot be (at reasonable area). It
752 is also not "that inaccurate" displaying 0.625-to-0.52 ULP.
753
754 Given that one has the ability to carry (and process) more fraction bits,
755 one can then do high precision multiplies of π or other transcendental
756 radixes.
757
758 And GPUs have been doing this almost since the dawn of 3D.
759
760 // calculate ATAN2 high performance style
761 // Note: at this point x != y
762 //
763 if( x > 0.0 )
764 {
765 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
766 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
767 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
768 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
769 }
770 if( x < 0.0 )
771 {
772 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
773 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
774 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
775 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
776 }
777
778 This way the adds and subtracts from the constant are not in a precision
779 precarious position.