# Transcendental operations

To be updated to OpenPOWER.

Summary:

*This proposal extends OpenPOWER scalar floating point operations to
add IEEE754 transcendental functions (pow, log etc) and trigonometric
functions (sin, cos etc). These functions are also 98% shared with the
Khronos Group OpenCL Extended Instruction Set.*

With thanks to:

* Jacob Lifshay
* Dan Petroski
* Mitch Alsup
* Allen Baum
* Andrew Waterman
* Luis Vitorio Cargnini

[[!toc levels=2]]

See:

* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[rv_major_opcode_1010011]] for opcode listing.
* [[zfpacc_proposal]] for accuracy settings proposal

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi: sinpi cospi tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin cos tan
* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: Reciprocal square-root.

Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.

Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi,
Zftrans, with Ztrigpi as an augmentation.

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse needs,
allowing implementors from all of them to benefit from the tools and hardware
cost reductions associated with common standards adoption in RISC-V
(primarily IEEE754 and Vulkan).

**There are *four* different, disparate platforms' needs (two new)**:

* 3D Embedded Platform (new)
* Embedded Platform
* 3D UNIX Platform (new)
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Although approximations suffice in this field (making it more likely
to use a custom extension), high-end ML would inherently be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* Ultra Low Power Embedded platforms (smart watches) are sufficiently
  resource constrained that Vectorisation (of any kind) is likely to be
  unnecessary and inappropriate.
* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware driven by *hardware* centric
  optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however the fact that they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal means that they have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.

# Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes,
their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
98% of the opcodes in this proposal that are in the IEEE754-2019 standard
are present in the Khronos Extended Instruction Set.

For RISC-V opcode encodings see [[rv_major_opcode_1010011]]
(**TODO**: replace with OpenPOWER encodings).

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
and <https://ieeexplore.ieee.org/document/8766229>

* Special FP16 opcodes are *not* being proposed, except by indirect / inherent
  use of the "fmt" field that is already present in the RISC-V Specification.
* "Native" opcodes are *not* being proposed: implementors will be expected
  to use the (equivalent) proposed opcode covering the same function.
* "Fast" opcodes are *not* being proposed, because the Khronos Specification
  fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
  vectors (or can be done as scalar operations using other RISC-V instructions).

The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option, as it
results in non-compliance, and the vendor may not use the Trademarked words
"Vulkan" etc. in conjunction with their product.

IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions missing when compared to OpenCL are
compound, exp2m1, exp10m1, log2p1, log10p1, pown (integer power) and powr.

[[!table data="""
opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 | Power ISA |
FSIN | sin | half\_sin | native\_sin | NONE | sin | |
FCOS | cos | half\_cos | native\_cos | NONE | cos | |
FTAN | tan | half\_tan | native\_tan | NONE | tan | |
NONE (1) | sincos | NONE | NONE | NONE | NONE | |
FASIN | asin | NONE | NONE | NONE | asin | |
FACOS | acos | NONE | NONE | NONE | acos | |
FATAN | atan | NONE | NONE | NONE | atan | |
FSINPI | sinpi | NONE | NONE | NONE | sinPi | |
FCOSPI | cospi | NONE | NONE | NONE | cosPi | |
FTANPI | tanpi | NONE | NONE | NONE | tanPi | |
FASINPI | asinpi | NONE | NONE | NONE | asinPi | |
FACOSPI | acospi | NONE | NONE | NONE | acosPi | |
FATANPI | atanpi | NONE | NONE | NONE | atanPi | |
FSINH | sinh | NONE | NONE | NONE | sinh | |
FCOSH | cosh | NONE | NONE | NONE | cosh | |
FTANH | tanh | NONE | NONE | NONE | tanh | |
FASINH | asinh | NONE | NONE | NONE | asinh | |
FACOSH | acosh | NONE | NONE | NONE | acosh | |
FATANH | atanh | NONE | NONE | NONE | atanh | |
FATAN2 | atan2 | NONE | NONE | NONE | atan2 | |
FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi | |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt | frsqrte, frsqrtes (4) |
FCBRT | cbrt | NONE | NONE | NONE | NONE (2) | |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 | |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 | |
FEXPM1 | expm1 | NONE | NONE | NONE | expm1 | |
FLOG1P | log1p | NONE | NONE | NONE | logp1 | |
FEXP | exp | half\_exp | native\_exp | NONE | exp | |
FLOG | log | half\_log | native\_log | NONE | log | |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 | |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 | |
FPOW | pow | NONE | NONE | NONE | pow | |
FPOWN | pown | NONE | NONE | NONE | pown | |
FPOWR | powr | half\_powr | native\_powr | NONE | powr | |
FROOTN | rootn | NONE | NONE | NONE | rootn | |
FHYPOT | hypot | NONE | NONE | NONE | hypot | |
FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) | fre, fres (4) |
NONE | NONE | NONE | NONE | NONE | compound | |
NONE | NONE | NONE | NONE | NONE | exp2m1 | |
NONE | NONE | NONE | NONE | NONE | exp10m1 | |
NONE | NONE | NONE | NONE | NONE | log2p1 | |
NONE | NONE | NONE | NONE | NONE | log10p1 | |
"""]]

Note (1) FSINCOS is macro-op fused (see below).

Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"

Note (3) synthesised in IEEE754-2019 using "1.0 / x"

Note (4) these are estimate opcodes that help accelerate
software emulation.

## List of 2-arg opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
FROOTN | x power 1/n (n integer) | rd = pow(rs1, 1/rs2) | ZftransAdv |
FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
"""]]
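
A minimal C sketch of the pseudocode column above, expressed with libm
calls (the function names here are purely illustrative and not part of
the proposal):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    double fatan2_op  (double x, double y) { return atan2(y, x); }
    double fatan2pi_op(double x, double y) { return atan2(y, x) / M_PI; }
    double fpow_op    (double x, double y) { return pow(x, y); }
    double fpowr_op   (double x, double y) { return exp(y * log(x)); } /* x must be +ve */
    double frootn_op  (double x, int n)    { return pow(x, 1.0 / n); }
    double fhypot_op  (double x, double y) { return sqrt(x*x + y*y); } /* cf. libm hypot() */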

## List of 1-arg transcendental opcodes

[[!table data="""
opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log base 2 | rd = log(2, rs1) | Zftrans |
FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
FLOG1P | log of 1 plus x | rd = log(e, 1 + rs1) | ZftransExt |
FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
"""]]
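
The same semantics expressed against the C99 libm, as a quick
cross-reference (again, the function names are illustrative only):

    #include <math.h>

    double frsqrt_op(double x) { return 1.0 / sqrt(x); }
    double fexp2_op (double x) { return exp2(x); }
    double flog2_op (double x) { return log2(x); }
    double fexpm1_op(double x) { return expm1(x); }  /* accurate for tiny x */
    double flog1p_op(double x) { return log1p(x); }  /* accurate for tiny x */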

## List of 1-arg trigonometric opcodes

[[!table data="""
opcode | Description | pseudo-code | Extension |
FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan | rd = tanh(rs1) | Zfhyp |
FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
"""]]
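
The pi-scaled and non-pi-scaled variants are related as in the following
C sketch (illustrative only: a hardware implementation would typically
fold the factor of pi into its internal range reduction rather than
multiplying at FP32/FP64 precision, as discussed under the Ztrigpi
subset below):

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    double fsinpi_op (double x) { return sin(M_PI * x); }   /* FSINPI  */
    double fcospi_op (double x) { return cos(M_PI * x); }   /* FCOSPI  */
    double fasinpi_op(double x) { return asin(x) / M_PI; }  /* FASINPI */
    double fatanpi_op(double x) { return atan(x) / M_PI; }  /* FATANPI */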

# Subsets

The full set is based on the Khronos OpenCL opcodes. If implemented
entirely it would be too much for both Embedded and 3D.

The subsets are organised by hardware complexity and need (3D, HPC).
However, because synthesis produces inaccurate results at the range
limits, the less common subsets are still required for IEEE754 HPC.

MALI Midgard, an embedded/mobile 3D GPU, for example has only the
following opcodes:

    E8 - fatan_pt2
    F0 - frcp (reciprocal)
    F2 - frsqrt (inverse square root, 1/sqrt(x))
    F3 - fsqrt (square root)
    F4 - fexp2 (2^x)
    F5 - flog2
    F6 - fsin1pi
    F7 - fcos1pi
    F9 - fatan_pt1

These are in FP32 and FP16 only: no FP64 hardware, at all.

Vivante Embedded/Mobile 3D (etnaviv
<https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>)
has only the following:

    sin, cos2pi
    cos, sin2pi
    log2, exp
    sqrt and rsqrt
    recip.

It also has fast variants of some of these, as a CSR Mode.

AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:

    COS2PI (appx)
    EXP2
    LOG (IEEE754)
    RECIP
    RSQRT
    SQRT
    SIN2PI (appx)

AMD RDNA has F16 and F32 variants of all the above, and also has F64
variants of SQRT, RSQRT and RECIP. It is interesting that even the
modern high-end AMD GPU does not have TAN or ATAN, whereas MALI Midgard
does.

As a general point: customised, optimised hardware targeting FP32 3D
with reduced accuracy can be used neither for IEEE754 nor for FP64
(except as a starting point for a hardware- or software-driven
Newton-Raphson or other iterative method).
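
As an illustration of that kind of iterative refinement, here is a
single Newton-Raphson step applied to a low-accuracy reciprocal-square-root
estimate (a generic sketch, not any particular vendor's implementation):

    /* One Newton-Raphson step: y' = y * (1.5 - 0.5 * x * y * y)
     * approximately doubles the number of correct bits in an
     * estimate y ~= 1/sqrt(x).
     */
    float rsqrt_refine(float x, float y_est)
    {
        return y_est * (1.5f - 0.5f * x * y_est * y_est);
    }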

Also, in cost/area-sensitive applications, even the extra ROM lookup
tables required by certain algorithms may be too costly.

These wildly differing and incompatible driving factors lead to the
subset subdivisions, below.

## Transcendental Subsets

### Zftrans

LOG2 EXP2 RECIP RSQRT

Zftrans contains the minimum standard transcendentals best suited to
3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic functions (sinh and so on).

They are therefore considered "base" (essential) transcendentals.
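
As an indication of why these four suffice as a base, a C sketch of
synthesising log10 and exp10 from LOG2 and EXP2 with pre-computed
constants (subject to the accuracy caveats in the next subsection):

    #include <math.h>

    #define LOG2_10  3.3219280948873623      /* log2(10)              */
    #define LOG10_2  0.30102999566398120     /* log10(2) = 1/log2(10) */

    double log10_synth(double x) { return log2(x) * LOG10_2; }
    double exp10_synth(double x) { return exp2(x * LOG2_10); }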

### ZftransExt

LOG, EXP, EXP10, LOG10, LOG1P, EXPM1

These are extra transcendental functions that are not generally needed
for 3D; however for Numerical Computation they may be useful.

Although they can be synthesised using Zftrans (LOG2 multiplied
by a constant), there is both a performance penalty as well as an
accuracy penalty towards the limits, which for IEEE754 compliance is
unacceptable. In particular, a dedicated LOG1P in hardware gives much
better accuracy at the lower end (very small rs1) than computing
LOG(1.0 + rs1).
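
A small C illustration of that accuracy point: for very small rs1,
adding 1.0 first discards almost all of the significant bits of rs1,
whereas a dedicated log1p does not:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x = 1e-18;
        /* 1.0 + 1e-18 rounds to exactly 1.0 in double precision, so
         * the synthesised form returns 0 instead of ~1e-18.         */
        printf("log(1.0 + x) = %.17g\n", log(1.0 + x));
        printf("log1p(x)     = %.17g\n", log1p(x));
        return 0;
    }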

Their forced inclusion would be inappropriate as it would penalise
embedded systems with tight power and area budgets. However, if they
were completely excluded, HPC applications would be penalised on
performance and accuracy.

Therefore they are their own subset extension.

### Zfhyp

SINH, COSH, TANH, ASINH, ACOSH, ATANH

These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D
is limited.

They can all be synthesised using LOG, SQRT and so on, so they depend
on Zftrans. However, once again, at the limits of the range, IEEE754
compliance becomes impossible, and thus a hardware implementation may
be required.

HPC and high-end GPUs are likely markets for these.

### ZftransAdv

CBRT, POW, POWN, POWR, ROOTN

These are simply much more complex to implement in hardware, and
typically will only be put into HPC applications.

### Zfrsqrt

RSQRT

Reciprocal square-root is kept in its own separate subset, as it is
desirable on its own by other implementors (see the TODO list, above);
this is to be evaluated.

## Trigonometric subsets

### Ztrigpi vs Ztrignpi

* **Ztrigpi**: SINPI COSPI TANPI
* **Ztrignpi**: SIN COS TAN

Ztrignpi are the basic trigonometric functions through which all others
could be synthesised, and they are typically the base trigonometrics
provided by GPUs for 3D, warranting their own subset.

In the case of the Ztrigpi subset, these are commonly used in for-loops
with a power-of-two number of subdivisions, where the cost of multiplying
by PI inside each loop (or of cumulative addition, resulting in cumulative
errors) is not acceptable.
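
A C sketch of the kind of loop in question, with a power-of-two number
of subdivisions (the sinpi helper below is merely a stand-in for the
proposed FSINPI opcode); the loop argument remains an exact dyadic
fraction and no multiplication by PI happens inside the loop:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* illustrative stand-in for FSINPI */
    static inline double sinpi(double x) { return sin(M_PI * x); }

    void fill_sine_table(double *out, int n)   /* n is a power of two */
    {
        for (int i = 0; i < n; i++)
            out[i] = sinpi(2.0 * i / n);       /* one full period, 0..2 */
    }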

In CORDIC, for example, the multiplication by PI may be moved outside of
the hardware algorithm as a loop invariant, with no power or area penalty.

Again, therefore, if SINPI (etc.) were excluded, programmers would be
penalised by being forced to multiply by PI in some circumstances.
Likewise, if SIN were excluded, programmers would be penalised by being
forced to *divide* by PI in some circumstances.

Thus again, a slightly different application of the same general argument
applies to give Ztrignpi and Ztrigpi as subsets. 3D GPUs will almost
certainly provide both.

### Zarctrigpi and Zarctrignpi

* **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
* **Zarctrignpi**: ATAN2 ACOS ASIN

These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs, particularly embedded and mobile
class GPUs, they are not so common and so are typically synthesised
there.

Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
once again, both a performance penalty as well as an accuracy penalty
towards the limits, which for IEEE754 compliance is unacceptable, yet
is acceptable for 3D.

Therefore they are their own subset extensions.

# Synthesis, Pseudo-code ops and macro-ops

The pseudo-ops are best left up to the compiler rather than being actual
pseudo-ops, by allocating one scalar FP register for use as a constant
(loop invariant) set to "1.0" at the beginning of a function or other
suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).

FATANPI example pseudo-code:

    lui     t0, 0x3F800       // upper bits of f32 1.0
    fmv.w.x ft0, t0           // move to FP register (formerly fmv.s.x)
    fatan2pi.s rd, rs1, ft0

Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correctly-rounded implementations):

    ASINH( x ) = ln( x + SQRT(x**2+1))
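
The same identity as a C sketch, usable where Zfhyp is not implemented
(subject to the accuracy limits noted above):

    #include <math.h>

    double asinh_synth(double x)
    {
        return log(x + sqrt(x * x + 1.0));   /* ASINH(x) = ln(x + sqrt(x^2 + 1)) */
    }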

# Evaluation and commentary

This section will move later to discussion.

## Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement
divide as y times recip(x).

Others may have shared hardware for recip and divide; others may not.

To avoid penalising one implementor over another, recip stays.
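
A C sketch of the divide-as-multiply-by-reciprocal approach: an estimate
is refined by one Newton-Raphson step and then multiplied, mirroring the
estimate-style Power ISA fre/fres opcodes noted in the table above (the
function itself is illustrative only):

    /* One Newton-Raphson refinement of a reciprocal estimate
     * r ~= 1/x:   r' = r * (2 - x * r)
     */
    float div_via_recip(float y, float x, float r_est)
    {
        float r = r_est * (2.0f - x * r_est);   /* refine estimate of 1/x */
        return y * r;                           /* y / x  =  y * (1/x)    */
    }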

## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

The RISC principle says "exclude LOG because it's covered by LOG1P plus
an ADD". Research is needed to ensure that implementors are not
compromised by such a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOGP1 and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

## ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

from Mitch Alsup:

would like to point out that the general implementations of ATAN2 do a
bunch of special case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    { // IEEE 754-2008 quality ATAN2

        // deal with NANs
        if( ISNAN( x ) ) return x;
        if( ISNAN( y ) ) return y;

        // deal with infinities
        if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
        if( x == +∞ ) return copysign( 0.0, y );
        if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
        if( x == -∞ ) return copysign( π, y );
        if( |y|== +∞ ) return copysign( π/2, y );

        // deal with signed zeros
        if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
        if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
        if( x <=-0.0 && y == 0.0 ) return copysign( π, y );

        // calculate ATAN2 textbook style
        if( x > 0.0 ) return ATAN( |y / x| );
        if( x < 0.0 ) return π - ATAN( |y / x| );
    }

Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation::

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 can not lose significance
since the result of ATAN(1/x) is bounded 0..π/2

The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.

--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would
do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
way round.

(That is the hypothesis, to be evaluated for correctness; feedback requested.)

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another. It is not reasonable or desirable to
penalise one implementor over another.

Thus, all implementors, to keep interoperability, must have both
opcodes, and may choose, at the architectural and routing level, which
one to implement in terms of the other.

Allowing implementors to choose to add either opcode and let traps sort it
out leaves an uncertainty in the software developer's mind: they cannot
trust the hardware, available from many vendors, to be performant right
across the board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:

    z = x / y

can be calculated as::

    z = x * (1/y)

Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.