# Transcendental operations

Summary:

*This proposal extends Power ISA scalar floating point operations to
add IEEE754 transcendental functions (pow, log etc) and trigonometric
functions (sin, cos etc). These functions are also 98% shared with the
Khronos Group OpenCL Extended Instruction Set.*

With thanks to:

* Jacob Lifshay
* Dan Petroski
* Mitch Alsup
* Allen Baum
* Andrew Waterman
* Luis Vitorio Cargnini

[[!toc levels=2]]

See:

* <http://bugs.libre-soc.org/show_bug.cgi?id=127>
* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
* [[power_trans_ops]] for opcode listing.

Extension subsets:

* **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
  can be synthesised using Zftrans)
* **Ztrigpi**: trig. xxx-pi: sinpi cospi tanpi
* **Ztrignpi**: trig. non-xxx-pi: sin cos tan
* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
* **Zfhyp**: hyperbolic/inverse-hyperbolic: sinh, cosh, tanh, asinh,
  acosh, atanh (can be synthesised - see below)
* **ZftransAdv**: much more complex to implement in hardware
* **Zfrsqrt**: Reciprocal square-root.

Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.

Minimum recommended requirements for Mobile-Embedded 3D:
Ztrignpi, Zftrans, with Ztrigpi as an augmentation.

# TODO:

* Decision on accuracy, moved to [[zfpacc_proposal]]
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
* Errors **MUST** be repeatable.
* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
  <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
  Accuracy requirements for dual (triple) purpose implementations must
  meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
  it is desirable on its own by other implementors. This is to be evaluated.

# Requirements <a name="requirements"></a>

This proposal is designed to meet a wide range of extremely diverse
needs, allowing implementors from all of them to benefit from the tools
and hardware cost reductions associated with common standards adoption
in Power ISA (primarily IEEE754 and Vulkan).

**There are *four* different, disparate platforms' needs (two new)**:

* 3D Embedded Platform (new)
* Embedded Platform
* 3D UNIX Platform (new)
* UNIX Platform

**The use-cases are**:

* 3D GPUs
* Numerical Computation
* (Potentially) A.I. / Machine-learning (1)

(1) Approximations suffice in this field, making it more likely to use
a custom extension; high-end ML would in any case be excluded.

**The power and die-area requirements vary from**:

* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
* Mobile-Embedded (good performance with high efficiency for battery life)
* Desktop Computing
* Server / HPC (2)

(2) Supercomputing is left out of the requirements as it is traditionally
covered by Supercomputer Vectorisation Standards (such as RVV).

**The software requirements are**:

* Full public integration into GNU math libraries (libm)
* Full public integration into well-known Numerical Computation systems (numpy)
* Full public integration into upstream GNU and LLVM Compiler toolchains
* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
  seeking public Certification and Endorsement from the Khronos Group
  under their Trademarked Certification Programme.

**The "contra"-requirements are**:

* Ultra Low Power Embedded platforms (smart watches) are sufficiently
  resource constrained that Vectorisation (of any kind) is likely to be
  unnecessary and inappropriate.
* The requirements are **not** for the purposes of developing a full custom
  proprietary GPU with proprietary firmware driven by *hardware*-centric
  optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
  this proposal; however, because they typically develop proprietary
  software that is not shared with the rest of the community likely to
  use this proposal, their needs are completely different.
* This proposal is for *sharing* of effort in reducing development costs.

# Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>

This list shows the (direct) equivalence between proposed opcodes,
their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
98% of the opcodes in this proposal that are in the IEEE754-2019 standard
are present in the Khronos Extended Instruction Set.

For the Power ISA opcode encodings see [[power_trans_ops]].

See
<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
and <https://ieeexplore.ieee.org/document/8766229>

* Special FP16 opcodes are *not* being proposed, except by indirect / inherent
  use of the elwidth overrides that are already present in the SVP64
  Specification.
* "Native" opcodes are *not* being proposed: implementors will be expected
  to use the (equivalent) proposed opcode covering the same function.
* "Fast" opcodes are *not* being proposed, because the Khronos Specification
  fast\_length, fast\_normalize and fast\_distance OpenCL opcodes require
  vectors (or can be done as scalar operations using other Power ISA
  instructions).

The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
Deviation from conformance with the Khronos Specification - including the
Khronos Specification accuracy requirements - is not an option, as it
results in non-compliance, and the vendor may not use the Trademarked words
"Vulkan" etc. in conjunction with their product.

IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions in it missing from OpenCL are
compound, exp2m1, exp10m1, log2p1 and log10p1.
[[!table  data="""
opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 | Power ISA |
FSIN | sin | half\_sin | native\_sin | NONE | sin | NONE |
FCOS | cos | half\_cos | native\_cos | NONE | cos | NONE |
FTAN | tan | half\_tan | native\_tan | NONE | tan | NONE |
NONE (1) | sincos | NONE | NONE | NONE | NONE | NONE |
FASIN | asin | NONE | NONE | NONE | asin | NONE |
FACOS | acos | NONE | NONE | NONE | acos | NONE |
FATAN | atan | NONE | NONE | NONE | atan | NONE |
FSINPI | sinpi | NONE | NONE | NONE | sinPi | NONE |
FCOSPI | cospi | NONE | NONE | NONE | cosPi | NONE |
FTANPI | tanpi | NONE | NONE | NONE | tanPi | NONE |
FASINPI | asinpi | NONE | NONE | NONE | asinPi | NONE |
FACOSPI | acospi | NONE | NONE | NONE | acosPi | NONE |
FATANPI | atanpi | NONE | NONE | NONE | atanPi | NONE |
FSINH | sinh | NONE | NONE | NONE | sinh | NONE |
FCOSH | cosh | NONE | NONE | NONE | cosh | NONE |
FTANH | tanh | NONE | NONE | NONE | tanh | NONE |
FASINH | asinh | NONE | NONE | NONE | asinh | NONE |
FACOSH | acosh | NONE | NONE | NONE | acosh | NONE |
FATANH | atanh | NONE | NONE | NONE | atanh | NONE |
FATAN2 | atan2 | NONE | NONE | NONE | atan2 | NONE |
FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi | NONE |
FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt | frsqrte, frsqrtes (4) |
FCBRT | cbrt | NONE | NONE | NONE | NONE (2) | NONE |
FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 | NONE |
FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 | NONE |
FEXPM1 | expm1 | NONE | NONE | NONE | expm1 | NONE |
FLOG1P | log1p | NONE | NONE | NONE | logp1 | NONE |
FEXP | exp | half\_exp | native\_exp | NONE | exp | NONE |
FLOG | log | half\_log | native\_log | NONE | log | NONE |
FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 | NONE |
FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 | NONE |
FPOW | pow | NONE | NONE | NONE | pow | NONE |
FPOWN | pown | NONE | NONE | NONE | pown | NONE |
FPOWR | powr | half\_powr | native\_powr | NONE | powr | NONE |
FROOTN | rootn | NONE | NONE | NONE | rootn | NONE |
FHYPOT | hypot | NONE | NONE | NONE | hypot | NONE |
FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) | fre, fres (4) |
NONE | NONE | NONE | NONE | NONE | compound | NONE |
NONE | NONE | NONE | NONE | NONE | exp2m1 | NONE |
NONE | NONE | NONE | NONE | NONE | exp10m1 | NONE |
NONE | NONE | NONE | NONE | NONE | log2p1 | NONE |
NONE | NONE | NONE | NONE | NONE | log10p1 | NONE |
"""]]

Note (1) FSINCOS is macro-op fused (see below).

Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"

Note (3) synthesised in IEEE754-2019 using "1.0 / x"

Note (4) these are estimate opcodes that help accelerate
software emulation

## List of 2-arg opcodes

| opcode | Description | pseudocode | Extension |
| ------ | ---------------- | ---------------- | ----------- |
| FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
| FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
| FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
| FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
| FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
| FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
| FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |

## List of 1-arg transcendental opcodes

| opcode | Description | pseudocode | Extension |
| ------ | ---------------- | ---------------- | ----------- |
| FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
| FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
| FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
| FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
| FLOG2 | log base 2 | rd = log(2, rs1) | Zftrans |
| FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
| FLOG1P | log of 1 plus x | rd = log(e, 1 + rs1) | ZftransExt |
| FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
| FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
| FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
| FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |

## List of 1-arg trigonometric opcodes

| opcode | Description | pseudocode | Extension |
| ------ | ---------------- | ---------------- | ----------- |
| FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
| FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
| FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
| FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
| FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
| FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
| FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
| FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
| FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
| FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
| FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
| FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
| FSINH | hyperbolic sin | rd = sinh(rs1) | Zfhyp |
| FCOSH | hyperbolic cos | rd = cosh(rs1) | Zfhyp |
| FTANH | hyperbolic tan | rd = tanh(rs1) | Zfhyp |
| FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
| FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
| FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |

# Subsets

The full set is based on the Khronos OpenCL opcodes. If implemented
entirely it would be too much for both Embedded and also 3D.

The subsets are organised by hardware complexity and need (3D, HPC);
however, because synthesis produces inaccurate results at the range
limits, the less common subsets are still required for IEEE754 HPC.

MALI Midgard, an embedded / mobile 3D GPU, for example has only the
following opcodes:

    E8 - fatan_pt2
    F0 - frcp (reciprocal)
    F2 - frsqrt (inverse square root, 1/sqrt(x))
    F3 - fsqrt (square root)
    F4 - fexp2 (2^x)
    F5 - flog2
    F6 - fsin1pi
    F7 - fcos1pi
    F9 - fatan_pt1

These are in FP32 and FP16 only: there is no FP64 hardware at all.

Vivante Embedded/Mobile 3D (etnaviv
<https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>)
only has the following:

    sin, cos2pi
    cos, sin2pi
    log2, exp
    sqrt and rsqrt
    recip.

It also has fast variants of some of these, as a CSR Mode.

AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:

    COS2PI (appx)
    EXP2
    LOG (IEEE754)
    RECIP
    RSQRT
    SQRT
    SIN2PI (appx)

AMD RDNA has F16 and F32 variants of all the above, and also has F64
variants of SQRT, RSQRT and RECIP. It is interesting that even the
modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
does.

As a general point, customised, optimised hardware targeting FP32 3D
with reduced accuracy can be used neither for IEEE754 nor for FP64
(except as a starting point for hardware- or software-driven
Newton-Raphson or other iterative methods).
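
To illustrate the last point, Newton-Raphson refinement is typically
layered on top of estimate-style hardware to recover IEEE754-class
accuracy. A minimal C sketch, where the crude bit-manipulation starting
guess merely stands in for a hardware estimate opcode (such as the Power
ISA frsqrtes listed in the table above):

    #include <stdint.h>
    #include <string.h>

    /* Crude reciprocal-square-root starting guess, standing in for a
     * hardware estimate opcode: only a handful of bits are correct. */
    static float rsqrt_estimate(float x)
    {
        uint32_t i;
        memcpy(&i, &x, sizeof i);
        i = 0x5f3759df - (i >> 1);
        float y;
        memcpy(&y, &i, sizeof y);
        return y;
    }

    /* Each Newton-Raphson step roughly doubles the number of correct
     * bits: y' = y * (1.5 - 0.5 * x * y * y) converges to 1/sqrt(x). */
    static float rsqrt_refined(float x)
    {
        float y = rsqrt_estimate(x);
        y = y * (1.5f - 0.5f * x * y * y);   /* first refinement step  */
        y = y * (1.5f - 0.5f * x * y * y);   /* second refinement step */
        return y;
    }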

Also, in cost/area-sensitive applications even the extra ROM lookup tables
for certain algorithms may be too costly.

These wildly differing and incompatible driving factors lead to the
subset subdivisions, below.

## Transcendental Subsets

### Zftrans

LOG2 EXP2 RECIP RSQRT

Zftrans contains the minimum standard transcendentals best suited to
3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic trigonometric functions sinh and so on.

They are therefore considered "base" (essential) transcendentals.

### ZftransExt

LOG, EXP, EXP10, LOG10, LOG1P, EXPM1

These are extra transcendental functions that are not generally needed
for 3D; however, for Numerical Computation they may be useful.

Although they can be synthesised using Zftrans (LOG2 multiplied
by a constant), there is both a performance penalty as well as an
accuracy penalty towards the limits, which for IEEE754 compliance is
unacceptable. In particular, LOG(1+rs1) in hardware may give much better
accuracy at the lower end (very small rs1) than LOG(rs1).

Their forced inclusion would be inappropriate as it would penalise
embedded systems with tight power and area budgets. However, if they
were completely excluded, HPC applications would be penalised on
performance and accuracy.

Therefore they are their own subset extension.
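
A short C illustration of the accuracy point above: for very small x,
computing log(1.0 + x) directly loses all significance, because 1.0 + x
rounds the low-order bits of x away, whereas a dedicated log1p (FLOG1P)
preserves them:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1e-18;
        /* 1.0 + 1e-18 rounds to exactly 1.0 in binary64, so the naive
         * form returns 0.0: all significance is lost. */
        printf("log(1 + x): %.17g\n", log(1.0 + x));
        /* log1p keeps the low-order bits of x: the result is close to
         * x itself, since log(1+x) ~= x for tiny x. */
        printf("log1p(x)  : %.17g\n", log1p(x));
        return 0;
    }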

### Zfhyp

SINH, COSH, TANH, ASINH, ACOSH, ATANH

These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D
is limited.

They can all be synthesised using LOG, SQRT and so on, so depend
on Zftrans. However, once again, at the limits of the range, IEEE754
compliance becomes impossible, and thus a hardware implementation may
be required.

HPC and high-end GPUs are likely markets for these.

### ZftransAdv

CBRT, POW, POWN, POWR, ROOTN

These are simply much more complex to implement in hardware, and typically
will only be put into HPC applications.

### Zfrsqrt

RSQRT

Reciprocal square-root, in its own separate subset (see TODO, above) as
it is desirable on its own by other implementors.

## Trigonometric subsets

### Ztrigpi vs Ztrignpi

* **Ztrigpi**: SINPI COSPI TANPI
* **Ztrignpi**: SIN COS TAN

Ztrignpi are the basic trigonometric functions through which all others
could be synthesised, and they are typically the base trigonometrics
provided by GPUs for 3D, warranting their own subset.

In the case of the Ztrigpi subset, these are commonly used in for-loops
with a power-of-two number of subdivisions, and the cost of multiplying
by PI inside each loop (or cumulative addition, resulting in cumulative
errors) is not acceptable.
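
A C sketch of the kind of loop in question, with a sinpi() helper
standing in for the proposed FSINPI opcode (the reference model shown
here multiplies by PI internally, which is exactly what the hardware
opcode avoids):

    #include <math.h>

    /* Reference model: sinpi(x) = sin(pi * x).  Stands in for the
     * proposed FSINPI opcode. */
    static const double PI = 3.14159265358979323846;
    static double sinpi(double x) { return sin(PI * x); }

    /* Fill a table with n samples of sin over a half-turn.  The
     * argument i * scale is exact when n is a power of two, so no
     * multiplication by PI and no error-accumulating addition appears
     * inside the loop. */
    static void sin_table(double *out, int n)
    {
        double scale = 1.0 / n;            /* exact for power-of-two n */
        for (int i = 0; i < n; i++)
            out[i] = sinpi(i * scale);     /* = sin(pi * i / n) */
    }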

In CORDIC, for example, the multiplication by PI may be moved outside of
the hardware algorithm as a loop invariant, with no power or area penalty.

Again, therefore, if SINPI (etc.) were excluded, programmers would be
penalised by being forced to divide by PI in some circumstances. Likewise,
if SIN were excluded, programmers would be penalised by being forced
to *multiply* by PI in some circumstances.

Thus again, a slightly different application of the same general argument
applies to give Ztrignpi and Ztrigpi as subsets. 3D GPUs will almost
certainly provide both.

### Zarctrigpi and Zarctrignpi

* **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
* **Zarctrignpi**: ATAN2 ACOS ASIN

These are extra trigonometric functions that are useful in some
applications, but even for 3D GPUs, particularly embedded and mobile class
GPUs, they are not so common and so are typically synthesised there.

Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
once again, both a performance penalty as well as an accuracy penalty
towards the limits, which for IEEE754 compliance is unacceptable, yet
is acceptable for 3D.

Therefore they are their own subset extensions.

# Synthesis, Pseudo-code ops and macro-ops

The pseudo-ops are best left up to the compiler rather than being actual
pseudo-ops: the compiler allocates one scalar FP register for use as a
constant (loop invariant), set to "1.0" at the beginning of a function or
other suitable code block.

* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).

FATANPI example pseudo-code (RISC-V-style assembly, shown for illustration):

    lui t0, 0x3F800          // upper bits of f32 1.0
    fmv.w.x ft0, t0          // move the 1.0 bit-pattern into FP reg ft0
    fatan2pi.s rd, rs1, ft0

Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correctly-rounding):

    ASINH( x ) = ln( x + SQRT(x**2+1))
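
In C terms, a minimal sketch of that synthesis, together with the
matching standard identities for ACOSH and ATANH, built purely from log
and sqrt (i.e. from Zftrans/ZftransExt-class operations):

    #include <math.h>

    /* Software synthesis of the inverse-hyperbolic functions from log
     * and sqrt.  Adequate for 3D-class accuracy; at the range limits a
     * dedicated Zfhyp implementation (or more careful algorithms) is
     * needed for correct rounding. */
    static double asinh_synth(double x) { return log(x + sqrt(x * x + 1.0)); }
    static double acosh_synth(double x) { return log(x + sqrt(x * x - 1.0)); }
    static double atanh_synth(double x) { return 0.5 * log((1.0 + x) / (1.0 - x)); }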

# Evaluation and commentary

This section will move later to discussion.

## Reciprocal

FRECIP used to be an alias. Some implementors may wish to implement
divide as y times recip(x).

Some may have shared hardware for recip and divide; others may not.

To avoid penalising one implementor over another, recip stays.
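
A C sketch of the alternative mentioned above: divide implemented as a
reciprocal plus a multiply, with an optional Newton-Raphson step for the
case where the reciprocal is only an estimate (as with fre/fres). The
recip() helper stands in for FRECIP.

    /* Divide implemented as y * recip(x). */
    static float recip(float x) { return 1.0f / x; }

    static float div_via_recip(float y, float x)
    {
        float r = recip(x);
        /* Optional Newton-Raphson refinement, r' = r * (2 - x * r),
         * needed only when recip() is an estimate rather than a
         * correctly-rounded reciprocal. */
        r = r * (2.0f - x * r);
        return y * r;
    }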

## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?

RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>

> > correctly-rounded LOG will return different results than LOGP1 and ADD.
> > Likewise for EXP and EXPM1

> ok, they stay in as real opcodes, then.

## ATAN / ATAN2 commentary

Discussion starts here:
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>

from Mitch Alsup:

would like to point out that the general implementations of ATAN2 do a
bunch of special case checks and then simply call ATAN.

    double ATAN2( double y, double x )
    { // IEEE 754-2008 quality ATAN2

      // deal with NANs
      if( ISNAN( x ) ) return x;
      if( ISNAN( y ) ) return y;

      // deal with infinities
      if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
      if( x == +∞ ) return copysign( 0.0, y );
      if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
      if( x == -∞ ) return copysign( π, y );
      if( |y|== +∞ ) return copysign( π/2, y );

      // deal with signed zeros
      if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
      if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
      if( x <=-0.0 && y == 0.0 ) return copysign( π, y );

      // calculate ATAN2 textbook style
      if( x > 0.0 ) return ATAN( |y / x| );
      if( x < 0.0 ) return π - ATAN( |y / x| );
    }

Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
a constant and then call/use ATAN2.

When one considers an implementation of ATAN, one must consider several
ranges of evaluation::

    x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
    x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
    x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );

I should point out that the add/sub of π/2 can not lose significance
since the result of ATAN(1/x) is bounded 0..π/2

The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.

--------------------------------------------------------

We therefore, I think, have a case for bringing back ATAN and including ATAN2.

The reason is that whilst a microcode-like GPU-centric platform would
do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
way round.

(that is the hypothesis, to be evaluated for correctness. feedback requested).

This is because we cannot compromise or prioritise one platform's
speed/accuracy over another: it is not reasonable or desirable to
penalise one implementor over another.

Thus, to keep interoperability, all implementors must have both
opcodes and may choose, at the architectural and routing level, which
one to implement in terms of the other.
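
A minimal C sketch of the two directions: the GPU-microcode-centric
direction is a one-liner, while the UNIX-libm-centric direction needs
the special-case and quadrant handling shown in the pseudo-code above.

    #include <math.h>

    /* GPU-microcode-centric direction: ATAN2 is the hardware primitive
     * and ATAN is synthesised from it with a constant 1.0 operand
     * (exactly as in the FATANPI example earlier). */
    static double atan_via_atan2(double x)
    {
        return atan2(x, 1.0);
    }

    /* The UNIX-libm-centric direction (ATAN as the primitive, ATAN2
     * synthesised from it) requires the NaN / infinity / signed-zero
     * special cases and the quadrant adjustments shown above. */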

Allowing implementors to choose to add either opcode and let traps sort it
out leaves an uncertainty in the software developer's mind: they cannot
trust the hardware, available from many vendors, to be performant right
across the board.

Standards are a pig.

---

I might suggest that if there were a way for a calculation to be performed
and the result of that calculation chained to a subsequent calculation
such that the precision of the result-becomes-operand is wider than
what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining acceptable accuracy:

    z = x / y

can be calculated as::

    z = x * (1/y)

where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
cannot be (at reasonable area). It is also not "that inaccurate",
displaying 0.625-to-0.52 ULP.

Given that one has the ability to carry (and process) more fraction bits,
one can then do high precision multiplies of π or other transcendental
radixes.

And GPUs have been doing this almost since the dawn of 3D.

    // calculate ATAN2 high performance style
    // Note: at this point x != y
    //
    if( x > 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
    }
    if( x < 0.0 )
    {
        if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
        if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
        if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
    }

This way the adds and subtracts from the constant are not in a
precision-precarious position.