--- /dev/null
+# Transcendental operations
+
+To be updated to OpenPOWER.
+
+Summary:
+
+*This proposal extends OpenPOWER scalar floating point operations to
+add IEEE754 transcendental functions (pow, log etc) and trigonometric
+functions (sin, cos etc). These functions are also 98% shared with the
+Khronos Group OpenCL Extended Instruction Set.*
+
+With thanks to:
+
+* Jacob Lifshay
+* Dan Petroski
+* Mitch Alsup
+* Allen Baum
+* Andrew Waterman
+* Luis Vitorio Cargnini
+
+[[!toc levels=2]]
+
+See:
+
+* <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
+* <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
+* Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
+* [[rv_major_opcode_1010011]] for opcode listing.
+* [[zfpacc_proposal]] for accuracy settings proposal
+
+Extension subsets:
+
+* **Zftrans**: standard transcendentals (best suited to 3D)
+* **ZftransExt**: extra functions (useful, not generally needed for 3D,
+  can be synthesised using Zftrans)
+* **Ztrigpi**: trig. xxx-pi sinpi cospi tanpi
+* **Ztrignpi**: trig non-xxx-pi sin cos tan
+* **Zarctrigpi**: arc-trig. a-xxx-pi: atan2pi asinpi acospi
+* **Zarctrignpi**: arc-trig. non-a-xxx-pi: atan2, asin, acos
+* **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
+ acosh, atanh (can be synthesised - see below)
+* **ZftransAdv**: much more complex to implement in hardware
+* **Zfrsqrt**: Reciprocal square-root.
+
+Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
+Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.
+
+Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi, Zftrans, with Ztrigpi as an augmentation.
+
+# TODO:
+
+* Decision on accuracy, moved to [[zfpacc_proposal]]
+<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
+* Errors **MUST** be repeatable.
+* How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
+<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
+ Accuracy requirements for dual (triple) purpose implementations must
+ meet the higher standard.
+* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
+ it is desirable on its own by other implementors. This to be evaluated.
+
+# Requirements <a name="requirements"></a>
+
+This proposal is designed to meet a wide range of extremely diverse needs,
+allowing implementors in all of these areas to benefit from the tools and
+hardware cost reductions associated with common standards adoption in
+RISC-V (primarily IEEE754 and Vulkan).
+
+**There are *four* different, disparate platforms' needs (two new)**:
+
+* 3D Embedded Platform (new)
+* Embedded Platform
+* 3D UNIX Platform (new)
+* UNIX Platform
+
+**The use-cases are**:
+
+* 3D GPUs
+* Numerical Computation
+* (Potentially) A.I. / Machine-learning (1)
+
+(1) Approximations suffice in this field, making it more likely that a
+custom extension would be used; high-end ML would definitely be excluded.
+
+**The power and die-area requirements vary from**:
+
+* Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
+* Mobile-Embedded (good performance with high efficiency for battery life)
+* Desktop Computing
+* Server / HPC (2)
+
+(2) Supercomputing is left out of the requirements as it is traditionally
+covered by Supercomputer Vectorisation Standards (such as RVV).
+
+**The software requirements are**:
+
+* Full public integration into GNU math libraries (libm)
+* Full public integration into well-known Numerical Computation systems (numpy)
+* Full public integration into upstream GNU and LLVM Compiler toolchains
+* Full public integration into Khronos OpenCL SPIR-V compatible Compilers
+ seeking public Certification and Endorsement from the Khronos Group
+ under their Trademarked Certification Programme.
+
+**The "contra"-requirements are**:
+
+* Ultra Low Power Embedded platforms (smart watches) are sufficiently
+  resource constrained that Vectorisation (of any kind) is likely to be
+  unnecessary and inappropriate.
+* The requirements are **not** for the purposes of developing a full custom
+ proprietary GPU with proprietary firmware driven by *hardware* centric
+ optimised design decisions as a priority over collaboration.
+* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
+  this proposal; however, because they typically develop proprietary
+  software that is not shared with the rest of the community likely to
+  use this proposal, their needs are completely different.
+* This proposal is for *sharing* of effort in reducing development costs.
+
+# Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>
+
+This list shows the (direct) equivalence between proposed opcodes,
+their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
+98% of the opcodes in this proposal that are in the IEEE754-2019 standard
+are present in the Khronos Extended Instruction Set.
+
+For RISC-V opcode encodings see [[rv_major_opcode_1010011]]
+(**TODO**: replace with OpenPOWER).
+
+See
+<https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
+and <https://ieeexplore.ieee.org/document/8766229>
+
+* Special FP16 opcodes are *not* being proposed, except by indirect / inherent
+ use of the "fmt" field that is already present in the RISC-V Specification.
+* "Native" opcodes are *not* being proposed: implementors will be expected
+ to use the (equivalent) proposed opcode covering the same function.
+* "Fast" opcodes are *not* being proposed, because the Khronos Specification
+ fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
+ vectors (or can be done as scalar operations using other RISC-V instructions).
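+
+For example (a sketch only; standard C libm hypotf is used here as a
+stand-in for an FHYPOT-style scalar operation), a three-component
+OpenCL length reduces to scalar operations:
+
+    #include <math.h>
+
+    /* length of a 3-component vector: sqrt(x*x + y*y + z*z),
+       composed from two scalar hypot operations (avoids overflow) */
+    float length3(float x, float y, float z) {
+        return hypotf(hypotf(x, y), z);
+    }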
+
+The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
+Deviation from conformance with the Khronos Specification - including the
+Khronos Specification accuracy requirements - is not an option, as it
+results in non-compliance, and the vendor may not use the Trademarked words
+"Vulkan" etc. in conjunction with their product.
+
+IEEE754-2019 Table 9.1 lists "additional mathematical operations".
+Interestingly, the only functions from that table missing from OpenCL are
+compound, exp2m1, exp10m1, log2p1 and log10p1 (the final rows of the
+table below); pown (integer power) and powr are present in both.
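+
+For reference, a sketch of what those remaining IEEE754-2019 operations
+compute, expressed as illustrative C (they are not part of this
+proposal):
+
+    #include <math.h>
+
+    double compound(double x, int n) { return pow(1.0 + x, (double)n); }
+    double exp2m1  (double x)        { return exp2(x) - 1.0;           }
+    double exp10m1 (double x)        { return pow(10.0, x) - 1.0;      }
+    double log2p1  (double x)        { return log2(1.0 + x);           }
+    double log10p1 (double x)        { return log10(1.0 + x);          }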
+
+[[!table data="""
+opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
+FSIN | sin | half\_sin | native\_sin | NONE | sin |
+FCOS | cos | half\_cos | native\_cos | NONE | cos |
+FTAN | tan | half\_tan | native\_tan | NONE | tan |
+NONE (1) | sincos | NONE | NONE | NONE | NONE |
+FASIN | asin | NONE | NONE | NONE | asin |
+FACOS | acos | NONE | NONE | NONE | acos |
+FATAN | atan | NONE | NONE | NONE | atan |
+FSINPI | sinpi | NONE | NONE | NONE | sinPi |
+FCOSPI | cospi | NONE | NONE | NONE | cosPi |
+FTANPI | tanpi | NONE | NONE | NONE | tanPi |
+FASINPI | asinpi | NONE | NONE | NONE | asinPi |
+FACOSPI | acospi | NONE | NONE | NONE | acosPi |
+FATANPI | atanpi | NONE | NONE | NONE | atanPi |
+FSINH | sinh | NONE | NONE | NONE | sinh |
+FCOSH | cosh | NONE | NONE | NONE | cosh |
+FTANH | tanh | NONE | NONE | NONE | tanh |
+FASINH | asinh | NONE | NONE | NONE | asinh |
+FACOSH | acosh | NONE | NONE | NONE | acosh |
+FATANH | atanh | NONE | NONE | NONE | atanh |
+FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
+FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi |
+FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
+FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
+FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
+FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
+FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
+FLOG1P | log1p | NONE | NONE | NONE | logp1 |
+FEXP | exp | half\_exp | native\_exp | NONE | exp |
+FLOG | log | half\_log | native\_log | NONE | log |
+FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
+FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
+FPOW | pow | NONE | NONE | NONE | pow |
+FPOWN | pown | NONE | NONE | NONE | pown |
+FPOWR | powr | half\_powr | native\_powr | NONE | powr |
+FROOTN | rootn | NONE | NONE | NONE | rootn |
+FHYPOT | hypot | NONE | NONE | NONE | hypot |
+FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
+NONE | NONE | NONE | NONE | NONE | compound |
+NONE | NONE | NONE | NONE | NONE | exp2m1 |
+NONE | NONE | NONE | NONE | NONE | exp10m1 |
+NONE | NONE | NONE | NONE | NONE | log2p1 |
+NONE | NONE | NONE | NONE | NONE | log10p1 |
+"""]]
+
+Note (1) FSINCOS is macro-op fused (see below).
+
+Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)" (cube root)
+
+Note (3) synthesised in IEEE754-2019 using "1.0 / x"
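+
+For illustration, a minimal C-level sketch of how the three notes map
+onto the proposal (function names are hypothetical; standard libm is
+used as a stand-in for the actual opcodes):
+
+    #include <math.h>
+
+    /* Note (1): sin and cos of the same operand issued back-to-back,
+       allowing hardware to fuse them into a single FSINCOS macro-op */
+    void f_sincos(double x, double *s, double *c) { *s = sin(x); *c = cos(x); }
+
+    /* Note (2): cube root, i.e. the IEEE754-2019 rootn(x, 3) */
+    double f_cbrt(double x)  { return cbrt(x); }
+
+    /* Note (3): reciprocal synthesised as 1.0 / x */
+    double f_recip(double x) { return 1.0 / x; }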
+
+## List of 2-arg opcodes
+
+[[!table data="""
+opcode | Description | pseudocode | Extension |
+FATAN2 | atan2 arc tangent | rd = atan2(rs1, rs2) | Zarctrignpi |
+FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs1, rs2) / pi | Zarctrigpi |
+FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
+FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
+FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
+FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
+FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
+"""]]
+
+## List of 1-arg transcendental opcodes
+
+[[!table data="""
+opcode | Description | pseudocode | Extension |
+FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
+FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
+FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
+FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
+FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
+FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
+FLOG1P | log of 1 plus x | rd = log(e, 1 + rs1) | ZftransExt |
+FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
+FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
+FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
+FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
+"""]]
+
+## List of 1-arg trigonometric opcodes
+
+[[!table data="""
+opcode | Description | pseudo-code | Extension |
+FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
+FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
+FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
+FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
+FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
+FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
+FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
+FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
+FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
+FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
+FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
+FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
+FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
+FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
+FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
+FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
+FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
+FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
+"""]]
+
+# Subsets
+
+The full set is based on the Khronos OpenCL opcodes. If implemented
+entirely it would be too much for both Embedded and 3D.
+
+The subsets are organised by hardware complexity and by need (3D, HPC);
+however, because synthesis produces inaccurate results at the range
+limits, the less common subsets are still required for IEEE754 HPC.
+
+MALI Midgard, an embedded / mobile 3D GPU, for example has only the
+following opcodes:
+
+ E8 - fatan_pt2
+ F0 - frcp (reciprocal)
+ F2 - frsqrt (inverse square root, 1/sqrt(x))
+ F3 - fsqrt (square root)
+ F4 - fexp2 (2^x)
+ F5 - flog2
+ F6 - fsin1pi
+ F7 - fcos1pi
+ F9 - fatan_pt1
+
+These are in FP32 and FP16 only: no FP64 hardware, at all.
+
+Vivante Embedded/Mobile 3D (etnaviv
+<https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>)
+only has the following:
+
+ sin, cos2pi
+ cos, sin2pi
+ log2, exp
+ sqrt and rsqrt
+ recip.
+
+It also has fast variants of some of these, as a CSR Mode.
+
+AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
+RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:
+
+ COS2PI (appx)
+ EXP2
+ LOG (IEEE754)
+ RECIP
+ RSQRT
+ SQRT
+ SIN2PI (appx)
+
+AMD RDNA has F16 and F32 variants of all the above, and also has F64
+variants of SQRT, RSQRT and RECIP. It is interesting that even the
+modern high-end AMD GPU does not have TAN or ATAN, whereas MALI Midgard
+does.
+
+Also a general point: customised, optimised hardware targeting FP32 3D
+with reduced accuracy simply cannot be used either for IEEE754 or for
+FP64 (except as a starting point for hardware- or software-driven
+Newton-Raphson or other iterative methods).
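+
+For reference, a minimal sketch of such iterative refinement
+(Newton-Raphson steps applied to a low-accuracy reciprocal /
+reciprocal-square-root estimate; each step roughly doubles the number
+of correct bits):
+
+    /* r is a rough estimate of 1/x, e.g. from a 3D-class RECIP */
+    float recip_refine(float x, float r) {
+        r = r * (2.0f - x * r);
+        r = r * (2.0f - x * r);
+        return r;
+    }
+
+    /* r is a rough estimate of 1/sqrt(x), e.g. from a 3D-class RSQRT */
+    float rsqrt_refine(float x, float r) {
+        r = r * (1.5f - 0.5f * x * r * r);
+        r = r * (1.5f - 0.5f * x * r * r);
+        return r;
+    }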
+
+Also, in cost/area-sensitive applications, even the extra ROM lookup
+tables required by certain algorithms may be too costly.
+
+These wildly differing and incompatible driving factors lead to the
+subset subdivisions, below.
+
+## Transcendental Subsets
+
+### Zftrans
+
+LOG2 EXP2 RECIP RSQRT
+
+Zftrans contains the minimum standard transcendentals best suited to
+3D. They are also the minimum subset for synthesising log10, exp10,
+expm1, log1p, the hyperbolic functions (sinh and so on).
+
+They are therefore considered "base" (essential) transcendentals.
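+
+For illustration, a minimal C sketch of that synthesis (fexp2() and
+flog2() are hypothetical stand-ins for the FEXP2 / FLOG2 opcodes, here
+implemented with standard libm so the sketch is self-contained; the
+constants are exact mathematical identities):
+
+    #include <math.h>
+
+    /* hypothetical stand-ins for the FEXP2 / FLOG2 opcodes */
+    static double fexp2(double x) { return exp2(x); }
+    static double flog2(double x) { return log2(x); }
+
+    static const double LOG2_E  = 1.4426950408889634;  /* log2(e)  */
+    static const double LOG2_10 = 3.3219280948873623;  /* log2(10) */
+
+    double f_exp(double x)   { return fexp2(x * LOG2_E);  } /* e^x  = 2^(x*log2(e))   */
+    double f_log(double x)   { return flog2(x) / LOG2_E;  } /* ln(x) = log2(x)/log2(e) */
+    double f_exp10(double x) { return fexp2(x * LOG2_10); } /* 10^x = 2^(x*log2(10))  */
+    double f_log10(double x) { return flog2(x) / LOG2_10; } /* log10(x) = log2(x)/log2(10) */
+
+As the next subsections explain, this sort of synthesis carries both a
+performance and an accuracy penalty towards the range limits.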
+
+### ZftransExt
+
+LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
+
+These are extra transcendental functions that are not generally needed
+for 3D, however for Numerical Computation they are useful.
+
+Although they can be synthesised using Zftrans (LOG2 multiplied
+by a constant), there is both a performance penalty as well as an
+accuracy penalty towards the limits, which for IEEE754 compliance is
+unacceptable. In particular, a dedicated LOG1P (LOG(1+rs1) computed
+directly in hardware) may give much better accuracy at the lower end
+(very small rs1) than adding 1.0 and then taking LOG.
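+
+A minimal demonstration of that accuracy loss (standard C libm, purely
+for illustration): for very small rs1, the addition 1.0 + rs1 rounds
+away the entire value before the logarithm is taken.
+
+    #include <stdio.h>
+    #include <math.h>
+
+    int main(void) {
+        double x = 1e-18;
+        /* synthesised: the add loses x entirely, so the result is 0.0 */
+        printf("log(1 + x) = %.17g\n", log(1.0 + x));
+        /* dedicated operation: correct to full precision, ~1e-18 */
+        printf("log1p(x)   = %.17g\n", log1p(x));
+        return 0;
+    }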
+
+Their forced inclusion would be inappropriate as it would penalise
+embedded systems with tight power and area budgets. However if they
+were completely excluded the HPC applications would be penalised on
+performance and accuracy.
+
+Therefore they are their own subset extension.
+
+### Zfhyp
+
+SINH, COSH, TANH, ASINH, ACOSH, ATANH
+
+These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D
+is limited.
+
+They can all be synthesised using LOG, SQRT and so on, so depend
+on Zftrans. However, once again, at the limits of the range, IEEE754
+compliance becomes impossible, and thus a hardware implementation may
+be required.
+
+HPC and high-end GPUs are likely markets for these.
+
+### ZftransAdv
+
+CBRT, POW, POWN, POWR, ROOTN
+
+These are simply much more complex to implement in hardware, and typically
+will only be put into HPC applications.
+
+### Zfrsqrt
+
+RSQRT
+
+Reciprocal square-root, kept in its own separate subset as it is
+desirable on its own by other implementors (see TODO, above).
+
+## Trigonometric subsets
+
+### Ztrigpi vs Ztrignpi
+
+* **Ztrigpi**: SINPI COSPI TANPI
+* **Ztrignpi**: SIN COS TAN
+
+Ztrignpi are the basic trigonometric functions through which all others
+could be synthesised, and they are typically the base trigonometrics
+provided by GPUs for 3D, warranting their own subset.
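+
+As an example of such synthesis (a sketch only, using standard C libm
+as a stand-in for the proposed opcodes; as discussed below, doing this
+costs performance and accuracy towards the range limits):
+
+    #include <math.h>
+
+    static const double PI = 3.141592653589793;
+
+    /* Ztrigpi synthesised from Ztrignpi */
+    double f_sinpi(double x) { return sin(PI * x); }
+    /* Zarctrignpi ASIN/ACOS synthesised from ATAN2 and SQRT */
+    double f_asin(double x)  { return atan2(x, sqrt(1.0 - x * x)); }
+    double f_acos(double x)  { return atan2(sqrt(1.0 - x * x), x); }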
+
+In the case of the Ztrigpi subset, these are commonly used in for loops
+with a power of two number of subdivisions, and the cost of multiplying
+by PI inside each loop (or cumulative addition, resulting in cumulative
+errors) is not acceptable.
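+
+For example (a sketch only; fsinpi() is a hypothetical helper standing
+in for the FSINPI opcode, here implemented with libm so the sketch is
+self-contained), a table over a power-of-two number of subdivisions
+needs no multiplication by PI inside the loop:
+
+    #include <math.h>
+
+    /* hypothetical stand-in for the FSINPI opcode: sin(pi * x) */
+    static float fsinpi(float x) { return sinf(3.14159265358979f * x); }
+
+    void sine_table(float *out, int log2_n) {
+        int n = 1 << log2_n;
+        float step = 2.0f / (float)n;   /* exact: reciprocal of a power of two */
+        for (int i = 0; i < n; i++)
+            out[i] = fsinpi((float)i * step);  /* sin(2*pi*i/n), full circle */
+    }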
+
+In for example CORDIC the multiplication by PI may be moved outside of
+the hardware algorithm as a loop invariant, with no power or area penalty.
+
+Again, therefore, if SINPI (etc.) were excluded, programmers would be
+penalised by being forced to *multiply* by PI in some circumstances.
+Likewise if SIN were excluded, programmers would be penalised by being
+forced to *divide* by PI in some circumstances.
+
+Thus again, a slightly different application of the same general argument
+applies to give Ztrignpi and Ztrigpi as subsets. 3D GPUs will almost
+certainly provide both.
+
+### Zarctrigpi and Zarctrignpi
+
+* **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
+* **Zarctrignpi**: ATAN2 ACOS ASIN
+
+These are extra trigonometric functions that are useful in some
+applications, but even for 3D GPUs, particularly embedded and mobile class
+GPUs, they are less common and so are typically synthesised there.
+
+Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
+once again, both a performance penalty as well as an accuracy penalty
+towards the limits, which for IEEE754 compliance is unacceptable, yet
+is acceptable for 3D.
+
+Therefore they are their own subset extensions.
+
+# Synthesis, Pseudo-code ops and macro-ops
+
+The pseudo-ops are best left to the compiler rather than being actual
+opcodes: the compiler allocates one scalar FP register for use as a
+constant (loop invariant), set to "1.0" at the beginning of a function
+or other suitable code block.
+
+* FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
+* FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
+
+FATANPI example pseudo-code:
+
+ lui t0, 0x3F800 // upper bits of f32 1.0
+    fmv.w.x ft0, t0     // move the 1.0f bit pattern into ft0
+ fatan2pi.s rd, rs1, ft0
+
+Hyperbolic function example (obviates the need for Zfhyp except for
+high-performance or correctly-rounded implementations):
+
+ ASINH( x ) = ln( x + SQRT(x**2+1))
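+
+The same approach extends to the rest of the Zfhyp subset. A sketch of
+the full synthesis in terms of EXP, LOG and SQRT (standard C libm used
+purely for illustration; as noted in the Zfhyp section, accuracy towards
+the range limits is the reason hardware opcodes are still proposed):
+
+    #include <math.h>
+
+    double f_sinh(double x)  { return (exp(x) - exp(-x)) * 0.5; }
+    double f_cosh(double x)  { return (exp(x) + exp(-x)) * 0.5; }
+    double f_tanh(double x)  { double e2 = exp(2.0 * x);
+                               return (e2 - 1.0) / (e2 + 1.0); }
+    double f_asinh(double x) { return log(x + sqrt(x * x + 1.0)); }
+    double f_acosh(double x) { return log(x + sqrt(x * x - 1.0)); } /* x >= 1 */
+    double f_atanh(double x) { return 0.5 * log((1.0 + x) / (1.0 - x)); } /* |x| < 1 */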
+
+# Evaluation and commentary
+
+This section will move later to discussion.
+
+## Reciprocal
+
+FRECIP used to be an alias. Some implementors may wish to implement
+divide as y times recip(x).
+
+Others may have shared hardware for recip and divide, others may not.
+
+To avoid penalising one implementor over another, recip stays.
+
+## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
+
+The RISC principle says "exclude LOG because it's covered by LOG1P plus
+an ADD". Research is needed to ensure that implementors are not
+compromised by such a decision:
+<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
+
+> > correctly-rounded LOG will return different results than LOGP1 and ADD.
+> > Likewise for EXP and EXPM1
+
+> ok, they stay in as real opcodes, then.
+
+## ATAN / ATAN2 commentary
+
+Discussion starts here:
+<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
+
+from Mitch Alsup:
+
+I would like to point out that the general implementations of ATAN2 do a
+bunch of special case checks and then simply call ATAN.
+
+ double ATAN2( double y, double x )
+ { // IEEE 754-2008 quality ATAN2
+
+ // deal with NANs
+ if( ISNAN( x ) ) return x;
+ if( ISNAN( y ) ) return y;
+
+ // deal with infinities
+ if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
+ if( x == +∞ ) return copysign( 0.0, y );
+ if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
+ if( x == -∞ ) return copysign( π, y );
+ if( |y|== +∞ ) return copysign( π/2, y );
+
+ // deal with signed zeros
+ if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
+ if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
+ if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
+
+ // calculate ATAN2 textbook style
+ if( x > 0.0 ) return ATAN( |y / x| );
+ if( x < 0.0 ) return π - ATAN( |y / x| );
+ }
+
+
+Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
+a constant and then call/use ATAN2.
+
+When one considers an implementation of ATAN, one must consider several
+ranges of evaluation::
+
+    x [ -∞, -1.0]:: ATAN( x ) = -π/2 - ATAN( 1/x );
+ x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
+ x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
+
+I should point out that the add/sub of π/2 can not lose significance
+since the result of ATAN(1/x) is bounded 0..π/2
+
+The bottom line is that I think you are choosing to make too many of
+these into OpCodes, making the hardware function/calculation unit (and
+sequencer) more complicated than necessary.
+
+--------------------------------------------------------
+
+I think we therefore have a case for bringing back ATAN and including ATAN2.
+
+The reason is that whilst a microcode-like GPU-centric platform would
+do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other
+way round.
+
+(that is the hypothesis, to be evaluated for correctness. feedback requested).
+
+This is because we cannot compromise or prioritise one platform's
+speed/accuracy over another: it is neither reasonable nor desirable to
+penalise one implementor over another.
+
+Thus, to keep interoperability, all implementors must have both
+opcodes, and may choose, at the architectural and routing level, which
+one to implement in terms of the other.
+
+Allowing implementors to choose to add either opcode and let traps sort it
+out leaves an uncertainty in the software developer's mind: they cannot
+trust the hardware, available from many vendors, to be performant right
+across the board.
+
+Standards are a pig.
+
+---
+
+I might suggest that if there were a way for a calculation to be performed
+and the result of that calculation chained to a subsequent calculation,
+such that the precision of the result-becomes-operand is wider than
+what will fit in a register, then you can dramatically reduce the count
+of instructions in this category while retaining acceptable accuracy:
+
+ z = x / y
+
+can be calculated as::
+
+ z = x * (1/y)
+
+Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
+accurate, but GPUs want speed, and 1/y is fully pipelined (F32) while x/y
+cannot be (at reasonable area). It is also not "that inaccurate",
+displaying 0.625-to-0.52 ULP.
+
+Given that one has the ability to carry (and process) more fraction bits,
+one can then do high precision multiplies of π or other transcendental
+radixes.
+
+And GPUs have been doing this almost since the dawn of 3D.
+
+ // calculate ATAN2 high performance style
+ // Note: at this point x != y
+ //
+ if( x > 0.0 )
+ {
+ if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
+ if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
+ if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
+ if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
+ }
+ if( x < 0.0 )
+ {
+ if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
+ if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
+ if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
+ if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
+ }
+
+This way the adds and subtracts from the constant are not in a precision
+precarious position.