1 # Zftrans - transcendental operations
2
3 With thanks to:
4
5 * Jacob Lifshay
6 * Dan Petroski
7 * Mitch Alsup
8 * Allen Baum
9 * Andrew Waterman
10 * Luis Vitorio Cargnini
11
12 [[!toc levels=2]]
13
14 See:
15
16 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
17 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
18 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
19 * [[rv_major_opcode_1010011]] for opcode listing.
20 * [[zfpacc_proposal]] for accuracy settings proposal
21
22 Extension subsets:
23
24 * **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, not generally needed for 3D,
can be synthesised using Zftrans)
* **Ztrigpi**: trigonometric xxx-pi variants: sinpi, cospi, tanpi
* **Ztrignpi**: trigonometric non-xxx-pi variants: sin, cos, tan
* **Zarctrigpi**: arc-trigonometric a-xxx-pi variants: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trigonometric non-a-xxx-pi variants: atan2, asin, acos
31 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
32 acosh, atanh (can be synthesised - see below)
33 * **ZftransAdv**: much more complex to implement in hardware
34 * **Zfrsqrt**: Reciprocal square-root.
35
36 Minimum recommended requirements for 3D: Zftrans, Ztrigpi, Ztrignpi, Zarctrigpi,
37 Zarctrignpi
38
39 Minimum recommended requirements for Mobile-Embedded 3D: Ztrigpi, Zftrans, Ztrignpi
40
41 # TODO:
42
43 * Decision on accuracy, moved to [[zfpacc_proposal]]
44 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
45 * Errors **MUST** be repeatable.
46 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
47 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
48 Accuracy requirements for dual (triple) purpose implementations must
49 meet the higher standard.
* Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
it is desirable on its own to other implementors. This is to be evaluated.
52
53 # Requirements <a name="requirements"></a>
54
55 This proposal is designed to meet a wide range of extremely diverse needs,
56 allowing implementors from all of them to benefit from the tools and hardware
57 cost reductions associated with common standards adoption.
58
**There are *four* different, disparate platforms' needs (two new)**:
60
61 * 3D Embedded Platform (new)
62 * Embedded Platform
63 * 3D UNIX Platform (new)
64 * UNIX Platform
65
66 **The use-cases are**:
67
68 * 3D GPUs
69 * Numerical Computation
70 * (Potentially) A.I. / Machine-learning (1)
71
(1) Approximations suffice in this field, making it more likely
to use a custom extension instead; high-end ML would inherently
be excluded.
75
76 **The power and die-area requirements vary from**:
77
78 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
79 * Mobile-Embedded (good performance with high efficiency for battery life)
80 * Desktop Computing
81 * Server / HPC (2)
82
83 (2) Supercomputing is left out of the requirements as it is traditionally
84 covered by Supercomputer Vectorisation Standards (such as RVV).
85
86 **The software requirements are**:
87
88 * Full public integration into GNU math libraries (libm)
89 * Full public integration into well-known Numerical Computation systems (numpy)
90 * Full public integration into upstream GNU and LLVM Compiler toolchains
91 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
92 seeking public Certification and Endorsement from the Khronos Group
93 under their Trademarked Certification Programme.
94
95 **The "contra"-requirements are**:
96
97 * NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
98 Ultra Low Power Embedded platforms (smart watches) are sufficiently resource constrained that Vectorisation
99 (of any kind) is likely to be unnecessary and inappropriate.
100 * The requirements are **not** for the purposes of developing a full custom
101 proprietary GPU with proprietary firmware
102 driven by *hardware* centric optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
this proposal; however, because they typically develop proprietary
software that is not shared with the rest of the community likely to
use this proposal, they have completely different needs.
107 * This proposal is for *sharing* of effort in reducing development costs
108
109 # Requirements Analysis <a name="requirements_analysis"></a>
110
111 **Platforms**:
112
113 3D Embedded will require significantly less accuracy and will need to make
114 power budget and die area compromises that other platforms (including Embedded)
115 will not need to make.
116
3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable, whereas in the UNIX Platform IEEE754
compliance is a hard requirement, one that would compromise power
and efficiency if imposed on a 3D UNIX Platform.
121
Even in the Embedded platform, IEEE754 interoperability is beneficial;
however, if it were a hard requirement, the 3D Embedded platform would be
severely compromised in its ability to meet the demanding power budgets
of that market.
125
126 Thus, learning from the lessons of
127 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
128 this proposal works in conjunction with the [[zfpacc_proposal]], so as
129 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
130
131 **Use-cases**:
132
133 There really is little else in the way of suitable markets. 3D GPUs
134 have extremely competitive power-efficiency and power-budget requirements
135 that are completely at odds with the other market at the other end of
136 the spectrum: Numerical Computation.
137
Interoperability in Numerical Computation is absolutely critical: it
implies (correlates directly with) IEEE754 compliance. However, full
IEEE754 compliance automatically and inherently penalises a GPU on
performance and die area, where that degree of accuracy is simply not
necessary.
141
142 To meet the needs of both markets, the two new platforms have to be created,
143 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
144 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
145 compliance *and* 3D performance in a single ASIC.
146
147 **Power and die-area requirements**:
148
149 This is where the conflicts really start to hit home.
150
151 A "Numerical High performance only" proposal (suitable for Server / HPC
152 only) would customise and target the Extension based on a quantitative
153 analysis of the value of certain opcodes *for HPC only*. It would
154 conclude, reasonably and rationally, that it is worthwhile adding opcodes
155 to RVV as parallel Vector operations, and that further discussion of
156 the matter is pointless.
157
158 A "Proprietary GPU effort" (even one that was intended for publication
159 of its API through, for example, a public libre-licensed Vulkan SPIR-V
160 Compiler) would conclude, reasonably and rationally, that, likewise, the
161 opcodes were best suited to be added to RVV, and, further, that their
162 requirements conflict with the HPC world, due to the reduced accuracy.
163 This on the basis that the silicon die area required for IEEE754 is far
164 greater than that needed for reduced-accuracy, and thus their product would
165 be completely unacceptable in the market if it had to meet IEEE754, unnecessarily.
166
167 An "Embedded 3D" GPU has radically different performance, power
168 and die-area requirements (and may even target SoftCores in FPGA).
169 Sharing of the silicon to cover multi-function uses (CORDIC for example)
170 is absolutely essential in order to keep cost and power down, and high
171 performance simply is not. Multi-cycle FSMs instead of pipelines may
172 be considered acceptable, and so on. Subsets of functionality are
173 also essential.
174
175 An "Embedded Numerical" platform has requirements that are separate and
176 distinct from all of the above!
177
178 Mobile Computing needs (tablets, smartphones) again pull in a different
179 direction: high performance, reasonable accuracy, but efficiency is
180 critical. Screen sizes are not at the 4K range: they are within the
181 800x600 range at the low end (320x240 at the extreme budget end), and
182 only the high-performance smartphones and tablets provide 1080p (1920x1080).
183 With lower resolution, accuracy compromises are possible which the Desktop
184 market (4k and soon to be above) would find unacceptable.
185
186 Meeting these disparate markets may be achieved, again, through
187 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
188 to that, subdividing the extension into subsets that best suit the different
189 market areas.
190
191 **Software requirements**:
192
193 A "custom" extension is developed in near-complete isolation from the
194 rest of the RISC-V Community. Cost savings to the Corporation are
195 large, with no direct beneficial feedback to (or impact on) the rest
196 of the RISC-V ecosystem.
197
198 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
199 OpenCL - users have much more influence than first appears. Compliance
200 with these standards is critical as the userbase (Games writers, scientific
201 applications) expects not to have to rewrite extremely large and costly codebases to conform
202 with *non-standards-compliant* hardware.
203
204 Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX) is paramount, and compliance with
Trademarked Standards is critical. Any deviation from Trademarked Standards
means that an implementation may not be sold whilst claiming to be,
for example, "Vulkan compatible".
208
209 This in turn reinforces and makes a hard requirement a need for public
210 compliance with such standards, over-and-above what would otherwise be
211 set by a RISC-V Standards Development Process, including both the
212 software compliance and the knock-on implications that has for hardware.
213
214 **Collaboration**:
215
216 The case for collaboration on any Extension is already well-known.
217 In this particular case, the precedent for inclusion of Transcendentals
218 in other ISAs, both from Graphics and High-performance Computing, has
219 these primitives well-established in high-profile software libraries and
220 compilers in both GPU and HPC Computer Science divisions. Collaboration
221 and shared public compliance with those standards brooks no argument.
222
The combined requirements of collaboration and of multiple accuracy
levels mean that *overall this proposal is categorically and wholly
unsuited to relegation to "custom" status*.
226
227 # Quantitative Analysis <a name="analysis"></a>
228
229 This is extremely challenging. Normally, an Extension would require full,
230 comprehensive and detailed analysis of every single instruction, for every
231 single possible use-case, in every single market. The amount of silicon
232 area required would be balanced against the benefits of introducing extra
233 opcodes, as well as a full market analysis performed to see which divisions
234 of Computer Science benefit from the introduction of the instruction,
235 in each and every case.
236
With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, the over 136 separate and
distinct analyses required are not a practical proposition.
240
241 A little more intelligence has to be applied to the problem space,
242 to reduce it down to manageable levels.
243
244 Fortunately, the subdivision by Platform, in combination with the
245 identification of only two primary markets (Numerical Computation and
246 3D), means that the logical reasoning applies *uniformly* and broadly
247 across *groups* of instructions rather than individually, making it a primarily
248 hardware-centric and accuracy-centric decision-making process.
249
250 In addition, hardware algorithms such as CORDIC can cover such a wide
251 range of operations (simply by changing the input parameters) that the
252 normal argument of compromising and excluding certain opcodes because they
253 would significantly increase the silicon area is knocked down.
254
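To illustrate CORDIC's breadth, here is a minimal software model of circular-rotation CORDIC (a sketch for exposition only: real hardware uses fixed-point shift-and-add with a small angle ROM, not floating point). One shift-add loop computes sin and cos simultaneously; changing the mode and inputs yields atan, hyperbolics and others.

```python
import math

def cordic_sincos(theta, iterations=32):
    """Circular-rotation CORDIC: returns (cos(theta), sin(theta)).

    Convergence holds for |theta| up to the sum of atan(2**-i),
    roughly 1.74 radians. Floats stand in for fixed-point here.
    """
    # Angle "ROM": atan(2**-i), plus the constant scale factor K.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = k, 0.0, theta          # start pre-scaled by K
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0
        # Each "rotation" is just shifts and adds in hardware.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x, y

c, s = cordic_sincos(0.5)
```

Note how the datapath is identical for every supported function: only the angle ROM contents and the mode change, which is exactly why the area argument against extra opcodes weakens.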
255 However, CORDIC, whilst space-efficient, and thus well-suited to
256 Embedded, is an old iterative algorithm not well-suited to High-Performance
257 Computing or Mid to High-end GPUs, where commercially-competitive
258 FP32 pipeline lengths are only around 5 stages.
259
260 Not only that, but some operations such as LOG1P, which would normally
261 be excluded from one market (due to there being an alternative macro-op
262 fused sequence replacing it) are required for other markets due to
the higher accuracy obtainable at the lower range of input values
compared with synthesising it as LOG(1.0 + P).
265
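The LOG1P point can be demonstrated directly (a Python sketch using the C library's log1p via the math module):

```python
import math

x = 1e-16          # well below double-precision epsilon / 2

# Synthesised form: 1.0 + x rounds to exactly 1.0, so all the
# information in x is destroyed before the log is even taken.
synthesised = math.log(1.0 + x)

# A dedicated log1p keeps the low-order bits and returns ~x.
dedicated = math.log1p(x)
```

The synthesised result is exactly 0.0, a total loss of relative accuracy, while the dedicated operation is correct to full precision.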
266 (Thus we start to see why "proprietary" markets are excluded from this
267 proposal, because "proprietary" markets would make *hardware*-driven
268 optimisation decisions that would be completely inappropriate for a
269 common standard).
270
271 ATAN and ATAN2 is another example area in which one market's needs
272 conflict directly with another: the only viable solution, without compromising
273 one market to the detriment of the other, is to provide both opcodes
274 and let implementors make the call as to which (or both) to optimise,
275 at the *hardware* level.
276
277 Likewise it is well-known that loops involving "0 to 2 times pi", often
278 done in subdivisions of powers of two, are costly to do because they
279 involve floating-point multiplication by PI in each and every loop.
280 3D GPUs solved this by providing SINPI variants which range from 0 to 1
281 and perform the multiply *inside* the hardware itself. In the case of
CORDIC, it turns out that the multiply by PI is not even needed (it
becomes a loop-invariant constant).
284
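A software analogue of that point (sinpi below is a hypothetical stand-in modelled with math.sin, purely to show where the multiply by PI moves):

```python
import math

def sinpi(x):
    # Behavioural stand-in for a hardware FSINPI: sin(pi * x),
    # with the pi multiply absorbed into the operation itself.
    return math.sin(math.pi * x)

N = 8
# Without SINPI: a floating-point multiply by pi in every iteration.
naive = [math.sin(2.0 * math.pi * i / N) for i in range(N)]

# With SINPI: the loop works in "turns" (0..2) and never touches pi.
turns = [sinpi(2.0 * i / N) for i in range(N)]
```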
285 However, some markets may not wish to *use* CORDIC, for reasons mentioned
286 above, and, again, one market would be penalised if SINPI was prioritised
287 over SIN, or vice-versa.
288
289 In essence, then, even when only the two primary markets (3D and Numerical Computation) have been identified, this still leaves two (three) diametrically-opposed *accuracy* sub-markets as the prime conflict drivers:
290
291 * Embedded Ultra Low Power
292 * IEEE754 compliance
293 * Khronos Vulkan compliance
294
295 Thus the best that can be done is to use Quantitative Analysis to work
296 out which "subsets" - sub-Extensions - to include, provide an additional "accuracy" extension, be as "inclusive"
297 as possible, and thus allow implementors to decide what to add to their
298 implementation, and how best to optimise them.
299
300 This approach *only* works due to the uniformity of the function space,
301 and is **not** an appropriate methodology for use in other Extensions
302 with huge (non-uniform) market diversity even with similarly large numbers of potential opcodes.
303 BitManip is the perfect counter-example.
304
305 # Proposed Opcodes vs Khronos OpenCL Opcodes <a name="khronos_equiv"></a>
306
307 This list shows the (direct) equivalence between proposed opcodes and
308 their Khronos OpenCL equivalents.
309 For RISCV opcode encodings see
310 [[rv_major_opcode_1010011]]
311
312 See
313 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
314
315 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
316 use of the "fmt" field that is already present in the RISC-V Specification.
317 * "Native" opcodes are *not* being proposed: implementors will be expected
318 to use the (equivalent) proposed opcode covering the same function.
319 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
320 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
321 vectors (or can be done as scalar operations using other RISC-V instructions).
322
323 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
324 Deviation from conformance with the Khronos Specification - including the
325 Khronos Specification accuracy requirements - is not an option, as it
326 results in non-compliance, and the vendor may not use the Trademarked words
327 "Vulkan" etc. in conjunction with their product.
328
329 [[!table data="""
330 Proposed opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast |
331 FSIN | sin | half\_sin | native\_sin | NONE |
332 FCOS | cos | half\_cos | native\_cos | NONE |
333 FTAN | tan | half\_tan | native\_tan | NONE |
334 NONE (1) | sincos | NONE | NONE | NONE |
335 FASIN | asin | NONE | NONE | NONE |
336 FACOS | acos | NONE | NONE | NONE |
337 FATAN | atan | NONE | NONE | NONE |
338 FSINPI | sinpi | NONE | NONE | NONE |
339 FCOSPI | cospi | NONE | NONE | NONE |
340 FTANPI | tanpi | NONE | NONE | NONE |
341 FASINPI | asinpi | NONE | NONE | NONE |
342 FACOSPI | acospi | NONE | NONE | NONE |
343 FATANPI | atanpi | NONE | NONE | NONE |
344 FSINH | sinh | NONE | NONE | NONE |
345 FCOSH | cosh | NONE | NONE | NONE |
346 FTANH | tanh | NONE | NONE | NONE |
347 FASINH | asinh | NONE | NONE | NONE |
348 FACOSH | acosh | NONE | NONE | NONE |
349 FATANH | atanh | NONE | NONE | NONE |
350 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE |
351 FCBRT | cbrt | NONE | NONE | NONE |
352 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE |
353 FLOG2 | log2 | half\_log2 | native\_log2 | NONE |
354 FEXPM1 | expm1 | NONE | NONE | NONE |
355 FLOG1P | log1p | NONE | NONE | NONE |
356 FEXP | exp | half\_exp | native\_exp | NONE |
357 FLOG | log | half\_log | native\_log | NONE |
358 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE |
359 FLOG10 | log10 | half\_log10 | native\_log10 | NONE |
360 FATAN2 | atan2 | NONE | NONE | NONE |
361 FATAN2PI | atan2pi | NONE | NONE | NONE |
362 FPOW | pow | NONE | NONE | NONE |
363 FROOT | rootn | NONE | NONE | NONE |
364 FHYPOT | hypot | NONE | NONE | NONE |
365 FRECIP | NONE | half\_recip | native\_recip | NONE |
366 """]]
367
368 Note (1) FSINCOS is macro-op fused (see below).
369
370 ## List of 2-arg opcodes
371
372 [[!table data="""
373 opcode | Description | pseudocode | Extension |
374 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
375 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
376 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
377 FROOT | x power 1/y | rd = pow(rs1, 1/rs2) | ZftransAdv |
378 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
379 """]]
380
381 ## List of 1-arg transcendental opcodes
382
383 [[!table data="""
384 opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
386 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
387 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
388 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log2 | rd = log(2, rs1) | Zftrans |
390 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
391 FLOG1P | log plus 1 | rd = log(e, 1 + rs1) | ZftransExt |
392 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
393 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
394 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
395 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
396 """]]
397
398 ## List of 1-arg trigonometric opcodes
399
400 [[!table data="""
401 opcode | Description | pseudo-code | Extension |
402 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
403 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
404 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
405 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
406 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
407 FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
411 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
412 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
413 FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
414 FSINH | hyperbolic sin (radians) | rd = sinh(rs1) | Zfhyp |
415 FCOSH | hyperbolic cos (radians) | rd = cosh(rs1) | Zfhyp |
416 FTANH | hyperbolic tan (radians) | rd = tanh(rs1) | Zfhyp |
417 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
418 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
419 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
420 """]]
421
422 # Subsets
423
The full set is based on the Khronos OpenCL opcodes. If implemented entirely it would be too much for both Embedded and 3D.
425
The subsets are organised by hardware complexity and need (3D, HPC); however, because synthesis produces inaccurate results at the range limits, the less common subsets are still required for IEEE754 HPC.
427
MALI Midgard, an embedded / mobile 3D GPU, for example has only the following opcodes:
429
430 E8 - fatan_pt2
431 F0 - frcp (reciprocal)
432 F2 - frsqrt (inverse square root, 1/sqrt(x))
433 F3 - fsqrt (square root)
434 F4 - fexp2 (2^x)
435 F5 - flog2
436 F6 - fsin
437 F7 - fcos
438 F9 - fatan_pt1
439
These are in FP32 and FP16 only: no FP64 hardware, at all.
441
442 Vivante Embedded/Mobile 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has the following:
443
444 sin, cos2pi
445 cos, sin2pi
446 log2, exp
447 sqrt and rsqrt
448 recip.
449
450 It also has fast variants of some of these, as a CSR Mode.
451
Also a general point: customised optimised hardware targeting FP32 3D with less accuracy simply can be used neither for IEEE754 nor for FP64 (except as a starting point for hardware or software driven Newton-Raphson or other iterative method).
453
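As a sketch of that Newton-Raphson escape route (the deliberately crude seed below stands in for a low-accuracy 3D-class rsqrt result):

```python
import math

def rsqrt_refine(x, seed, iterations=4):
    """Refine an approximate 1/sqrt(x) by Newton-Raphson.

    Each step roughly doubles the number of correct bits, so a
    handful of iterations takes a rough 3D-class estimate to
    full double precision.
    """
    r = seed
    for _ in range(iterations):
        r = r * (1.5 - 0.5 * x * r * r)
    return r

# Crude seed for 1/sqrt(2); a real 3D rsqrt would be far closer.
approx = rsqrt_refine(2.0, seed=0.7)
```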
454 Also in cost/area sensitive applications even the extra ROM lookup tables for certain algorithms may be too costly.
455
456 These wildly differing and incompatible driving factors lead to the subset subdivisions, below.
457
458 ## Zftrans
459
Zftrans contains the minimum standard transcendentals best suited to 3D: log2, exp2, recip, rsqrt. They are also the minimum subset for synthesising log10, exp10, expm1, log1p, the hyperbolic trigonometric functions (sinh and so on).
461
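Those synthesis relationships can be sketched in software (Python floats standing in for the hardware ops; exp10, log10 and sinh here are illustrative helpers built only from exp2/log2-class primitives, not proposed opcodes):

```python
import math

# Loop-invariant constants a compiler would materialise once.
LOG2_10 = math.log2(10.0)
LOG2_E = math.log2(math.e)

def exp10(x):
    # FEXP10 synthesised from FEXP2: 10**x == 2**(x * log2(10))
    return 2.0 ** (x * LOG2_10)

def log10(x):
    # FLOG10 synthesised from FLOG2: log10(x) == log2(x) / log2(10)
    return math.log2(x) / LOG2_10

def sinh(x):
    # Zfhyp-class function synthesised from an exp2-based EXP.
    e = 2.0 ** (x * LOG2_E)
    return (e - 1.0 / e) / 2.0
```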
462 ## ZftransExt
463
LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
465
466 These are extra transcendental functions that are useful, not generally needed for 3D, however for Numerical Computation they may be useful.
467
Although they can be synthesised using Zftrans (LOG2 multiplied by a constant), there is both a performance penalty as well as an accuracy penalty towards the limits, which for IEEE754 compliance is unacceptable. In particular, LOG(1+rs1) in hardware
may give much better accuracy at the lower end (very small rs1) than synthesising it via LOG.
470
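The EXPM1 case behaves the same way (Python sketch using the C library's expm1):

```python
import math

x = 1e-18      # far below double-precision epsilon

# Synthesised: exp(x) rounds to exactly 1.0, so subtracting 1.0
# afterwards returns 0.0 -- the answer is lost entirely.
synthesised = math.exp(x) - 1.0

# A dedicated EXPM1 keeps full relative accuracy: ~x for tiny x.
dedicated = math.expm1(x)
```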
471 Their forced inclusion would be inappropriate as it would penalise embedded systems with tight power and area budgets. However if they were completely excluded the HPC applications would be penalised on performance and accuracy.
472
473 Therefore they are their own subset extension.
474
475 ## Ztrigpi vs Ztrignpi
476
* **Ztrigpi**: trigonometric xxx-pi variants: sinpi, cospi, tanpi
* **Ztrignpi**: trigonometric non-xxx-pi variants: sin, cos, tan
479
480 Ztrignpi are the basic trigonometric functions through which all others could be synthesised, and they are typically the base trigonometrics provided by GPUs for 3D, warranting their own subset.
481
482 However as can be correspondingly seen from other sections, there is an accuracy penalty for doing so which will not be acceptable for IEEE754 compliance.
483
In the case of the Ztrigpi subset, these are commonly used in for-loops with a power-of-two number of subdivisions, and the cost of multiplying by PI inside each loop (or of cumulative addition, resulting in cumulative errors) is not acceptable.
485
In CORDIC, for example, the multiplication by PI may be moved outside of the hardware algorithm as a loop invariant, with no power or area penalty.
487
488 Thus again, the same general argument applies to give Ztrignpi and Ztrigpi as subsets.
489
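A related accuracy point, sketched in Python (sinpi below is a behavioural stand-in for a hypothetical FSINPI): because float(pi) is not exactly pi, the synthesised form can never return the exact zeros that a true sinpi can.

```python
import math

def sinpi(x):
    # Behavioural model of FSINPI. The argument reduction is exact
    # in floating point, so the exact zeros of sin(pi*x) at integer
    # x can be honoured exactly.
    r = x - 2.0 * round(x / 2.0)     # exact reduction to [-1, 1]
    if r in (-1.0, 0.0, 1.0):
        return 0.0                   # exact zeros of sin(pi * x)
    return math.sin(math.pi * r)

# Synthesised sin(pi * x) cannot return 0.0 at x = 1.0, because
# the multiply uses float(pi), which is not exactly pi:
synthesised = math.sin(math.pi * 1.0)
```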
490 ## Zarctrigpi and Zarctrignpi
491
* **Zarctrigpi**: arc-trigonometric a-xxx-pi variants: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trigonometric non-a-xxx-pi variants: atan2, asin, acos
494
These are extra trigonometric functions that are useful in some applications, but even for 3D GPUs, particularly embedded and mobile class GPUs, they are not so common, and so are synthesised there.
496
497 Although they can be synthesised using Ztrigpi and Ztrignpi, there is, once again, both a performance penalty as well as an accuracy penalty towards the limits, which for IEEE754 compliance is unacceptable, yet is acceptable for 3D.
498
499 Therefore they are their own subset extension.
500
501 ## Zfhyp
502
These are the hyperbolic/inverse-hyperbolic functions: sinh, cosh, tanh, asinh, acosh, atanh. Their use in 3D is limited.
504
505 They can all be synthesised using LOG, SQRT and so on, so depend on Zftrans.
506 However, once again, at the limits of the range, IEEE754 compliance becomes impossible, and thus a hardware implementation may be required.
507
508 HPC and high-end GPUs are likely markets for these.
509
510 ## ZftransAdv
511
512 Cube-root, Power, Root: these are simply much more complex to implement in hardware, and typically will only be put into HPC applications.
513
514 Root is included as well as Power because at the extreme ranges one is more accurate than the other.
515
## Zfrsqrt

Reciprocal square-root is kept in its own separate subset as it is desirable on its own to other implementors.
517
518 # Synthesis, Pseudo-code ops and macro-ops
519
520 The pseudo-ops are best left up to the compiler rather than being actual
521 pseudo-ops, by allocating one scalar FP register for use as a constant
522 (loop invariant) set to "1.0" at the beginning of a function or other
523 suitable code block.
524
525 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
526 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
527
528 FATANPI example pseudo-code:
529
    lui t0, 0x3F800 // upper bits of f32 1.0
    fmv.w.x ft0, t0 // move the integer bit-pattern into the FP register
    fatan2pi.s rd, rs1, ft0
533
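The sequence relies on the identity atan(x) = atan2(x, 1.0). A quick numerical check (Python sketch; fatanpi is an illustrative model of the fused sequence, not a real opcode):

```python
import math

def fatanpi(x):
    # Model of the pseudo-op sequence above: synthesise ATANPI from
    # ATAN2PI by pinning the second operand to the constant 1.0.
    # (atan2pi itself is modelled here as atan2 divided by pi.)
    return math.atan2(x, 1.0) / math.pi
```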
534 Hyperbolic function example (obviates need for Zfhyp except for
535 high-performance or correctly-rounding):
536
537 ASINH( x ) = ln( x + SQRT(x**2+1))
538
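The identity is easy to verify numerically (Python sketch; the negative-x branch uses the function's odd symmetry to avoid the cancellation that makes a naive synthesis inaccurate there):

```python
import math

def asinh_synth(x):
    # ASINH(x) = ln(x + sqrt(x**2 + 1)); fine as-is for x >= 0.
    # For x < 0 the direct formula suffers cancellation between
    # x and sqrt(x**2 + 1), so mirror via odd symmetry instead.
    if x < 0.0:
        return -asinh_synth(-x)
    return math.log(x + math.sqrt(x * x + 1.0))
```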
539 # Evaluation and commentary
540
541 This section will move later to discussion.
542
543 ## Reciprocal
544
545 Used to be an alias. Some implementors may wish to implement divide as y times recip(x).
546
547 Others may have shared hardware for recip and divide, others may not.
548
549 To avoid penalising one implementor over another, recip stays.
550
551 ## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
552
RISC principle says "exclude LOG because it's covered by LOG1P plus an ADD".
Research is needed to ensure that implementors are not compromised by such
a decision:
556 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
557
558 > > correctly-rounded LOG will return different results than LOGP1 and ADD.
559 > > Likewise for EXP and EXPM1
560
561 > ok, they stay in as real opcodes, then.
562
563 ## ATAN / ATAN2 commentary
564
565 Discussion starts here:
566 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
567
568 from Mitch Alsup:
569
570 would like to point out that the general implementations of ATAN2 do a
571 bunch of special case checks and then simply call ATAN.
572
573 double ATAN2( double y, double x )
574 { // IEEE 754-2008 quality ATAN2
575
576 // deal with NANs
577 if( ISNAN( x ) ) return x;
578 if( ISNAN( y ) ) return y;
579
580 // deal with infinities
581 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
582 if( x == +∞ ) return copysign( 0.0, y );
583 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
584 if( x == -∞ ) return copysign( π, y );
585 if( |y|== +∞ ) return copysign( π/2, y );
586
587 // deal with signed zeros
588 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
589 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
590 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
591
592 // calculate ATAN2 textbook style
593 if( x > 0.0 ) return ATAN( |y / x| );
594 if( x < 0.0 ) return π - ATAN( |y / x| );
595 }
596
597
598 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
599 a constant and then call/use ATAN2.
600
601 When one considers an implementation of ATAN, one must consider several
602 ranges of evaluation::
603
604 x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
605 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
606 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
607
608 I should point out that the add/sub of π/2 can not lose significance
609 since the result of ATAN(1/x) is bounded 0..π/2
610
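A quick numerical check of the quoted range reduction (an editorial sketch, not part of the original message; the negative branch is written as the odd-symmetry mirror of the positive one):

```python
import math

def atan_reduced(x):
    # Fold |x| >= 1 into atan(1/x), where the +/- pi/2 adjustment
    # cannot lose significance (atan(1/x) is bounded by pi/2).
    if x >= 1.0:
        return math.pi / 2.0 - math.atan(1.0 / x)
    if x <= -1.0:
        return -math.pi / 2.0 - math.atan(1.0 / x)
    return math.atan(x)
```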
The bottom line is that I think you are choosing to make too many of
these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
614
615 --------------------------------------------------------
616
We therefore, I think, have a case for bringing back ATAN and including ATAN2.
618
619 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
620
621 (that is the hypothesis, to be evaluated for correctness. feedback requested).
622
This is because we cannot compromise or prioritise one platform's speed/accuracy over another: it is neither reasonable nor desirable to penalise one implementor over another.
624
Thus, all implementors, to keep interoperability, must have both opcodes and may choose, at the architectural and routing level, which one to implement in terms of the other.
626
627 Allowing implementors to choose to add either opcode and let traps sort it out leaves an uncertainty in the software developer's mind: they cannot trust the hardware, available from many vendors, to be performant right across the board.
628
629 Standards are a pig.
630
631 ---
632
633 I might suggest that if there were a way for a calculation to be performed
634 and the result of that calculation chained to a subsequent calculation
635 such that the precision of the result-becomes-operand is wider than
636 what will fit in a register, then you can dramatically reduce the count
of instructions in this category while retaining
acceptable accuracy:
640
641 z = x / y
642
643 can be calculated as::
644
645 z = x * (1/y)
646
647 Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
accurate, but GPUs want speed, and
1/y is fully pipelined (F32) while x/y cannot be (at reasonable area). It
651 is also not "that inaccurate" displaying 0.625-to-0.52 ULP.
652
653 Given that one has the ability to carry (and process) more fraction bits,
654 one can then do high precision multiplies of π or other transcendental
655 radixes.
656
657 And GPUs have been doing this almost since the dawn of 3D.
658
659 // calculate ATAN2 high performance style
660 // Note: at this point x != y
661 //
662 if( x > 0.0 )
663 {
664 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
665 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
666 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
667 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
668 }
669 if( x < 0.0 )
670 {
671 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
672 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
673 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
674 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
675 }
676
677 This way the adds and subtracts from the constant are not in a precision
678 precarious position.
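The x times (1/y) substitution discussed above is easy to model in software (Python doubles standing in here for the fully-pipelined FP32 reciprocal; the extra rounding step is what produces the quoted fraction-of-a-ULP error):

```python
import math

def div_via_recip(x, y):
    # GPU-style divide: one reciprocal, one multiply -- two rounding
    # steps instead of one, so up to ~1 ULP of extra error versus a
    # correctly-rounded x / y.
    return x * (1.0 / y)

# Round-trip error of recip-then-multiply, measured at 1.0:
# bounded by one ULP (2**-52 at this magnitude).
err = abs(div_via_recip(1.0, 49.0) * 49.0 - 1.0)
```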