# External RFC ls012: Discuss priorities of Libre-SOC Scalar(Vector) ops

**Date: 2023apr10. v2 released: TODO**

* Funded by NLnet Grants under EU Horizon Grants 101069594 825310
* <https://git.openpower.foundation/isa/PowerISA/issues/121>
* <https://bugs.libre-soc.org/show_bug.cgi?id=1051>
* <https://bugs.libre-soc.org/show_bug.cgi?id=1052>
* <https://bugs.libre-soc.org/show_bug.cgi?id=1054>

The purpose of this RFC is:

* to give a full list of upcoming Scalar opcodes developed by Libre-SOC
  (being cognisant that *all* of them are Vectoriseable)
* to give OPF Members and non-Members alike the opportunity to comment and get
  involved early in RFC submission
* to formally agree a priority order on an iterative basis with new versions
  of this RFC,
* to decide which ones should be in the EXT022 Sandbox, which in EXT0xx, which
  in EXT2xx, and which should not be proposed at all,
* to keep readers summarily informed of ongoing RFC submissions, with new versions
  of this RFC,
* for IBM (in their capacity as Allocator of Opcodes)
  to get a clear advance picture of Opcode Allocation
  *prior* to submission

As this is a Formal ISA RFC the evaluation shall ultimately define
(in advance of the actual submission of the instructions themselves)
which instructions will be submitted over the next 1-18 months.

*It is expected that readers visit and interact with the Libre-SOC
resources in order to do due-diligence on the prioritisation
evaluation. Otherwise the ISA WG is overwhelmed by "drip-fed" RFCs
that may turn out not to be useful, against a background of having
no guiding overview or pre-filtering, and everybody's precious time
is wasted. Also note that the Libre-SOC Team, being funded by NLnet
under Privacy and Enhanced Trust Grants, are **prohibited** from signing
Commercial-Confidentiality NDAs, as doing so is a direct conflict of
interest with their funding body's Charitable Foundation Status and remit,
and therefore the **entire** set of almost 150 new SFFS instructions
can only go via the External RFC Process. Also be advised and aware
that "Libre-SOC" != "RED Semiconductor Ltd". The two are completely
**separate** organisations*.

It is worth bearing in mind during evaluation that every "Defined Word" may
or may not be Vectoriseable, but that every "Defined Word" should have
merits on its own, not just when Vectorised. An example of a borderline
Vectoriseable Defined Word is `mv.swizzle`, which only really becomes
high-priority for Audio/Video, Vector GPU and HPC workloads and has
less merit as a Scalar-only operation, yet when SVP64Single-Prefixed
can be part of an atomic Compare-and-Swap sequence.

Although one of the top world-class ISAs, Power ISA Scalar (SFFS) has
not been significantly advanced in 12 years: IBM's primary focus has
understandably been on PackedSIMD VSX. Unfortunately, with VSX being
914 instructions and 128-bit, it is far too much for any new team to
consider (10+ years development effort) and far outside of Embedded or
Tablet/Desktop/Laptop power budgets. Thus bringing Power Scalar up-to-date
to modern standards *and on its own merits* is a reasonable goal, and
the advantage of the reduced focus is that SFFS remains RISC-paradigm,
with lessons being learned from other ISAs from the intervening years.
Good examples here include `bmask`.

SVP64 Prefixing - also known by the terms "Zero-Overhead-Loop-Prefixing"
as well as "True-Scalable-Vector Prefixing" - also literally brings new
dimensions to the Power ISA. Thus when adding new Scalar "Defined Words"
their value when Vector-Prefixed, *as well as* when SVP64Single-Prefixed,
unavoidably and simultaneously has to be taken into consideration.

**Target areas**

Whilst entirely general-purpose, there are some categories that these
instructions are targeting: Bit-manipulation, Big-integer, Cryptography,
Audio/Visual, High-Performance Compute, GPU workloads and DSP.

**Instruction count guide and approximate priority order**

* 6 - SVP64 Management [[ls008]] [[ls009]] [[ls010]]
* 5 - CR weirds [[sv/cr_int_predication]]
* 4 - INT<->FP mv [[ls006]]
* 19 - GPR LD/ST-PostIncrement-Update (saves hugely in hot-loops) [[ls011]]
* ~12 - FPR LD/ST-PostIncrement-Update (ditto) [[ls011]]
* 11 - GPR LD/ST-Shifted-PostIncrement-Update (saves hugely in hot-loops) [[ls011]]
* 4 - FPR LD/ST-Shifted-PostIncrement-Update (ditto) [[ls011]]
* 26 - GPR LD/ST-Shifted (again saves hugely in hot-loops) [[ls004]]
* 11 - FPR LD/ST-Shifted (ditto) [[ls004]]
* 2 - Float-Load-Immediate (always saves one LD L1/2/3 D-Cache op) [[ls002]]
* 5 - Big-Integer Chained 3-in 2-out (64-bit Carry) [[ls003]] [[sv/biginteger]]
* 6 - Bitmanip LUT2/3 operations (high cost, high reward) [[sv/bitmanip]]
* 1 - fclass (Scalar variant of xvtstdcsp) [[sv/fclass]]
* 5 - Audio-Video [[sv/av_opcodes]]
* 2 - Shift-and-Add (mitigates LD-ST-Shift; Cryptography e.g. twofish) [[ls004]]
* 2 - BMI group [[ls014]] [[sv/vector_ops]]
* 2 - GPU swizzle [[sv/mv.swizzle]]
* 9 - FP DCT/FFT Butterfly (2/3-in 2-out) [[ls016]]
* ~9 - Integer DCT/FFT Butterfly [[ls016]] <https://bugs.libre-soc.org/show_bug.cgi?id=1028>
* 18 - Trigonometric (1-arg) [[openpower/transcendentals]]
* 15 - Transcendentals (1-arg) [[openpower/transcendentals]]
* 25 - Transcendentals (2-arg) [[openpower/transcendentals]]

Summary tables, sorted by different categories, are provided below. Additional
columns (and tables) may be requested as part of update revisions to this RFC.

\newpage{}

# Target Area summaries

Please note that there are some instructions developed thanks to
NLnet funding that have not been included here for assessment. Examples
include `pcdec` and the Galois Field arithmetic operations. From a purely
practical perspective, due to the sheer quantity, the lower-priority
instructions were simply left out. However they remain in the Libre-SOC
resources.

Some of these SFFS instructions appear to be duplicates of VSX.
A frequent argument is that if instructions are already in VSX they
should not be added to SFFS, especially if they are nominally the same.
The logic that this effectively damages performance of an SFFS-only
implementation was raised earlier; however, there is a more subtle reason
why the instructions are needed.

Future versions of SVP64 and SVP64Single are expected to be developed
by future Power ISA Stakeholders on top of VSX. The decisions made
there about the meaning of Prefixed Vectorised VSX may be *completely
different* from those made for Prefixed SFFS instructions. At that
point the lack of SFFS equivalents would penalise SFFS implementors in a
much more severe way, effectively expecting them and SFFS programmers to
work with a non-orthogonal paradigm, to their detriment. The solution
is to give the SFFS Subset the space and respect that it deserves and
allow it to stand alone on its own merits.

## SVP64 Management instructions

These without question have to go in EXT0xx. Future extended variants,
bringing even more powerful capabilities, can be followed up later with
EXT1xx prefixed variants, which is not possible if they are placed in EXT2xx.
*Only `svstep` is actually Vectoriseable*; all other Management
instructions are UnVectoriseable. PO1-Prefixed examples include
adding `psvshape` in order to support both Inner and Outer Product Matrix
Schedules, by providing the option to directly reverse the order of the
triple loops. Outer is used for standard Matrix Multiply (on top of a
standard MAC or FMAC instruction), but Inner is required for Warshall
Transitive Closure (on top of a cumulatively-applied max instruction).

Except for `svstep`, which is Vectoriseable, the Management Instructions
themselves are all 32-bit Defined Words (Scalar Operations), so
PO1-Prefixing is perfectly reasonable. The SVP64 Management instructions,
of which there are only six, are all 5- or 6-bit XO, meaning that the opcode
space they take up in EXT0xx is not alarmingly high for their intrinsic
strategic value.

## Transcendentals

Found at [[openpower/transcendentals]], these subdivide into: high
priority for accelerating general-purpose and High-Performance Compute;
specialist 3D GPU operations suited to 3D visualisation; and low-priority
less common instructions where IEEE754 full bit-accuracy is paramount.
In 3D GPU scenarios, for example, even 12-bit accuracy can be overkill,
but for HPC Scientific scenarios 12-bit would be disastrous.

There are a **lot** of operations here, and they also bring the Power
ISA up-to-date to IEEE754-2019. Fortunately the number of critical
instructions is quite low, but the caveat is that if those operations
are utilised to synthesise other IEEE754 operations (divide by `pi` for
example) full bit-level accuracy (a hard requirement for IEEE754) is lost.

Also worth noting is that the Khronos Group defines minimum acceptable
bit-accuracy levels for 3D Graphics: these are **nowhere near** the full
accuracy demanded by IEEE754. The reason for the Khronos definitions is
a massive reduction, often four-fold, in power consumption and gate count,
given that 3D Graphics simply has no need for full accuracy.

*For 3D GPU markets this definitely needs addressing.*

These instructions are therefore only likely to be proposed if a Stakeholder
comes forward and needs them. If for example RED Semiconductor Ltd had a
customer requiring a GPS/GNSS Correlator DSP then the SIN/COS Transcendentals
would become a high priority but still be optional, as DSP (and 3D) is still
specialist.

## Audio/Video

Found at [[sv/av_opcodes]], these do not require Saturated variants
because Saturation is added via [[sv/svp64]] (Vector Prefixing) and
via [[sv/svp64_single]] (Scalar Prefixing). This is important to note for
Opcode Allocation because placing these operations in the UnVectoriseable
areas would irredeemably damage their value. Unlike PackedSIMD ISAs
the actual number of AV Opcodes is remarkably small once the usual
cascading option multipliers (SIMD width, bitwidth, saturation,
HI/LO) are abstracted out to RISC-paradigm Prefixing, leaving just
absolute-diff-accumulate, min-max, average-add etc. as "basic primitives".
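
For clarity, a minimal Python sketch of those scalar primitives follows.
It is illustrative only: operand names, widths and rounding behaviour are
assumptions, not the [[sv/av_opcodes]] pseudocode, and saturation and
vectorisation are deliberately absent because SVP64/SVP64Single prefixing
supplies them.

```
def avg_add(a, b):
    """Rounding average of two unsigned integers (average-add)."""
    return (a + b + 1) >> 1

def abs_diff(a, b):
    """Absolute difference."""
    return a - b if a >= b else b - a

def abs_diff_accumulate(acc, a, b):
    """Absolute-difference-accumulate, the core of SAD motion-estimation."""
    return acc + abs_diff(a, b)
```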

The min/max set are under their own RFC, [[ls013]]. They are of sufficiently
high priority: fmax requires an astounding 32 SFFS instructions to emulate.

## Twin-Butterfly FFT/DCT/DFT for DSP/HPC/AI/AV

The number of uses in Computer Science for DCT, NTT, FFT and DFT
is astonishing. The Wikipedia page lists over a hundred separate and
distinct areas: Audio, Video, Radar, Baseband processing, AI, Reed-Solomon
Error Correction - the list goes on and on. ARM has special dedicated
Integer Twin-butterfly instructions. TI's MSP Series DSPs have had FFT
inner-loop support for over 30 years. Qualcomm's Hexagon VLIW Baseband
DSP can do full FFT triple loops in one VLIW group.

It should be pretty clear that this is high priority.

With SVP64 [[sv/remap]] providing the Loop Schedules it falls to
the Scalar side of the ISA to add the prerequisite "Twin Butterfly"
operations, typically performing for example one multiply but in-place
subtracting that product from one operand and adding it to the other.
The *in-place* aspect is strategically extremely important for significant
reductions in Vectorised register usage, particularly for DCT.
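
As an illustration only (variable names are invented here; the actual
pseudocode is in [[ls016]]), a radix-2 FFT twin butterfly performs a single
multiply and writes both the sum and the difference back in place:

```
def twin_butterfly_fft(a, b, w):
    """One multiply by the twiddle factor w; sum and difference are
    conceptually written back over the a/b register pair (in-place)."""
    t = b * w
    return a + t, a - t
```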

## CR Weird group

Outlined in [[sv/cr_int_predication]], these instructions massively save
on CR-Field instruction count. Conversions from multi-bit to single-bit and
vice-versa, normally requiring several CR-ops (crand, crxor), are done in
one single instruction. The reason for their addition is down to SVP64
overloading CR Fields as Vector Predicate Masks. Reducing instruction count
in hot-loops is considered high priority.

An additional need is to perform popcount on CR Field bit vectors, but adding
such instructions to the *Condition Register* side was deemed to be far
too much. Therefore, priority was given instead to transferring several
CR Field bits into GPRs, whereupon the full set of Standard Scalar GPR
Logical Operations may be used. This strategy has the side-effect of
keeping the CRweird group down to only five instructions.
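
A sketch of the strategy just described (illustrative Python only; the
actual transfer instructions are defined in [[sv/cr_int_predication]]):
once the relevant CR Field bits have been moved into a GPR, an ordinary
popcount or any other GPR logical operation replaces what would otherwise
be a long crand/crxor sequence.

```
def predicate_popcount(cr_bits_in_gpr):
    """Count set predicate bits using plain GPR arithmetic."""
    return bin(cr_bits_in_gpr & ((1 << 64) - 1)).count("1")
```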

## Big-integer Math

[[sv/biginteger]] has always been a high priority area for commercial
applications, privacy, Banking, and HPC Numerical Accuracy: libgmp,
as well as cryptographic uses in Asymmetric Ciphers. poly1305
and ed25519 are finding their way into everyday use via OpenSSL.

A very early variant of the Power ISA had a 32-bit Carry-in Carry-out
SPR. Its removal from subsequent revisions is regrettable. An alternative
concept is to add six explicit 3-in 2-out operations that, on close
inspection, always turn out to be supersets of *existing Scalar
operations* that discard upper or lower DWords, or parts thereof.

*Thus it is critical to note that not one single one of these operations
expands the bitwidth of any existing Scalar pipelines*.

The `dsld` instruction for example merely places additional LSBs into the
64-bit shift (64-bit carry-in), and then places the (normally discarded)
MSBs into the second output register (64-bit carry-out). It does **not**
require a 128-bit shifter to replace the existing Scalar Power ISA
64-bit shifters.
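
The following Python model is one interpretation of that description,
assuming SHLD-style semantics; the authoritative pseudocode is in
[[ls003]] and may select bits differently. The point it illustrates is
that only the existing 64-bit shifter datapath is exercised, with the
carry-in filling the vacated LSBs and the carry-out receiving the
normally-discarded MSBs.

```
MASK64 = (1 << 64) - 1

def dsld_model(ra, rb, rc):
    """3-in 2-out double-width shift left: RC is the 64-bit carry-in,
    the second result is the 64-bit carry-out."""
    sh = rb & 63
    v = ((ra << 64) | (rc & MASK64)) << sh   # conceptual 128-bit concatenation
    rt = (v >> 64) & MASK64                  # the ordinary 64-bit shift result
    rs = (v >> 128) & MASK64                 # MSBs that would normally be discarded
    return rt, rs
```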

The reduction in instruction count these operations bring, in critical
hot loops, is remarkably high, to the extent where a Scalar-to-Vector
operation of *arbitrary length* becomes just the one Vector-Prefixed
instruction.

Whilst these are 5-6 bit XO, their utility is considered of high strategic
value and as such they are strongly advocated to be in EXT04. The alternative
is to bring back a 64-bit Carry SPR, but how that could retrospectively be
applied to pre-existing Scalar Power ISA multiply, divide, and shift
operations at this late stage of maturity of the Power ISA is an entire
area of research on its own, deemed unlikely to be achievable.

Note: none of these instructions are in VSX. They are a different paradigm
and have more in common with their x86 equivalents.

## fclass and GPR-FPR moves

[[sv/fclass]] - just one instruction. With SFFS being locked down to
exclude VSX, and there being no desire within the nascent OpenPOWER
ecosystem outside of IBM to implement the VSX PackedSIMD paradigm, it
becomes necessary to upgrade SFFS such that it is stand-alone capable. One
omission, based on the assumption that VSX would always be present, is an
equivalent to `xvtstdcsp`.

Similar arguments apply to the GPR-FPR move operations, proposed in
[[ls006]], with the opportunity taken to add rounding modes present
in other ISAs that Power ISA VSX PackedSIMD does not have. Javascript
rounding, one of the worst offenders of Computer Science, requires a
phenomenal 35 instructions with *six branches* to emulate in the Power
ISA! For desktop as well as Server HTML/JS back-end execution of
Javascript this becomes an obvious priority, recognised already by ARM
as just one example.

Whilst some of these instructions have VSX equivalents they must not
be excluded on that basis. SVP64/VSX may have a different meaning from
SVP64/SFFS, i.e. the two *Vectorised* instructions may not be equivalent.

## Bitmanip LUT2/3

These LUT2/3 operations are high cost, high reward. Outlined in
[[sv/bitmanip]], the simplest ones already exist in PackedSIMD VSX:
`xxeval`. The same reasoning applies as to fclass: SFFS needs to be
stand-alone on its own merits, and should an implementor choose not to
implement any aspect of PackedSIMD VSX the performance of their product
should not be penalised for making that decision.
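
For readers unfamiliar with LUT3 operations, the sketch below shows the
general principle (the same idea as VSX `xxeval` or x86 `vpternlog`):
each result bit is looked up in an 8-bit immediate truth-table, indexed
by the corresponding bits of the three operands. Operand ordering and the
CR-Field variants in [[sv/bitmanip]] will differ; this is purely
illustrative.

```
def lut3(imm8, a, b, c, width=64):
    """Bitwise ternary LUT: imm8 is the 8-entry truth table."""
    r = 0
    for i in range(width):
        idx = (((a >> i) & 1) << 2) | (((b >> i) & 1) << 1) | ((c >> i) & 1)
        r |= ((imm8 >> idx) & 1) << i
    return r

# example: imm8=0xE8 selects the majority function (carry-save addition)
```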

With Predication being such a high priority in GPUs and HPC, CR Field
variants of Ternary and Binary LUT instructions were considered high
priority, and again, just as in the CRweird group, the opportunity was
taken to work on *all* bits of a CR Field rather than just one bit as
is done with the existing CR operations crand, cror etc.

The other high strategic value instruction is `grevlut` (and `grevluti`,
which can generate a remarkably large number of regular-patterned magic
constants). The grevlut set requires on the order of 20,000 gates but
provides an astonishing plethora of innovative bit-permuting instructions
never seen in any other ISA.
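
To give a flavour of the bit-permuting capability, below is a sketch of
the plain generalised-reverse (grev) pattern on which `grevlut` builds.
The LUT2 aspect that generates the magic constants is omitted; see
[[sv/bitmanip]] for the actual specification.

```
MASK64 = (1 << 64) - 1

def grev64(x, k):
    """Generalised reverse: each set bit of k swaps adjacent blocks of
    that size (1,2,4,...,32). k=63 bit-reverses, k=56 byte-reverses."""
    steps = [
        (0x5555555555555555, 1), (0x3333333333333333, 2),
        (0x0F0F0F0F0F0F0F0F, 4), (0x00FF00FF00FF00FF, 8),
        (0x0000FFFF0000FFFF, 16), (0x00000000FFFFFFFF, 32),
    ]
    for m, sh in steps:
        if k & sh:
            x = ((x & m) << sh) | ((x >> sh) & m)
    return x & MASK64
```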

The downside of all of these instructions is the extremely low XO bit
requirement: 2-3 bit XO, due to the large immediates *and* the number of
operands required. The LUT3 instructions are already compacted down to
"Overwrite" variants. (By contrast the Float-Load-Immediate instructions
have a much larger XO because, despite having a 16-bit immediate, only one
Register Operand is needed.)

Realistically these high-value instructions should be proposed in EXT2xx,
where their XO cost does not overwhelm EXT0xx.

## (f)mv.swizzle

[[sv/mv.swizzle]] is dicey. It is a 2-in 2-out operation whose value
as a Scalar instruction is limited *except* if combined with `cmpi` and
SVP64Single Predication, whereupon the end result is the RISC-synthesis
of Compare-and-Swap, in two instructions.

Where this instruction comes into its full value is when Vectorised.
3D GPU and HPC numerical workloads astonishingly contain between 10 and 15%
swizzle operations: accessing YYZ or XY of an XYZW Quaternion, or
rebalancing ARGB pixel data. The usage is so high that 3D GPU ISAs make
Swizzle a first-class priority in their VLIW words. Even 64-bit Embedded
GPU ISAs have a staggering 24 bits dedicated to 2-operand Swizzle.
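
For readers unfamiliar with the concept, a swizzle simply selects,
reorders and possibly duplicates lanes of a short vector. The Python
sketch below uses invented names; the actual 2-in 2-out encoding in
[[sv/mv.swizzle]] differs.

```
def swizzle(vec, pattern):
    """Select/reorder/duplicate XYZW lanes, e.g. pattern "YYZW"."""
    lane = {"X": 0, "Y": 1, "Z": 2, "W": 3}
    return [vec[lane[c]] for c in pattern]

print(swizzle([1.0, 2.0, 3.0, 4.0], "YYZW"))   # [2.0, 2.0, 3.0, 4.0]
```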

So as not to radicalise the Power ISA, the Libre-SOC team decided to
introduce mv Swizzle operations, which can always be Macro-op fused
in exactly the same way that ARM SVE predicated-move extends 3-operand
"overwrite" opcodes to full independent 3-in 1-out.

## BMI (bit-manipulation) group

Whilst the [[sv/vector_ops]] instructions are only two in number, in
reality the `bmask` instruction has a Mode field allowing it to cover
**24** instructions, more than ARM, Intel or AMD have added to any of
their CPUs. Analysis of the BMI sets of these CPUs shows simple
patterns that can greatly simplify both Decode and implementation. These
are sufficiently commonly used, saving instruction count regularly,
that they justify going into EXT0xx.
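
The "simple patterns" referred to above are the kind familiar from Intel
BMI1/TBM: small combinations of x-1, x+1, -x and ~x with AND/OR/XOR. A few
standard textbook identities are sketched below; this is not the `bmask`
pseudocode, whose Mode field ([[ls014]]) enumerates the full set.

```
MASK64 = (1 << 64) - 1

def blsr(x):    return x & (x - 1)              # clear lowest set bit
def blsi(x):    return x & -x & MASK64          # isolate lowest set bit
def blsmsk(x):  return (x ^ (x - 1)) & MASK64   # mask up to and including it
```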

The other instruction is `cprop` - Carry-Propagation - which takes
the P and Q from carry-propagation algorithms and generates carry
look-ahead. It greatly increases the efficiency of arbitrary-precision
integer arithmetic by combining what would otherwise be half a dozen
instructions into one. However, unlike `bmask`, it is still not a huge
priority, so it is probably best placed in EXT2xx.

## Float-Load-Immediate

Very easily justified. As explained in [[ls002]], these always save one
LD L1/2/3 D-Cache memory-lookup operation, by virtue of the Immediate
FP value being on the I-Cache side. It is such a high priority that
these instructions are easily justifiable for adding into EXT0xx, despite
requiring a 16-bit immediate. By designing the second-half instruction
as a Read-Modify-Write it saves on XO bit-length (only 5 bits), and
can be macro-op fused with its first half to store a full IEEE754 FP32
immediate into a register.
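
A hedged model of the intended two-instruction sequence follows: the exact
bit-split, instruction names and any rounding are defined in [[ls002]],
not here. It shows how two 16-bit immediates reconstruct a full IEEE754
FP32 value without any D-Cache access.

```
import struct

def fp32_from_two_halves(hi16, lo16):
    """First instruction writes the upper 16 bits of an FP32 pattern;
    the second (read-modify-write) supplies the lower 16 bits."""
    bits = ((hi16 & 0xFFFF) << 16) | (lo16 & 0xFFFF)
    return struct.unpack(">f", bits.to_bytes(4, "big"))[0]

print(fp32_from_two_halves(0x4049, 0x0FDB))   # ~3.14159274 (pi as FP32)
```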

There is little point in putting these instructions into EXT2xx. Their
very benefit and inherent value *is* as 32-bit instructions, not 64-bit
ones. Likewise there is less value in taking up EXT1xx Encoding space
because EXT1xx only brings an additional 16 bits (approx) to the table,
and that is provided already by the second-half instruction.

Thus they qualify as both high priority and also EXT0xx candidates.

## FPR/GPR LD/ST-PostIncrement-Update

These instructions, outlined in [[ls011]], save hugely in hot-loops.
Early ISAs such as the PDP-8 and PDP-11 - which inspired the iconic
Motorola 68000 and 88100 and Mitch Alsup's My 66000 - and even the
iconic ultra-RISC CDC 6600, all had both pre- and post-increment
Addressing Modes.

The reason is very simple: it is a direct recognition of the practice
in C of frequently utilising both `*p++` and `*++p`, which itself stems
from common need in Computer Science algorithms.
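
A trivial sketch of why this matters in hot loops: with post-increment
addressing the pointer update is folded into the load or store itself,
roughly halving the per-element instruction count. Python is used here
purely as executable pseudocode.

```
def memcpy_words(mem, dst, src, n):
    """Each iteration is conceptually two post-increment LD/ST operations
    instead of two LD/ST plus two separate address-update adds."""
    for _ in range(n):
        val = mem[src]; src += 8     # ld with post-increment
        mem[dst] = val; dst += 8     # st with post-increment
    return dst, src
```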

The problem for the Power ISA is - was - that the opcode space needed
to support both was far too great, and the decision was made to go with
pre-increment, on the basis that outside the loop a "pre-subtraction"
may be performed.

Whilst this is a "solution" it is less than ideal, and the opportunity
exists now with the EXT2xx Primary Opcodes to correct this and bring
the Power ISA up a level.

Where things begin to get more than a little hairy is if both
Post-Increment *and* Shifted are included. If SVP64 keeps one
single bit (/pi) dedicated in the `RM.Mode` field then this
problem goes away, at the cost of reducing SVP64's effectiveness.
However, again, given that even the Shifted-Post-Increment
instructions are all 9-bit XO it is not outside the realm of
possibility to include them in EXT2xx.

## Shift-and-add (and LD/ST Indexed-Shift)

The Shift-and-Add instructions are proposed in [[ls004]]. They mitigate the
need to add LD-ST-Shift instructions, which are a high-priority aspect of
both x86 and ARM. LD-ST-Shift is normally just the one instruction:
Shift-and-Add brings that down to two, where the Power ISA presently
requires three. Cryptography, e.g. twofish, also makes use of Integer
double-and-add, so the value of these instructions is not limited to
Effective Address computation. They will also have value in Audio DSP.
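
The instruction-count saving is easiest to see on a simple array access
such as `a[i]` with 8-byte elements: today's Power ISA needs a shift, an
add and the load (three instructions); Shift-and-Add folds the first two
into one; LD/ST-Indexed-Shifted folds all three. A hedged model of the
address computation follows; operand names are placeholders and the
"sm+1" mapping is an assumption based on the x2/x4/x8/x16 description
later in this document, so refer to [[ls004]] for the real definition.

```
MASK64 = (1 << 64) - 1

def shadd(ra, rb, sm):
    """RT = RA + (RB * 2**(sm+1)): the 2-bit sm field selects x2/x4/x8/x16."""
    return (ra + (rb << ((sm & 3) + 1))) & MASK64
```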

Being a 10-bit XO it would be somewhat punitive to place these in EXT2xx
when their whole purpose and value is to reduce binary size in Address
offset computation, thus they are best placed in EXT0xx.

The upside as far as adding them is concerned is that existing hardware
will already have amalgamated pipelines with very few actual back-end
(Micro-Coded) internal operations (likely just two: one load, one store).
Passing a 2-bit additional immediate field down to those pipelines really
is not hard.

*(Readers unfamiliar with Micro-coding should look at the Microwatt VHDL
source code.)*

Also included, because it is important to see the quantity of instructions,
is LD/ST-Indexed-Shifted. Across Update variants, Byte-reverse variants,
Arithmetic and FP, the total is a slightly-eye-watering **37**
instructions, only ameliorated by the fact that they are all 9-bit XO.
Even when adding the Post-Increment-Shifted group it is still only
52 9-bit XO instructions, which is not unreasonable to consider (in
EXT2xx).

\newpage{}

# Vectorisation: SVP64 and SVP64Single

To be submitted as part of [[ls001]], [[ls008]], [[ls009]] and [[ls010]],
with SVP64Single to follow in a subsequent RFC, SVP64 is conceptually
identical to the decades-old 8086 `REP` prefix and the Zilog
Z80 `CPIR` and `LDIR` instructions. Parallelism is best achieved
by exploiting a Multi-Issue Out-of-Order Micro-architecture. It is
extremely important to bear in mind that at no time does SVP64 add even
one single actual Vector instruction. It is a *pure* RISC-paradigm
Prefixing concept only.

This has some implications which need unpacking. Firstly: in the future,
the Prefixing may be applied to VSX. The only reason it was not included
in the initial proposal of SVP64 is that, due to the number of VSX
instructions, the Due Diligence required is obviously five times higher
than the 3+ years of work done so far on the SFFS Subset.

Secondly: **any** Scalar instruction involving registers **automatically**
becomes a candidate for Vector-Prefixing. This in turn means that when
a new instruction is proposed, it becomes a hard requirement to consider
not only the implications of its inclusion as a Scalar-only instruction,
but how it will best be utilised as a Vectorised instruction **as well**.
Extreme examples of this are the Big-Integer 3-in 2-out instructions
that use one 64-bit register effectively as a Carry-in and Carry-out. The
instructions were designed in a *Scalar* context to be inline-efficient
in hardware (use of Operand-Forwarding to reduce the chain down to 2-in
1-out), but in a *Vector* context it is extremely straightforward to
Micro-code an entire batch onto 128-bit SIMD pipelines, 256-bit SIMD
pipelines, and to perform a large internal Forward-Carry-Propagation on
for example the Vectorised-Multiply instruction.
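
A sketch of that Scalar-defined, Vector-chained idea (function names are
invented; see [[sv/biginteger]] and [[ls003]] for the actual instructions):
a 3-in 2-out 64-bit multiply-add whose carry register links element n to
element n+1, so that one Vector-Prefixed instruction multiplies an
arbitrary-length bignum by a 64-bit scalar.

```
MASK64 = (1 << 64) - 1

def mul_add_carry(a, b, cin):
    """3-in 2-out: returns (low 64 bits, high 64 bits as carry-out)."""
    p = a * b + cin
    return p & MASK64, p >> 64

def bignum_mul_scalar(limbs, scalar):
    """What a single Vector-Prefixed loop over mul_add_carry achieves."""
    carry, out = 0, []
    for limb in limbs:
        lo, carry = mul_add_carry(limb, scalar, carry)
        out.append(lo)
    return out, carry
```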

Thirdly: as far as Opcode Allocation is concerned, SVP64 needs to be
considered as an independent stand-alone instruction (just like `REP`).
In other words, the Suffix **never** gets decoded as a completely
different instruction just because of the Prefix. The cost of doing so
is simply too high in hardware.

--------

# Guidance for evaluation

Deciding which instructions go into an ISA is extremely complex, costly,
and a huge responsibility. In public standards mistakes are irrevocable,
and in the case of an ISA the Opcode Allocation is a finite resource,
meaning that mistakes punish future instructions as well. This section
therefore provides some Evaluation Guidance on the decision process,
particularly for people new to ISA development, given that this RFC is
circulated widely and publicly. Constructive feedback from experienced
ISA Architects is welcomed to improve this section.

**Does anyone want it?**

Sounds like an obvious question, but if there is no driving need (no
"Stakeholder") then why is the instruction being proposed? If it is
purely out of curiosity or part of a Research effort not intended for
production then it is probably best left in the EXT022 Sandbox.

**How many registers does it need?**

The basic RISC Paradigm is not only to make instruction encoding simple
(often "wasting" encoding space compared to highly-compacted ISAs such
as x86), but also to keep the number of registers used down to a minimum.

A counter-example is FMAC, which had to be added to IEEE754 because the
*internal* product requires more accuracy than can fit into a register
(it is well-known that FMUL followed by FADD performs an additional
rounding on the intermediate register which loses accuracy compared to
FMAC). Another would be a dot-product instruction, which again requires
an accumulator of at least double the width of the two vector inputs.
And in the AMDGPU ISA, there are Texture-mapping instructions taking up
to an astounding *twelve* input operands!
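
A two-line demonstration of the FMUL-then-FADD accuracy loss (plain Python
floats are IEEE754 FP64, so this is directly reproducible):

```
from fractions import Fraction

a, b, c = 1.0 + 2**-30, 1.0 - 2**-30, -1.0
separate = (a * b) + c                                      # product rounded first
fused = float(Fraction(a) * Fraction(b) + Fraction(c))      # single final rounding
print(separate, fused)   # 0.0 versus approximately -8.7e-19
```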

The downside of going too far, however, has to be traded off against the
next question. Both MIPS and RISC-V lack Condition Codes, which means
that emulating x86 Branch-Conditional requires *ten* MIPS instructions.

The downside of creating too complex instructions is that the Dependency
Hazard Management in high-performance multi-issue out-of-order
microarchitectures becomes infeasibly large, and even simple in-order
systems may have performance severely compromised by an overabundance
of stalls. Also worth remembering is that register file ports are
insanely costly, not just to design, but they also use considerable power.

That said, there do exist genuine reasons why more registers are better than
fewer: Compare-and-Swap has huge benefits but is costly to implement,
and DCT/FFT Twin-Butterfly instructions allow creation of in-place
in-register algorithms, reducing the number of registers needed and
thus saving power by making the *overall* algorithm more efficient,
as opposed to micro-focussing on a localised power increase.

**How many register files does it use?**

Complex instructions pulling in data from multiple register files can
create unnecessary issues surrounding Dependency Hazard Management in
Out-of-Order systems. As a general rule it is better to keep complex
instructions reading and writing to the same register file, relying
on much simpler (1-in 1-out) instructions to transfer data between
register files.

**Can other existing instructions (plural) do the same job?**

The general rule being: if two or more instructions can do the
same job, leave it out... *unless* the number of occurrences of
that instruction being missing is causing huge increases in binary
size. RISC-V has gone too far in this regard, as explained here:
<https://news.ycombinator.com/item?id=24459314>

Good examples are LD-ST-Indexed-shifted (multiply RB by 2, 4, 8 or 16),
which are high-priority instructions in x86 and ARM, but lacking in
the Power ISA, MIPS, and RISC-V. With many critical hot-loops in Computer
Science having to perform shift and add as explicit instructions,
adding LD/ST-shifted should be considered high priority, except that
the sheer *number* of such instructions needing to be added takes us
into the next question.

**How costly is the encoding?**

This can either be a single instruction that is costly (several operands
or a few long ones) or it could be a group of simpler ones that purely
due to their number increase overall encoding cost. An example of an
extremely costly instruction would be one with its own Primary Opcode:
addi is a good candidate. However the sheer overwhelming number of
times that instruction is used easily makes a case for its inclusion.

Mentioned above was Load-Store-Indexed-Shifted, which only needs 2
bits to specify how much to shift: x2, x4, x8 or x16. And they are all
a 10-bit XO Field, so not that costly for any one given instruction.
Unfortunately there are *around 30* Load-Store-Indexed Instructions in the
Power ISA, which means an extra *five* bits taken up of precious XO space.
Then let us not forget the two needed for the Shift amount. Now the group
as a whole effectively occupies the equivalent of a *three*-bit XO.

Is this a worthwhile tradeoff? Honestly it could well be. And that is
the decision process that the OpenPOWER ISA Working Group could use some
assistance on, to make the evaluation easier.

**How many gates does it need?**

`grevlut` comes in at an astonishing 20,000 gates, where for comparison
an FP64 Multiply typically takes between 12,000 and 15,000. Not counting
the cost in hardware terms is just asking for trouble.

If the number of gates gets too large it has an unintended side-effect:
power consumption goes up, and so does the distance between functions
on-chip. A good illustration here is the CDC 6600 and Cray Supercomputers,
where speed was limited by the size of the *room*. In other words larger
functions cause communication delays, and communication delays reduce
top speed.

**How long will it take to complete?**

In the case of divide or Transcendentals the algorithms needed are so
complex that simple implementations can often take an astounding 128
clock cycles to complete (Goldschmidt reduces that significantly).
Other instructions waiting for the results
will back up and eventually stall, where in-order systems pretty much
just stall straight away.

Less extreme examples include instructions that take only a few cycles
to complete, but if commonly used in tight loops with Conditional Branches, an
Out-of-Order system with Speculative capability may need significantly
more Reservation Stations to hold in-flight data for *all* instructions when
some take longer, so even a single clock cycle reduction
could become important.

A rule of thumb is that in Hardware, at 4.8 GHz, the budget for what is called
"gate propagation delay" is only around 16 to 19 gates chained one after
the other. Anything beyond that budget will need to be stored in DFFs
(Flip-flops) and another set of 16-19 gates continues on the next clock
cycle. Thus for example with `grevlut` above it is almost certainly the
case that high-performance high-clock-rate systems would need at least
two clock cycles (two pipeline stages) to produce a valid result.
This in turn brings us to the next question, as it is common to consider
subdividing complex instructions into smaller parts.

**Can one instruction do the job of many?**

Large numbers of disparate instructions adversely affect resource
utilisation in In-Order systems. However it is not always that simple:
every one of the Power ISA "add" and "subtract" instructions, as shown by
the Microwatt source code, may be micro-coded as one single instruction
where RA may optionally be inverted, the output likewise, and Carry-In set
to 1, 0 or XER.CA. From these options the *entire* suite of add/subtract
may be synthesised (subtraction inverts RA and adds an extra 1, producing
the 2s-complement of RA).
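
A sketch of that single micro-coded adder follows (Python as executable
pseudocode; compare the Microwatt VHDL, and note that output inversion is
omitted here for brevity):

```
MASK64 = (1 << 64) - 1

def adder(ra, rb, invert_ra=False, carry_in=0):
    """One ALU covers the whole add/subtract family: RA optionally
    inverted, carry-in forced to 0 or 1 or taken from XER.CA."""
    a = (~ra & MASK64) if invert_ra else ra
    s = a + rb + carry_in
    return s & MASK64, s >> 64           # result, carry-out

# subtract (subf): invert RA and add 1, i.e. add the 2s-complement of RA
def subf(ra, rb):
    return adder(ra, rb, invert_ra=True, carry_in=1)[0]
```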

`bmask` for example is to be proposed as a single instruction with
a 5-bit "Mode" operand, greatly simplifying some micro-architectural
implementations. Likewise the FP-INT conversion instructions are grouped
as a set of four, instead of over 30 separate instructions. Aside from
anything else, this strategy makes the ISA Working Group's evaluation task
easier, as well as reducing the work of writing a Compliance Test Suite.

In the case of the MIPS 3D ASE Extension, a Reciprocal-Square-Root
instruction was proposed that was split into two halves: 12-14 bit
accuracy completing in 7 cycles, and a "Carry On And Get Better Accuracy"
second instruction! With 3D only needing reduced accuracy
the saving in power consumption and time was definitely worthwhile,
and it neatly illustrates a counter-example to trying to make one
instruction do too much.

**Summary**

There are many tradeoffs here; it is a huge list of considerations. Any
others known about: please do submit feedback so they may be included
here. Then the evaluation process may take place: again, constructive
feedback on that, as to which instructions are a priority, is also appreciated.
The above helps explain the columns in the tables that follow.

\newpage{}

# Tables

The original tables are available publicly as a CSV file at
<https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/rfc/ls012/optable.csv;hb=HEAD>.
A Python program auto-generates the tables in the following sections by
sorting them into different useful priorities.

The key to headings and sections is as follows:

* **Area** - Target Area as described in the sections above
* **XO Cost** - the number of bits required in the XO Field. Whilst not
  the full picture it is a good indicator as to how costly in terms
  of Opcode Allocation a given instruction will be. A lower number is
  a higher cost for the Power ISA's precious remaining Opcode space.
  "PO" indicates that an entire Primary Opcode is required.
* **rfc** - the Libre-SOC External RFC resource,
  <https://libre-soc.org/openpower/sv/rfc/>, where advance notice of
  upcoming RFCs in development may be found.
  *Reading advance Draft RFCs and providing feedback is strongly advised*;
  it saves time and effort for the OPF ISA Workgroup.
* **SVP64** - Vectoriseable (SVP64-Prefixable) - also implies that
  SVP64Single is permitted (required).
* **page** - Libre-SOC wiki page at which further information can
  be found. Again: **advance reading is strongly advised due to the
  sheer volume of information**.
* **PO1** - the instruction is capable of being PO1-Prefixed
  (given an EXT1xx Opcode Allocation). Bear in mind that this option
  is **mutually incompatible** with Vectorisation.
* **group** - the Primary Opcode Group recommended for this instruction.
  Options are EXT0xx (EXT000-EXT063), EXT1xx and EXT2xx. A third area
  (UnVectoriseable), EXT3xx, was available in an early Draft RFC but has
  been made "RESERVED" instead. See [[sv/po9_encoding]].
* **Level** - Compliancy Subset and Simple-V Level. `SFFS` indicates "mandatory"
  in SFFS. All else is "optional"; however some instructions are further Subsetted
  within Simple-V: SV/Embedded, SV/DSP and SV/Supercomputing.
* **regs** - a guide to register usage, and to how costly Hazard Management
  will be, in hardware:

```
- 1R: reads one GPR/FPR/SPR/CR.
- 1W: writes one GPR/FPR/SPR/CR.
- 1r: reads one CR *Field* (not necessarily the entire CR)
- 1w: writes one CR *Field* (not necessarily the entire CR)
```

[[!inline pages="openpower/sv/rfc/ls012/areas.mdwn" raw=yes ]]
[[!inline pages="openpower/sv/rfc/ls012/xo_cost.mdwn" raw=yes ]]
[[!inline pages="openpower/sv/rfc/ls012/level.mdwn" raw=yes ]]

[[!tag opf_rfc]]