[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page to its primary purpose of outlining
the instruction format.

Table of contents:

[[!toc]]
# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features,
so that soft-emulation is possible,
even for future revisions of SVP64. With SVP64 being partly controlled
through contextual SPRs, a little care has to be taken.

**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap if read or written. This allows software the
opportunity to emulate the context created by the given SPR.

**Embedded Scalar Scenario**

In this scenario an implementation does not wish to implement
Vectorisation but simply wishes to take advantage of predication or
other features of SVP64, such as instructions that might only be
available if prefixed. Such an implementation would be entirely free
to do so with the proviso that:

* any attempt to execute `setvl` shall either raise an illegal
  instruction trap or be partially implemented to set SVSTATE correctly.
* if SVSTATE contains any value in any bit that is not supported
  in hardware, an illegal instruction trap shall be raised when an SVP64
  prefixed instruction is executed.
* if SVSTATE contains values requesting supported features at the time
  that the prefixed instruction is executed then it is executed in
  hardware as per specification, with no illegal exception trap raised.

Example, assuming that hardware implements scalar operations only,
and implements predication but not elwidth overrides:

    setvli r0, 4            # sets VL equal to 4
    sv.addi r5, r0, 1       # raises an 0x700 trap
    setvli r0, 1            # sets VL equal to 1
    sv.addi r5, r0, 1       # gets executed by hardware
    sv.addi/ew=8 r5, r0, 1  # raises an 0x700 trap
    sv.ori/sm=EQ r5, r0, 1  # executed by hardware

The first `sv.addi` raises an illegal instruction trap because
VL has been set to 4, and Vectorisation is not supported. Likewise,
elwidth overrides, if requested, always raise illegal instruction
traps.
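
A minimal sketch of the decode-time check that such a partial
implementation might perform (hypothetical helper names;
`SUPPORTED_SVSTATE_MASK` is an assumed constant marking the SVSTATE
bits actually implemented in hardware):

    # raise 0x700 if the SVP64 context requests unsupported features
    def check_svp64_context(svstate):
        if svstate & ~SUPPORTED_SVSTATE_MASK:
            raise_trap(0x700)  # soft-emulation takes over
        # otherwise execute the prefixed instruction in hardware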

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add
may be performed by setting VL=8, and a one-instruction
1024-bit add by setting VL=16, and so on.
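
As an illustrative sketch (register numbers chosen arbitrarily, in
the style of the example above), a one-instruction 512-bit add then
becomes:

    setvli r0, 8                 # sets VL equal to 8
    sv.adde r16.v, r24.v, r32.v  # 8 chained addes: a 512-bit Big Integer Add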

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).

Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
have because they are not only provided by SV: the SV alternatives may
be predicated as well, making them far better suited to use in function
calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.
## Major opcode map (v3.0B)

This table is taken from v3.0B.
Table 9: Primary Opcode Map (opcode bits 0:5)

```
    |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111
000 |        |       |  tdi  |  twi  | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
    |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.
EXT04 retains the scalar `madd*` operations but would have all PackedSIMD
(aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
    |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111
000 |        |       |       |       | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
    |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111
```

It is important to note that having a v3.0B Scalar opcode encoding
that is different from its SVP64 counterpart is highly undesirable:
the complexity in the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix, MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
by reading the markdown-formatted version of the Scalar pseudocode,
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3 registers)
and if less than or equal to three then that instruction could be given an
EXTRA3 designation. Four or more is given an EXTRA2 designation because
there are only 9 bits available.

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extsw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.

Fourthly, a packing format was decided: for 2R-1W, for example, an EXTRA3
indexing could be chosen in which RA is indexed 0 (EXTRA bits 0-2),
RB indexed 1 (EXTRA bits 3-5)
and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
(because it is possible to do, and perceived to be useful). Rc=1
co-results (CR0, CR1) are always given the same EXTRA index as their
main result (RT, FRT).

Fifthly, in an automated process the results of the analysis
were output in CSV Format for use in machine-readable form
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program, as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep: unlike Twin-Predication the two must be equal at
all times.
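
A minimal pseudocode sketch of the resulting schedule (zeroing and
elwidth overrides omitted; names follow the pseudocode used later in
this appendix):

    # single predication: one mask, srcstep == dststep at all times
    for i in range(VL):
        STATE.srcstep = STATE.dststep = i
        if predval & (1 << i):  # masked-out elements are skipped
            iregs[RT + i] = iregs[RA + i] + iregs[RB + i]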

# Twin Predication

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, as sketched below.
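
A simplified sketch of the Twin-Predication schedule for a 1-src
1-dest move (zeroing omitted; the mask names are illustrative),
showing srcstep and dststep advancing independently, each skipping
masked-out elements:

    # twin predication: separate source and destination masks
    srcstep, dststep = 0, 0
    while srcstep < VL and dststep < VL:
        while srcstep < VL and not (srcmask & (1 << srcstep)):
            srcstep += 1  # skip masked-out source elements
        while dststep < VL and not (dstmask & (1 << dststep)):
            dststep += 1  # skip masked-out dest elements
        if srcstep < VL and dststep < VL:
            iregs[RT + dststep] = iregs[RA + srcstep]
        srcstep += 1
        dststep += 1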

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` instruction (if put on steroids).

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.
## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate if the destination was marked
as a Scalar. Scalar Reduction by contrast *keeps issuing Vector Element
Operations* even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned by the programmer, by convention, to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10:

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware, as sketched below.
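
A sketch of the element schedule that `sv.add/rg r3.v, r4.v, r4.v`
issues (one plausible rendering of the above; note that because
r3+i+1 == r4+i, each element reads the result written by the
previous, higher-indexed, element):

    # reverse gear: elements issued in reverse Program Order
    for i in reversed(range(VL)):
        iregs[3 + i] = iregs[4 + i] + iregs[4 + i]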

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as for all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count;
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.
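
A simplified sketch of the CR-driven variant (one plausible reading,
for illustration only; predication, zeroing and elwidth are omitted,
and `cr_bit_test` / `compute_cr_field` are assumed helpers for the
BO-style CR bit selection):

    # data-dependent fail-first: truncate VL at the first failing test
    for i in range(VL):
        result = iregs[RA + i] + iregs[RB + i]
        crf = compute_cr_field(result)   # eq/gt/lt/so bits
        if cr_bit_test(crf) == inv:      # test fails:
            VL = i + 1 if VLi else i     # inclusive or exclusive truncation
            break
        if not RC1:                      # RC1=1 stores only CRs
            iregs[RT + i] = result
        CRs{offs + i} = crf              # offs: starting CR element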

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SV.STATE SPR). When VL is set
  to zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].

Pred-result mode may not be applied to CR ops.

Although CR operations (mtcr, crand, cror) may be Vectorised
and predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which it makes sense to add an RC1 option.
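
A sketch of the resulting semantics for an Rc=1 operation (an
interpretation for illustration only; helper names assumed, zeroing
omitted): the element result is computed, but write-back is suppressed
when the CR test fails, exactly as if the predicate bit had been zero:

    # predicate-result: CR test ANDed into the effective predicate
    for i in range(VL):
        if predval & (1 << i):
            result = iregs[RA + i] + iregs[RB + i]
            crf = compute_cr_field(result)
            if cr_bit_test(crf) != inv:  # test passes:
                iregs[RT + i] = result   # write-back permitted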

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`,
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3:4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.
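
A short worked example (values chosen purely for illustration),
following the algorithm above: assume EXTRA3 mode with spec=0b101
(vector, spec[1:2]=0b01) and BA=0b01011:

    CR_index = 7 - (0b01011 >> 2)     # = 5 (BE numbering)
    # spec[0] is set: vector mode, 0-124 in increments of 4
    CR_index = (5 << 4) | (0b01 << 2) # = 84
    bit_index = 3 - (0b01011 & 0b11)  # = 0
    # the operand therefore accesses bit 0 (BE) of CR{84}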

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations (see [[sv/cr_int_predication]]).

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]: some alternative schemes are described there.)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.
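
A sketch of the OR-combining case for a vec2 (one possible
interpretation, shown for illustration only; `offs` is the assumed
starting CR element):

    # Rc=1 with SUBVL=2: one CR per subvector, subvector tests combined
    for i in range(VL):
        x = op(iregs[RA+i].x, iregs[RB+i].x)
        y = op(iregs[RA+i].y, iregs[RB+i].y)
        CRs{offs+i}.eq = (x == 0) | (y == 0)  # OE selects OR behaviour
        CRs{offs+i}.gt = (x > 0)  | (y > 0)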

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "rex" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Proposed Parallel-reduction algorithm

**This algorithm contains a MV operation and may NOT be used. Removal
of the MV operation may be achieved by using index-redirection as was
achieved in DCT and FFT REMAP**

```
/// reference implementation of proposed SimpleV reduction semantics.
///
// reduction operation -- we still use this algorithm even
// if the reduction operation isn't associative or
// commutative.
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
/// XXX `pred` is a user-visible Vector Condition register XXXX
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
///
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX VIOLATION OF SVP64 DESIGN XXX
                XXX vec[i] = vec[other]; XXX
                XXX VIOLATION OF SVP64 DESIGN XXX
            pred[i] |= other_pred;
```

The first principle in SVP64 being violated is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues. The above pseudocode
conditionally changes not only the type of element operation issued
(a MV in some cases) but also the number of arguments (2 for a MV).
At the very least, for Vertical-First Mode this will result in
unanticipated and unexpected behaviour (maximising "surprises" for
programmers) in the middle of loops, which will be far too hard to explain.

The second principle being violated by the above algorithm is the expectation
that temporary storage is available for a modified predicate: there is no
such space, and predicates are read-only to reduce complexity at the
micro-architectural level.
SVP64 is founded on the principle that all operations are
"re-entrant" with respect to interrupts and exceptions: SVSTATE must
be saved and restored alongside PC and MSR, but nothing more. It is perfectly
fine to have context-switching back to the operation be somewhat slower,
through "reconstruction" of temporary internal state based on what SVSTATE
contains, but nothing more.

An alternative algorithm is therefore required that does not perform MVs,
and does not require additional state to be saved on context-switching.

```
def reduce( vl, vec, pred ):
    pred = copy(pred) # must not damage predicate
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi[j] = i
            j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
                vi[ir] = vi[other]     # index redirection, no MV
            pred[ir] |= other_pred     # reconstructed on context-switch
        step *= 2
```

In this version the need for an explicit MV is made unnecessary by instead
leaving elements *in situ*. The internal modifications to the predicate may,
due to the reduction being entirely deterministic, be "reconstructed"
on a context-switch. This may make some implementations slower.

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and imposed
on the programmer. Cray-style Vector ISAs on the other hand provide
convenient, compact and efficient encodings of abstract concepts.
It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert MV operations
if necessary or desired, to give the level of efficiency or performance
required.*

# Element-width overrides

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, always assuming a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, id, result)
            if (!RT.isvec) break
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128 bit result, which clearly
cannot be stored in a single 64 bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex, single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
were not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is **not** stored
starting from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

          0..31      32..63
    r0  unchanged  unchanged
    r1  RT0.lo     RT1.lo
    r2  RT2.lo     unchanged
    r3  unchanged  RT0.hi
    r4  RT1.hi     RT2.hi
    r5  unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, the HI half starts at the 5th element
offset (in 32-bit quantities) counting from r1.
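
A small helper illustrates the arithmetic (a sketch following the
pseudocode above; the function name is purely illustrative): element e
of the implicit second destination is simply element e+MAXVL of RT,
which the byte-addressable register-file model turns into a register
number and a sub-register slot:

    # locate element e of the second (implicit) destination
    def hi_half_location(RT, e, MAXVL, destwid):
        idx = e + MAXVL          # element index, offset by MAXVL
        per_reg = 64 // destwid  # elements per 64-bit register
        return (RT + idx // per_reg, idx % per_reg)  # (register, slot)

With RT=1, MAXVL=5 and destwid=32 this gives (3, 1) for RT0.hi: the
upper half of r3, exactly as in the table above.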

*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]