[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** unimplemented SPRs, including those reserved for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given
SPR.

See [[sv/compliancy_levels]] for full details.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO)
cause Read-Write Hazards on single-bit global resources, having a
significant detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other
Subsets, but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be
implemented according to standard Power ISA Scalar behaviour.
Interestingly, due to SVP64 being in effect a hardware for-loop around
Scalar instructions executing in precise Program Order, a little
thought shows that a Vectorised Carry-In-Out add is in effect a Big
Integer Add, taking a single-bit Carry In and producing, at the end, a
single-bit Carry Out. High performance implementations may exploit
this observation to deploy efficient Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add
may be performed by setting VL=8, and a one-instruction
1024-bit Add by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
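
The carry-chaining observation can be illustrated with a short Python
model (a sketch only: the function name `sv_adde` is invented here, and
each list entry stands in for a 64-bit GPR element):

```python
# Model of a Vectorised adde: elements execute in Program Order, so the
# carry chains from one 64-bit limb to the next, forming a VL*64-bit add.
MASK64 = (1 << 64) - 1

def sv_adde(ra, rb, ca, vl):
    """Return (rt_vector, carry_out) for an element-wise adde chain."""
    rt = []
    for i in range(vl):
        s = ra[i] + rb[i] + ca
        rt.append(s & MASK64)   # low 64 bits become the result element
        ca = s >> 64            # carry-out feeds the next element
    return rt, ca

# VL=4: a 256-bit add, with CA ending up holding the 257th bit
a = [MASK64] * 4                # the 256-bit value 2**256 - 1
b = [1, 0, 0, 0]                # the 256-bit value 1
result, ca = sv_adde(a, b, 0, 4)
assert result == [0, 0, 0, 0] and ca == 1   # (2**256 - 1) + 1 = 2**256
```

Setting `vl=8` in the same model performs the 512-bit add described
above, and so on.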

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).

Likewise, Load/Store Multiple makes no sense to retain: not only is
the equivalent functionality provided by SV, the SV alternatives may
be predicated as well, making them far better suited to use in
function calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their
discussion and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B,
Table 9: Primary Opcode Map (opcode bits 0:5).

```
     |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111  |
 000 |        |       |  tdi  |  twi  | EXT04 |        |       | mulli | 000
 001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
 011 |  ori   | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
 100 |  lwz   | lwzu  |  lbz  | lbzu  |  stw  | stwu   |  stb  | stbu  | 100
 101 |  lhz   | lhzu  |  lha  | lhau  |  sth  | sthu   |  lmw  | stmw  | 101
 110 |  lfs   | lfsu  |  lfd  | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
 111 |  lq    | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
     |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111  |
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those
that make no sense in a Vectorisation Context have been removed. These
removed POs can, *in the SV Vector Context only*, be assigned to
alternative (Vectorised-only) instructions, including future
extensions. EXT04 retains the scalar `madd*` operations but would have
all PackedSIMD (aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
     |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111  |
 000 |        |       |       |       | EXT04 |        |       | mulli | 000
 001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
 010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
 011 |  ori   | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
 100 |  lwz   | lwzu  |  lbz  | lbzu  |  stw  | stwu   |  stb  | stbu  | 100
 101 |  lhz   | lhzu  |  lha  | lhau  |  sth  | sthu   |       |       | 101
 110 |  lfs   | lfsu  |  lfd  | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
 111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
     |  000   |  001  |  010  |  011  |  100  |  101   |  110  |  111  |
```

It is important to note that having a v3.0B Scalar opcode whose
meaning differs from its SVP64 counterpart is highly undesirable: the
complexity in the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64
bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix: MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field
Mapping. This process was semi-automated and is described in this
section. The final results, which are part of the SVP64 Specification,
are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3
registers) and if less than or equal to three then that instruction
could be given an EXTRA3 designation. Four or more is given an EXTRA2
designation because there are only 9 bits available.
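
The designation rule just described can be sketched in a few lines
(a hypothetical helper for illustration, not code taken from
sv_analysis.py):

```python
# Choose the EXTRA designation from a register profile, per the rule
# above: 9 EXTRA bits give 3 registers at 3 bits each (EXTRA3), or
# fall back to 2 bits per register (EXTRA2) when more registers qualify.
def extra_designation(reads, writes):
    total = reads + writes
    return "EXTRA3" if total <= 3 else "EXTRA2"

assert extra_designation(2, 1) == "EXTRA3"  # add RT, RA, RB (RM-2R-1W)
assert extra_designation(3, 1) == "EXTRA2"  # e.g. a 3R-1W multiply-add
```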

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for
use in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can
be used.

Fourthly, a packing format was decided: for 2R-1W, for example, an
EXTRA3 indexing could have been chosen such that RA would be indexed 0
(EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5) and RT indexed 2
(EXTRA bits 6-8). In some cases (LD/ST with update) RA-as-a-source is
given a **different** EXTRA index from RA-as-a-result (because it is
possible to do, and perceived to be useful). Rc=1 co-results (CR0,
CR1) are always given the same EXTRA index as their main result (RT,
FRT).

Fifthly, in an automated process the results of the analysis
were output in CSV Format for use in machine-readable form
by sv_analysis.py
<https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed. Those
qualifying future Power ISA Scalar instructions for SVP64 are
**strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which change the
Designation are **prohibited** once finalised (ratified through the
Power ISA WG Process). It would be similar to deciding that `add`
should be changed from X-Form to D-Form.

# Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This
indicates that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz (both
are zero, so both skip masked-out elements identically).
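
The example schedules above can be reproduced with a short Python model
(illustrative only: zeroing with sz=1 or dz=1 additionally writes zeros
into masked-out elements, which is omitted here; only the stepping
behaviour is shown):

```python
# Model of srcstep/dststep single-predication stepping: a step skips
# masked-out elements only when its zeroing bit (sz/dz) is clear.
def stepping(vl, mask, sz, dz):
    pairs = []
    src = dst = 0
    while src < vl and dst < vl:
        if not sz:                                  # sz=0: src skips
            while src < vl and not (mask >> src) & 1:
                src += 1
        if not dz:                                  # dz=0: dst skips
            while dst < vl and not (mask >> dst) & 1:
                dst += 1
        if src >= vl or dst >= vl:
            break
        pairs.append((src, dst))
        src += 1
        dst += 1
    return pairs

# sz=0, dz=0: both src and dst skip masked-out element 1
assert stepping(4, 0b1101, 0, 0) == [(0, 0), (2, 2), (3, 3)]
```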

# Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction.

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` instruction (if put on steroids).
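
The back-to-back VCOMPRESS-VEXPAND behaviour described above can be
sketched with a twin-predicated element-copy model (illustrative
Python, not the normative pseudocode; zeroing is not modelled):

```python
# Twin-predicated mv model: the source predicate selects which elements
# are read (in sequential order), the destination predicate selects the
# recipient slots (also in sequential order).
def twin_pred_mv(src, smask, dmask, vl, dst):
    s = d = 0
    while s < vl and d < vl:
        while s < vl and not (smask >> s) & 1:   # skip unselected sources
            s += 1
        while d < vl and not (dmask >> d) & 1:   # skip unselected dests
            d += 1
        if s < vl and d < vl:
            dst[d] = src[s]
            s += 1
            d += 1
    return dst

# gather elements 0 and 2 of src, scatter them into dest slots 1 and 3:
# a compress followed by an expand, in a single operation
out = twin_pred_mv([10, 11, 12, 13], 0b0101, 0b1010, 4, [0, 0, 0, 0])
assert out == [0, 10, 0, 12]
```

The same function with `smask` selecting one element and `dmask` set to
`0b0001` behaves as VEXTRACT; with a scalar-like source it behaves as
VSPLAT-style insertion.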

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A
normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an
additional scalar argument containing the initial reduction value, and
the default is either 0 or 1 depending on the specifics of the
explicit opcode. SVP64 fundamentally has to utilise *existing* Scalar
Power ISA v3.0B operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the
reduction, but only if in doing so they preserve Program Order at the
Element Level. Opportunities where this is possible include an `OR`
operation or a MIN/MAX operation: there it may be possible to
parallelise the reduction, but for Floating Point it is not permitted,
due to different results being obtained if the reduction is not
executed in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in
SVP64 as a simple and natural relaxation of the usual restriction on
Vector Looping, which would otherwise terminate if the destination was
marked as a Scalar. Scalar Reduction by contrast *keeps issuing Vector
Element Operations* even though the destination register is marked as
scalar. Thus it is up to the programmer to be aware of this, observe
some conventions, and thus end up achieving the desired outcome of
scalar reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or
overflow, one register must be assigned, by convention, by the
programmer to be the "accumulator". Scalar reduction is thus
characterised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly non-sensical operations is too
great. Therefore it is permitted and required to be executed
successfully. Implementors **MAY** choose to optimise such
instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such is
trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise,
Scalar reduction when the destination is a Vector behaves as if the
Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given
that there is no actual prohibition on Reduce Mode being applied when
the destination is a Vector, the "Reverse Gear" bit turns out to be a
way to apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite
end of the Vector and push a cumulative series of overlapping add
operations into the Execution units of the underlying hardware.
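
As a sketch of why `RG` matters for floating-point (illustrative
Python, not spec pseudocode; `sv_mapreduce` is an invented name):

```python
import operator

# Scalar mapreduce model: the accumulator is repeatedly overwritten in
# element order; reverse=True models the RG (reverse gear) bit.
def sv_mapreduce(acc, vec, op, reverse=False):
    order = reversed(range(len(vec))) if reverse else range(len(vec))
    for i in order:
        acc = op(acc, vec[i])
    return acc

# integer addition is associative: order cannot change the result
assert sv_mapreduce(0, [1, 2, 3, 4], operator.add) == 10

# floating-point addition is not: forward and reverse-gear orders
# can round differently, hence the strict Program Order requirement
fwd = sv_mapreduce(0.0, [1e16, 1.0, 1.0], operator.add)
rev = sv_mapreduce(0.0, [1e16, 1.0, 1.0], operator.add, reverse=True)
assert fwd != rev
```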

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values
to be inserted, has multiple applications including Video,
cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is
perfectly well permitted: the programmer may still declare one
register, used as both a Vector source and Scalar destination, to be
utilised as the "accumulator". In the case of `sv.fmadds` and
`sv.maddhw` etc this naturally fits well with the normal expected
usage of these operations.

If an interrupt or exception occurs in the middle of the scalar
mapreduce, the scalar destination register **MUST** be updated with
the current (intermediate) result, because this is how `Program Order`
is preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to
the underlying micro-architecture. Like Scalar reduction, the "Scalar
Base" (Power ISA v3.0B) operation is leveraged, unmodified, to give
the *appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.
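
One possible tree-reduction schedule, expressed as element-level scalar
operations, can be sketched as follows (illustrative only: the
normative SVP64 schedule is defined by the specification, not by this
sketch):

```python
import operator

# Deterministic binary-tree reduction over the element vector: each
# assignment is an ordinary scalar op on two elements, and the final
# result lands in element 0.
def tree_reduce(regs, vl, op):
    step = 1
    while step < vl:
        for i in range(0, vl, step * 2):
            if i + step < vl:
                regs[i] = op(regs[i], regs[i + step])
        step *= 2
    return regs

v = [1, 2, 3, 4, 5, 6, 7, 8]
tree_reduce(v, 8, operator.add)
assert v[0] == 36        # final (reduced) result in the first element
assert v[4] == 26        # intermediate partial results remain visible
```

Because the schedule is fixed, the intermediate partial results left in
the other elements are themselves deterministic, as the text above
notes.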

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced,
result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to
be executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to
  the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count;
however it requires speculative execution involving modifications of
VL to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of
implementations of strncpy, to include the terminating zero.
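
The VL-truncation behaviour, including the VLi option, can be modelled
in a few lines (an illustrative sketch; `ffirst_vl` and `nonzero` are
invented names for this example):

```python
# CR-driven fail-on-first model: elements execute in Program Order and
# the first failing test truncates VL. With VLi set, the failing
# element is included in the new VL.
def ffirst_vl(values, test, vl, vli=False):
    for i in range(vl):
        if not test(values[i]):
            return i + 1 if vli else i     # truncated VL
    return vl                              # no element failed

data = [5, 3, 0, 7]
nonzero = lambda x: x != 0                 # the per-element CR bit-test
assert ffirst_vl(data, nonzero, 4) == 2            # exclude the zero
assert ffirst_vl(data, nonzero, 4, vli=True) == 3  # strncpy-style: include it
assert ffirst_vl([0, 1], nonzero, 2) == 0          # VL may become zero
```

The final assertion illustrates the point made below: unlike LD/ST
ffirst, the CR-driven variant may legitimately set VL to zero.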

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL
  equal to zero. This is the only means in the entirety of SV that VL
  may be set to zero (with the exception of via the SV.STATE SPR).
  When VL is set to zero due to the first element failing the CR
  bit-test, all subsequent vectorised operations are effectively
  `nops`, which is *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent fail-first on the other hand MUST NOT truncate
VL arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails. This is because it
is a precise test on which algorithms will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based
instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again,
however, the access pattern needs to be understandable in relation to
v3.0B / v3.1B numbering, with a clear linear relationship and mapping
existing when SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4
months of analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV
EXTRA2/3 applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not
change.

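As a cross-check, the scalar/vector `CR_index` computation above can be
expressed as a short executable sketch (Python; the packed 3-bit `spec`
argument and the function name are illustrative conveniences, not part
of the specification):

```python
# Sketch of the SV CR_index computation. spec is modelled as a 3-bit
# integer: bit 2 is spec[0] (scalar/vector), bits 1-0 are spec[1:2].
def sv_cr_index(BA, spec):
    CR_index = 7 - (BA >> 2)       # top 3 bits of BA, but BE
    if spec & 0b100:               # spec[0] set: vector mode
        # vector mode: 0-124 in increments of 4
        return (CR_index << 4) | ((spec & 0b11) << 2)
    # scalar mode: 0-31 in increments of 1
    return ((spec & 0b11) << 3) | CR_index

# BA=0b00000 names CR7 (BE numbering); an all-zero spec leaves it as CR7
print(sv_cr_index(0b00000, 0b000))  # scalar: 7
print(sv_cr_index(0b00000, 0b100))  # vector: 112
```

Note how in vector mode the spec bits land in bit positions 2-3, which
is what produces the increments of 4.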
Note: high-performance implementations may read/write Vectors of CRs
in batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to
greatly simplify internal design. If instructions are issued where CR
Vectors do not start on a 32-bit aligned boundary, performance may be
affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1, in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is Scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are
  not overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD
VSX, which has a single CR (CR6) for a given SIMD result, SV
Vectorised OpenPOWER v3.0B scalar operations produce a **tuple** of
element results: the result of the operation as one part of that
element *and a corresponding CR element*. Greatly simplified
pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

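A runnable version of this tuple-result behaviour (a sketch only: CR
fields are modelled as dictionaries, the register-file sizes are
arbitrary, and XER.SO is hard-wired to zero as described earlier):

```python
# Model of a Vectorised add with Rc=1: each element result also
# produces a CR field, starting at CR8 as described in the text.
def vector_add_rc1(iregs, CRs, RT, RA, RB, VL):
    for i in range(VL):
        result = iregs[RA + i] + iregs[RB + i]
        iregs[RT + i] = result
        CRs[8 + i] = {"lt": result < 0, "gt": result > 0,
                      "eq": result == 0, "so": False}

iregs = [0] * 128
CRs = [None] * 64
iregs[10:13] = [5, 0, -3]  # RA vector elements
iregs[20:23] = [1, 0, 3]   # RB vector elements
vector_add_rc1(iregs, CRs, RT=30, RA=10, RB=20, VL=3)
```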
If a "cumulated" CR-based analysis of results is desired (a la VSX
CR6) then a followup instruction must be performed, setting "reduce"
mode on the Vector of CRs, using cr ops (crand, crnor) to do so. This
provides far more flexibility in analysing vectors than standard
Vector ISAs, which are typically restricted to "were all results
nonzero" and "were some results nonzero". The application of mapreduce
to Vectorised cr operations allows far more sophisticated analysis,
particularly in conjunction with the new crweird operations: see
[[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4).
Only 1 bit of predicate is allocated per subvector; likewise only one
CR is allocated per subvector.

This leaves a conundrum as to how to apply CR computation per
subvector, when normally Rc=1 is exclusively applied to scalar
elements. A solution is to perform a bitwise OR or AND of the
subvector tests. Given that OE is ignored in SVP64, this field may
(when available) be used to select OR or AND behaviour.

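A sketch of what OR- and AND-combining of the subvector tests might
look like (hypothetical: the use of OE to select the behaviour is, as
stated above, only a possibility):

```python
from functools import reduce

# Combine the per-element Rc=1 tests of one subvector (SUBVL=2-4)
# into the single CR field allocated to it. use_and=True requires all
# elements to pass a test; use_and=False requires any element to pass.
def subvector_cr(results, use_and):
    tests = [{"lt": r < 0, "gt": r > 0, "eq": r == 0} for r in results]
    comb = (lambda a, b: a and b) if use_and else (lambda a, b: a or b)
    return {k: reduce(comb, (t[k] for t in tests))
            for k in ("lt", "gt", "eq")}
```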
### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field
#n, so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how scalar
OpenPOWER context-switches CRs: it is just that there are now more of
them.

The 64 SV CRs are arranged similarly to the way the 128 integer
registers are arranged. TODO: a python program that auto-generates a
CSV file which can be included in a table, which is in a new page (so
as not to overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO: generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of a normal-mode add operation: zeroing is not included
and elwidth overrides are not included. If there is no predicate, it
is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is
clear that each element of the Vector source should be added to the
Scalar source, each result placed into the Vector (or, if the
destination is a scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result
into all nonpredicated result elements. If a fixed destination scalar
was intended, then an all-Scalar operation should be used.

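These modes can be exercised with a small executable model of the loop
above (a sketch: registers are a plain Python list, `isvec` flags are
passed explicitly as (index, isvec) pairs rather than decoded from
EXTRA, and zeroing is omitted):

```python
# Simplified executable model of the single-predicated add loop.
# rd, rs1, rs2 are (regnum, isvec) pairs; predval is a bitmask.
def op_add(iregs, rd, rs1, rs2, VL, predval):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            iregs[rd[0] + id] = iregs[rs1[0] + irs1] + iregs[rs2[0] + irs2]
            if not rd[1]:
                break        # scalar destination: stop after first write
        if rd[1]:  id += 1   # indices advance regardless of predication
        if rs1[1]: irs1 += 1
        if rs2[1]: irs2 += 1

iregs = [0] * 32
iregs[4:7] = [10, 20, 30]   # RA vector
iregs[8] = 5                # RB scalar
# RT.v = RA.v RB.s with all elements predicated in
op_add(iregs, rd=(12, True), rs1=(4, True), rs2=(8, False),
       VL=3, predval=0b111)
```

Running the all-scalar-source case (`rs1` and `rs2` both scalar) with a
vector destination demonstrates the "splat scalar result" behaviour.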
See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "REX" prefix.

For actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in
  Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 is set, sub-vector mapreduce is
    enabled

# Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
abstraction of hardware-looping, sitting in between the issue and
execute phases, that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/preduce.py;hb=HEAD)

```
from copy import copy

def preducei(vl, vec, pred):
    vec = copy(vec)
    pred = copy(pred)  # must not damage predicate
    step = 1
    ix = list(range(vl))  # indices move rather than copy data
    print(" start", step, pred, vec)
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            ci = ix[i]
            oi = ix[other] if other < vl else None
            other_pred = other < vl and pred[oi]
            if pred[ci] and other_pred:
                vec[ci] += vec[oi]
            elif other_pred:
                ix[i] = oi  # leave data in-place, copy index instead
            pred[ci] |= other_pred
        print(" row", step, pred, vec, ix)
    return vec
```
951
952 This algorithm works by noting when data remains in-place rather than
953 being reduced, and referring to that alternative position on subsequent
954 layers of reduction. It is re-entrant. If however interrupted and
955 restored, some implementations may take longer to re-establish the
956 context.
957
958 Its application by default is that:
959
960 * RA, FRA or BFA is the first register as the first operand
961 (ci index offset in the above pseudocode)
962 * RB, FRB or BFB is the second (co index offset)
963 * RT (result) also uses ci **if RA==RT**
964
965 For more complex applications a REMAP Schedule must be used
966
*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is
minimised. The mistake which would be catastrophic for SVP64 to make
is to then limit the Reduction Sequence for all implementors based
solely and exclusively on what one specific internal microarchitecture
does. In SIMD ISAs the internal SIMD Architectural design is exposed
and imposed on the programmer. Cray-style Vector ISAs on the other
hand provide convenient, compact and efficient encodings of abstract
concepts. It is the Implementor's responsibility to produce a design
that complies with the above algorithm, utilising internal
Micro-coding and other techniques to transparently insert MV
operations if necessary or desired, to give the level of efficiency or
performance required.*

# Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and always assumes a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a
byte-addressable memory. Vectors - which become just a virtual naming
construct - effectively overlap.

It is extremely important for implementors to note that the only
circumstance where upper portions of an underlying 64-bit register are
zeroed out is when the destination is a scalar. The ideal register
file has byte-level write-enable lines, just like most SRAMs, in order
to avoid READ-MODIFY-WRITE.

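The union-based model above can be expressed as a runnable sketch in
Python, treating the whole integer register file as one little-endian
byte array (the names and the explicit `isvec` parameter are
illustrative, not part of the specification):

```python
import struct

# The 128 GPRs as one flat little-endian byte array: a register number
# is just a starting byte address (reg * 8) into this "memory".
GPR_BYTES = bytearray(128 * 8)

def set_polymorphed_reg(reg, bitwidth, offset, val, isvec=True):
    if not isvec:
        # scalar destination: first element only, upper bits zeroed
        GPR_BYTES[reg*8:reg*8+8] = struct.pack("<Q", val)
        return
    nbytes = bitwidth // 8
    addr = reg*8 + offset*nbytes
    GPR_BYTES[addr:addr+nbytes] = val.to_bytes(nbytes, "little")

def get_polymorphed_reg(reg, bitwidth, offset):
    nbytes = bitwidth // 8
    addr = reg*8 + offset*nbytes
    return int.from_bytes(GPR_BYTES[addr:addr+nbytes], "little")

# four 16-bit elements starting at r3 pack into one 64-bit register
for i, v in enumerate([0x1111, 0x2222, 0x3333, 0x4444]):
    set_polymorphed_reg(3, 16, i, v)
```

A fifth 16-bit element (offset 4) would land in the low half of r4,
which is exactly the overlap behaviour the text describes.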
An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird  += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128-bit result, which
clearly cannot be stored in a single 64-bit register. Some ISAs
recommend "macro op fusion": the practice of setting a convention
whereby if two commonly used instructions (mullo, mulhi) use the same
ALU but one selects the low part of an identical operation and the
other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not
compatible with SVP64 Horizontal-First, because Horizontal Mode may
only be applied to a single instruction at a time, and SVP64 is based
on the principle of strict Program Order even at the element level.
Thus it becomes necessary to add explicit, more complex single
instructions with more operands than would normally be seen in the
average RISC ISA (3-in, 2-out, in some cases). If it were not for
Power ISA already having LD/ST with update as well as Condition Codes
and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bear in mind that both RT and RT+MAXVL are starting points for
Vectors, and that element-width overrides still have to be taken
into consideration; the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird  += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the
elements RT0 to RT2 are stored:

          0..31      32..63
    r0    unchanged  unchanged
    r1    RT0.lo     RT1.lo
    r2    RT2.lo     unchanged
    r3    unchanged  RT0.hi
    r4    RT1.hi     RT2.hi
    r5    unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32-bit quantities) counting from r1.

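The placement in the table can be verified arithmetically (a sketch
using the example's values; the helper name is illustrative):

```python
# Where does destination element `elem` land, given elwidth-packed
# storage? Returns (register number, starting bit in that register).
def elem_location(RT, elwidth, elem):
    per_reg = 64 // elwidth       # elements per 64-bit register
    return RT + elem // per_reg, (elem % per_reg) * elwidth

RT, MAXVL, elwidth = 1, 5, 32
lo = [elem_location(RT, elwidth, e) for e in range(3)]          # RT0-2.lo
hi = [elem_location(RT, elwidth, e + MAXVL) for e in range(3)]  # RT0-2.hi
```

The `lo` halves occupy r1 and the low half of r2; the `hi` halves
begin at bit 32 of r3, matching the table.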
*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]
