[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page free to focus on its primary purpose:
outlining the instruction format.

Table of contents:

[[!toc]]

# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given
SPR.

See [[sv/compliancy_levels]] for full details.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add
may be performed by setting VL=8, and a one-instruction
1024-bit add by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
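The chained-carry behaviour can be modelled in a few lines of Python. This is an illustrative sketch of the element schedule only (the function name and limb layout are invented for the example), not a description of the hardware implementation:

```python
def sv_adde(a_limbs, b_limbs, ca_in=0):
    """Model of VL chained 64-bit adde operations: one big-integer add.

    Limbs are least-significant first; the returned ca is the
    (VL*64 + 1)th bit -- the "257th bit" when VL=4.
    """
    mask = (1 << 64) - 1
    out, ca = [], ca_in
    for a, b in zip(a_limbs, b_limbs):
        s = a + b + ca
        out.append(s & mask)   # 64-bit result element
        ca = s >> 64           # carry chains into the next element
    return out, ca

# 256-bit add with VL=4: (2**256 - 1) + 1 = 2**256
limbs, ca = sv_adde([0xFFFFFFFFFFFFFFFF] * 4, [1, 0, 0, 0])
# limbs are all zero and ca == 1
```

Setting VL=8 or VL=16 in this model gives the 512-bit and 1024-bit cases in exactly the same way.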

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).

Likewise, Load/Store Multiple make no sense to retain: not only is the
capability provided by SV, the SV alternatives may be predicated as
well, making them far better suited to use in function calls and
context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B.
Table 9: Primary Opcode Map (opcode bits 0:5)

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       | tdi   | twi   | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.
EXT04 retains the scalar `madd*` operations but would have all PackedSIMD
(aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       |       |       | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

It is important to note that having a v3.0B Scalar opcode
behave differently from the same opcode under SVP64 is highly undesirable:
the complexity in the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3 registers)
and if less than or equal to three then that instruction could be given an
EXTRA3 designation. Four or more is given an EXTRA2 designation because
there are only 9 bits available.

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.

Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
could be chosen such that RA is indexed 0 (EXTRA bits 0-2), RB indexed 1
(EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some cases
(LD/ST with update)
RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
(because it is possible to do, and perceived to be useful). Rc=1
co-results (CR0, CR1) are always given the same EXTRA index as their
main result (RT, FRT).
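As a sketch of one such EXTRA3 packing, for the 2R-1W Profile described above (RA index 0, RB index 1, RT index 2); the helper names are invented for illustration and are not part of the specification:

```python
def extra3_pack(ra, rb, rt):
    """Pack three 3-bit EXTRA3 fields into the 9-bit EXTRA field:
    RA at index 0 (bits 0-2), RB at index 1 (bits 3-5),
    RT at index 2 (bits 6-8)."""
    assert all(0 <= f <= 0b111 for f in (ra, rb, rt))
    return ra | (rb << 3) | (rt << 6)

def extra3_unpack(extra, index):
    """Recover the 3-bit EXTRA3 field for register index 0, 1 or 2."""
    return (extra >> (3 * index)) & 0b111
```

An EXTRA2 designation would pack four (or more) 2-bit fields into the same 9 bits in an analogous way.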

Fifthly, in an automated process the results of the analysis
were output in CSV Format for use in machine-readable form
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs: every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.
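The schedules above can be reproduced with a short model of the stepping rule (a predicate zero causes an index to be skipped only when the corresponding zeroing bit is clear). This is an illustrative sketch, not normative pseudocode:

```python
def steps(VL, mask, sz, dz):
    """Return the (srcstep, dststep) schedule for one predicate mask."""
    sched, src, dst = [], 0, 0
    while src < VL and dst < VL:
        # with zeroing disabled (z=0), masked-out indices are skipped
        while src < VL and not sz and not (mask >> src) & 1:
            src += 1
        while dst < VL and not dz and not (mask >> dst) & 1:
            dst += 1
        if src == VL or dst == VL:
            break                     # loop ends: an index reached VL
        sched.append((src, dst))
        src, dst = src + 1, dst + 1
    return sched
```

For instance `steps(4, 0b1101, 0, 0)` gives `[(0, 0), (2, 2), (3, 3)]`: with sz=dz=0 both indices skip mask[1] and stay in lockstep.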

# Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction.

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (on steroids).
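As an illustrative sketch (the function and the flat register-file layout are invented for the example), a twin-predicated element-mv models the compress/expand patterns listed above:

```python
def twin_pred_mv(regs, RT, RA, VL, srcmask, dstmask):
    """Twin-predicated mv sketch: the source predicate compresses a
    subset of elements; the destination predicate expands them into
    the chosen recipient slots (back-to-back VCOMPRESS-VEXPAND)."""
    src, dst = 0, 0
    while src < VL and dst < VL:
        while src < VL and not (srcmask >> src) & 1:
            src += 1                  # skip masked-out source elements
        while dst < VL and not (dstmask >> dst) & 1:
            dst += 1                  # skip masked-out dest elements
        if src == VL or dst == VL:
            break
        regs[RT + dst] = regs[RA + src]
        src, dst = src + 1, dst + 1

regs = list(range(8)) + [0] * 8
# VCOMPRESS: gather source elements 1 and 3 into dest elements 0 and 1
twin_pred_mv(regs, 8, 0, 4, 0b1010, 0b1111)
```

Swapping the two masks gives VEXPAND; an all-ones source mask with a single-bit destination mask gives VINSERT, and so on.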

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.

Note that the Schedule only makes sense on top of certain instructions:
X-Form with a Register Profile of `RT,RA,RB` is fine. Like Scalar
Reduction, nothing is prohibited:
the results of execution on an unsuitable instruction may simply
not make sense. Many 3-input instructions (madd, fmadd), unlike in
Scalar Reduction, do not make sense, but `ternlogi`, if used
with care, would.
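A deterministic tree-reduction schedule of the kind described can be sketched as follows (illustrative only: the real Schedule must also account for predicate masks, which are omitted here):

```python
def tree_reduce_steps(vl):
    """Return pairwise (dest, src) element pairs for a tree reduction.
    Each step performs regs[dest] = op(regs[dest], regs[src]); the
    final result lands in element 0, the first destination element."""
    steps, stride = [], 1
    while stride < vl:
        for i in range(0, vl, stride * 2):
            if i + stride < vl:
                steps.append((i, i + stride))
        stride *= 2
    return steps
```

Because the pair list is a pure function of VL, an interrupted reduction can resume at any step, which is what makes precise exceptions possible.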

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.

Example for a vec2:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RA+i].y)

Example for a vec3:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RA+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RA+i].z)

Example for a vec4:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RA+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RA+i].z)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RA+i].w)

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

Note that the destination (RT) is automatically used as an "Accumulator"
register, and consequently the Sub-Vector Loop is interruptible.
If RT is a Scalar then as usual the main VL Loop terminates at the
first predicated element (or the first element if unpredicated).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).
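The truncation rule can be sketched behaviourally as follows (illustrative only; `test` stands in for the selected CR bit compared against `inv`):

```python
def ffirst_vl(results, test, VLi=False):
    """Return the truncated VL: the count of leading elements whose
    test passes, including the first failing element when VLi is set."""
    for i, r in enumerate(results):
        if not test(r):
            return i + 1 if VLi else i
    return len(results)
```

Note that `ffirst_vl([0], lambda x: x != 0)` returns 0: CR-driven fail-on-first is permitted to truncate VL all the way to zero, unlike the LD/ST variant described below.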

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.
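The strncpy case, for instance, can be modelled with an inverted zero-test plus VLi (a behavioural sketch in Python, not SVP64 assembler):

```python
def strncpy_model(src, n):
    """Copy up to n bytes; fail-first with VLi includes the failing
    (zero) element in the new VL, so the terminating zero is copied."""
    out = []
    for i in range(n):
        out.append(src[i])      # element is copied...
        if src[i] == 0:         # ...and the zero byte itself is kept,
            break               # modelling the VLi-inclusive count
    return out
```

Without VLi the zero byte would be excluded, giving strlen-like behaviour instead.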

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised
and predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B /
v3.1B numbering, with a clear linear relationship and mapping existing
when SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`,
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months
of analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
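
As a cross-check, the decode above can be expressed as a short runnable
sketch. The representation of the CR as a plain list of eight 4-bit
fields (`crfields[n]` holding `CR{n}`) is a modelling assumption, not
architecture:

```python
def decode_cr_bit(crfields, BA):
    # crfields[n] holds the 4-bit contents of CR{n}
    CR_index = 7 - (BA >> 2)        # top 3 bits but BE
    bit_index = 3 - (BA & 0b11)     # low 2 bits but BE
    CR_reg = crfields[CR_index]     # get the CR
    # finally get the bit from the CR
    return (CR_reg & (1 << bit_index)) != 0
```

With `BA=0` this selects `CR{7}` bit 3, matching the BE numbering above.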

When it comes to applying SV, it is the CR\_reg number to which SV
EXTRA2/3 applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.
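
The modified CR\_index computation can likewise be sketched as runnable
Python. Passing `spec` as a 3-bit integer whose most-significant bit
corresponds to `spec[0]` above is an illustrative assumption:

```python
def sv_cr_index(BA, spec):
    # spec bit 2 (MSB) is spec[0]: vector/scalar select;
    # spec bits 0-1 are spec[1:2]: extension of the CR field number
    CR_index = 7 - (BA >> 2)                     # top 3 bits but BE
    s12 = spec & 0b11                            # spec[1:2]
    if spec & 0b100:
        # vector mode, 0-124 in increments of 4
        CR_index = (CR_index << 4) | (s12 << 2)
    else:
        # scalar mode, 0-31 in increments of 1
        CR_index = (s12 << 3) | CR_index
    return CR_index
```

For example `sv_cr_index(0, 0b100)` gives 112: the vector form of CR
field 7, stepped in increments of 4.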

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX,
which has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc
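
The tuple production can be made concrete as a minimal runnable sketch.
The flat `iregs` list and the dict-per-CR-Field representation are
modelling assumptions only:

```python
def sv_add_rc1(iregs, crs, RT, RA, RB, VL):
    # each element add produces both a result and a 4-bit CR Field,
    # the CR Vector starting at CR8
    for i in range(VL):
        result = iregs[RA + i] + iregs[RB + i]
        iregs[RT + i] = result
        crs[8 + i] = {'eq': result == 0,
                      'gt': result > 0,
                      'lt': result < 0}
```

For instance adding element pairs (5, 1) and (-5, 5) sets CR8.gt and
CR9.eq respectively.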

If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]: some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
one bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.
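
A hedged sketch of that suggestion follows. The helper name and the
choice of the EQ test are purely illustrative, as is returning plain
booleans rather than full 4-bit CR Fields:

```python
def subvec_cr_eq(results, subvl, use_and=False):
    # one CR test per subvector: OR (default) or AND of the
    # per-element EQ tests, as selected by the repurposed OE field
    crs = []
    for i in range(0, len(results), subvl):
        tests = [r == 0 for r in results[i:i + subvl]]
        crs.append(all(tests) if use_and else any(tests))
    return crs
```

With SUBVL=2 and results `[0, 1, 0, 2]`, OR mode reports True for both
subvectors (each contains a zero), AND mode reports False for both.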

### Table of CR fields

`CR{n}` is the notation used by the OpenPOWER spec to refer to CR field n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO: generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of a normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one source is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
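
The hardware loop above can be exercised as a plain Python model to
confirm the "splat scalar result" behaviour. The flat register list and
the explicit isvec flags are modelling assumptions:

```python
def op_add(iregs, rd, rs1, rs2, VL, predval, rd_vec, rs1_vec, rs2_vec):
    # simplified model of the hardware loop: per-operand stepping,
    # a scalar destination terminating at the first active element
    ird = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            iregs[rd + ird] = iregs[rs1 + irs1] + iregs[rs2 + irs2]
            if not rd_vec:
                break
        if rd_vec: ird += 1
        if rs1_vec: irs1 += 1
        if rs2_vec: irs2 += 1

# RT.v = RA.s RB.s: the same sum is splatted into every active element
regs = [0] * 8
regs[4], regs[5] = 3, 4
op_add(regs, 0, 4, 5, 3, 0b111, True, False, False)
# afterwards regs[0:3] is [7, 7, 7]
```

Flipping `rd_vec` to False instead writes only `regs[0]`, terminating at
the first active element: the all-Scalar case described above.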

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

This is similar to the x86 "REX" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 is set, sub-vector mapreduce is enabled

# Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
abstraction of hardware looping, sitting in between the issue and
execute phases, that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/preduce.py;hb=HEAD):

```
from copy import copy

def preducei(vl, vec, pred):
    vec = copy(vec)
    pred = copy(pred) # must not damage predicate
    step = 1
    ix = list(range(vl)) # indices move rather than copy data
    print("  start", step, pred, vec)
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            ci = ix[i]
            oi = ix[other] if other < vl else None
            other_pred = other < vl and pred[oi]
            if pred[ci] and other_pred:
                vec[ci] += vec[oi]
            elif other_pred:
                ix[i] = oi # leave data in-place, copy index instead
            pred[ci] |= other_pred
            print("  row", step, pred, vec, ix)
    return vec
```
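
A worked example (the routine repeated with the prints removed, plus
the `copy` import, so the fragment stands alone): with vl=4 and element
1 masked out, the unmasked elements 1, 3 and 4 accumulate into element 0
while the masked element is left untouched.

```python
from copy import copy

def preducei(vl, vec, pred):  # as above, prints removed
    vec = copy(vec)
    pred = copy(pred)  # must not damage predicate
    step = 1
    ix = list(range(vl))  # indices move rather than copy data
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            ci = ix[i]
            oi = ix[other] if other < vl else None
            other_pred = other < vl and pred[oi]
            if pred[ci] and other_pred:
                vec[ci] += vec[oi]
            elif other_pred:
                ix[i] = oi  # leave data in-place, copy index instead
            pred[ci] |= other_pred
    return vec

# element 1 masked out: 1 + 3 + 4 reduces into element 0
print(preducei(4, [1, 2, 3, 4], [1, 0, 1, 1]))  # → [8, 2, 7, 4]
```

Note that non-result elements (here elements 1 to 3) retain partial
sums or their original values: only element 0 holds the full reduction.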

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (the ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (the oi index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake that would be catastrophic for SVP64 to make is to limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract
concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

# Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only
circumstance where upper portions of an underlying 64-bit register are
zeroed out is when the destination is a scalar. The ideal register file
has byte-level write-enable lines, just like most SRAMs, in order to
avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element-width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
register specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply,
for example, actually internally produces a 128-bit result, which clearly
cannot be stored in a single 64-bit register. Some ISAs recommend
"macro-op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
were not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for
Vectors, and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both starting from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is not stored
starting from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and the dest elwidth is 32, then the
elements RT0 to RT2 are stored as follows:

         0..31     32..63
    r0  unchanged unchanged
    r1  RT0.lo    RT1.lo
    r2  RT2.lo    unchanged
    r3  unchanged RT0.hi
    r4  RT1.hi    RT2.hi
    r5  unchanged unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32-bit quantities) counting from r1.
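
The placements in the table can be double-checked arithmetically.
Little-endian byte addressing over the flat register file is an
assumption carried over from the element-width section:

```python
# RT=1, MAXVL=5, dest elwidth 32: the HI half of element i lives at
# 32-bit element index i+MAXVL, counting from RT
RT, MAXVL, width_bytes = 1, 5, 4
placements = []
for i in range(3):  # VL=3
    byte = RT * 8 + (i + MAXVL) * width_bytes
    reg, half = byte // 8, (byte % 8) // 4
    placements.append((reg, half))  # (register, 0=lo / 1=hi 32-bit half)
print(placements)  # → [(3, 1), (4, 0), (4, 1)]
```

That is: RT0.hi in the upper half of r3, RT1.hi and RT2.hi in r4,
exactly as tabulated above.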

*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]