1 [[!tag standards]]
2
3 # Appendix
4
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
6 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
8 * <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
9 * <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
10
This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.
14
15 Table of contents:
16
17 [[!toc]]
18
19 # Partial Implementations
20
21 It is perfectly legal to implement subsets of SVP64 as long as illegal
22 instruction traps are always raised on unimplemented features,
23 so that soft-emulation is possible,
24 even for future revisions of SVP64. With SVP64 being partly controlled
25 through contextual SPRs, a little care has to be taken.
26
**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap if read or written. This allows software the
opportunity to emulate the context created by the given SPR.
31
32 See [[sv/compliancy_levels]] for full details.
33
34 # XER, SO and other global flags
35
36 Vector systems are expected to be high performance. This is achieved
37 through parallelism, which requires that elements in the vector be
38 independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
39 Read-Write Hazards on single-bit global resources, having a significant
40 detrimental effect.
41
42 Consequently in SV, XER.SO behaviour is disregarded (including
43 in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
44 breaking the Read-Modify-Write Hazard Chain that complicates
45 microarchitectural implementations.
46 This includes when `scalar identity behaviour` occurs. If precise
47 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
48 instructions should be used without an SV Prefix.
49
50 TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf
51
52 Of note here is that XER.SO and OV may already be disregarded in the
53 Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
54 SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
55 but only for SVP64 Prefixed Operations.
56
57 XER.CA/CA32 on the other hand is expected and required to be implemented
58 according to standard Power ISA Scalar behaviour. Interestingly, due
59 to SVP64 being in effect a hardware for-loop around Scalar instructions
60 executing in precise Program Order, a little thought shows that a Vectorised
61 Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
62 and producing, at the end, a single bit Carry out. High performance
63 implementations may exploit this observation to deploy efficient
64 Parallel Carry Lookahead.
65
66 # assume VL=4, this results in 4 sequential ops (below)
67 sv.adde r0.v, r4.v, r8.v
68
69 # instructions that get executed in backend hardware:
70 adde r0, r4, r8 # takes carry-in, produces carry-out
71 adde r1, r5, r9 # takes carry from previous
72 ...
73 adde r3, r7, r11 # likewise
74
75 It can clearly be seen that the carry chains from one
76 64 bit add to the next, the end result being that a
77 256-bit "Big Integer Add" has been performed, and that
78 CA contains the 257th bit. A one-instruction 512-bit Add
79 may be performed by setting VL=8, and a one-instruction
80 1024-bit add by setting VL=16, and so on. More on
81 this in [[openpower/sv/biginteger]]
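
As a cross-check, the equivalence between a chained `sv.adde` and a single
wide addition can be modelled in a few lines of Python. This is an
illustrative sketch only (the helper name is invented; limb order is
least-significant first):

```
def sv_adde(a_limbs, b_limbs, ca=0):
    # model of sv.adde: a chain of 64-bit adde element operations
    result = []
    for a, b in zip(a_limbs, b_limbs):   # elements in Program Order
        s = a + b + ca
        result.append(s & (2**64 - 1))   # 64-bit result element
        ca = s >> 64                     # carry-out feeds the next element
    return result, ca

# VL=4: four 64-bit limbs == one 256-bit add; CA holds the 257th bit
limbs, ca = sv_adde([2**64 - 1] * 4, [1, 0, 0, 0])
assert limbs == [0, 0, 0, 0] and ca == 1
```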
82
83 # v3.0B/v3.1 relevant instructions
84
85 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
86 CPU ISA.
87
88 Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
89 the sole exceptions potentially being any operations with 128-bit
90 operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
91 Quad-precision Add).
92 SV effectively *replaces* the majority of VSX, requiring far less
93 instructions, and provides, at the very minimum, predication
94 (which VSX was designed without).
95
Likewise, Load/Store Multiple make no sense to have because not only
are they provided by SV, the SV alternatives may be predicated as well,
making them far better suited to use in function
calls and context-switching.
100
101 Additionally, some v3.0/1 instructions simply make no sense at all in a
102 Vector context: `rfid` falls into this category,
103 as well as `sc` and `scv`. Here there is simply no point
104 trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
105 should be called instead.
106
107 Fortuitously this leaves several Major Opcodes free for use by SV
108 to fit alternative future instructions. In a 3D context this means
109 Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
110 operations, and others critical to an efficient, effective 3D GPU and
111 VPU ISA. With such instructions being included as standard in other
112 commercially-successful GPU ISAs it is likewise critical that a 3D
113 GPU/VPU based on svp64 also have such instructions.
114
115 Note however that svp64 is stand-alone and is in no way
116 critically dependent on the existence or provision of 3D GPU or VPU
117 instructions. These should be considered extensions, and their discussion
118 and specification is out of scope for this document.
119
120 Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
121 v3.1B is *not* altered by svp64 in any way.
122
123 ## Major opcode map (v3.0B)
124
125 This table is taken from v3.0B.
126 Table 9: Primary Opcode Map (opcode bits 0:5)
127
128 ```
129 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
130 000 | | | tdi | twi | EXT04 | | | mulli | 000
131 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
132 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm | | rlwnm | 010
133 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
134 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
135 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw | 101
136 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
137 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 | 111
138 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
139 ```
140
141 ## Suitable for svp64-only
142
143 This is the same table containing v3.0B Primary Opcodes except those that
144 make no sense in a Vectorisation Context have been removed. These removed
145 POs can, *in the SV Vector Context only*, be assigned to alternative
146 (Vectorised-only) instructions, including future extensions.
147 EXT04 retains the scalar `madd*` operations but would have all PackedSIMD
148 (aka VSX) operations removed.
149
150 Note, again, to emphasise: outside of svp64 these opcodes **do not**
151 change. When not prefixed with svp64 these opcodes **specifically**
152 retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.
153
154 ```
155 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
156 000 | | | | | EXT04 | | | mulli | 000
157 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
158 010 | bc/l/a | | | EXT19 | rlwimi| rlwinm | | rlwnm | 010
159 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
160 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
161 101 | lhz | lhzu | lha | lhau | sth | sthu | | | 101
162 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
163 111 | | | EXT58 | EXT59 | | EXT61 | | EXT63 | 111
164 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
165 ```
166
It is important to note that having an SVP64 instruction whose opcode
differs from its v3.0B Scalar counterpart is highly undesirable: the
complexity in the decoder is greatly increased.
170
171 # EXTRA Field Mapping
172
173 The purpose of the 9-bit EXTRA field mapping is to mark individual
174 registers (RT, RA, BFA) as either scalar or vector, and to extend
175 their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
176 Three of the 9 bits may also be used up for a 2nd Predicate (Twin
177 Predication) leaving a mere 6 bits for qualifying registers. As can
178 be seen there is significant pressure on these (and in fact all) SVP64 bits.
179
180 In Power ISA v3.1 prefixing there are bits which describe and classify
181 the prefix in a fashion that is independent of the suffix. MLSS for
182 example. For SVP64 there is insufficient space to make the SVP64 Prefix
183 "self-describing", and consequently every single Scalar instruction
184 had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
185 This process was semi-automated and is described in this section.
186 The final results, which are part of the SVP64 Specification, are here:
187
188 * [[openpower/opcode_regs_deduped]]
189
190 Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
191 from reading the markdown formatted version of the Scalar pseudocode
192 which is machine-readable and found in [[openpower/isatables]]. The
193 analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
194 for example is given a designation `RM-2R-1W` because it requires
195 two GPR reads and one GPR write.
196
197 Secondly, the total number of registers was added up (2R-1W is 3 registers)
198 and if less than or equal to three then that instruction could be given an
199 EXTRA3 designation. Four or more is given an EXTRA2 designation because
200 there are only 9 bits available.
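
The rule is small enough to capture directly (an illustrative sketch;
the function name is invented):

```
def extra_designation(reads, writes):
    # 9 EXTRA bits: up to 3 registers at 3 bits each (EXTRA3),
    # otherwise 2 bits per register (EXTRA2)
    return "EXTRA3" if reads + writes <= 3 else "EXTRA2"

assert extra_designation(2, 1) == "EXTRA3"   # add RT, RA, RB (RM-2R-1W)
```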
201
202 Thirdly, the instruction was analysed to see if Twin or Single
203 Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`exts*` and LD/ST)
205 however it was found that some 2 or 3 operand instructions also
206 qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
207 in Twin Predication, some compromises were made, here. LDST is
208 Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
209
Fourthly, a packing format was decided: for 2R-1W, for example, an
EXTRA3 indexing could be assigned such that RA is indexed 0
(EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
214 RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
215 (because it is possible to do, and perceived to be useful). Rc=1
216 co-results (CR0, CR1) are always given the same EXTRA index as their
217 main result (RT, FRT).
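
As an illustration of such a packing (hypothetical helper names; the bit
assignments are the example ones above, not a normative layout):

```
def pack_extra3(ra_spec, rb_spec, rt_spec):
    # RA -> EXTRA bits 0-2, RB -> bits 3-5, RT -> bits 6-8
    return ((ra_spec & 0b111) |
            (rb_spec & 0b111) << 3 |
            (rt_spec & 0b111) << 6)

def unpack_extra3(extra):
    return extra & 0b111, (extra >> 3) & 0b111, (extra >> 6) & 0b111
```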
218
Fifthly, in an automated process the results of the analysis
were output in CSV Format for use in machine-readable form
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
222
This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.
233
234 # Single Predication <a name="1p"> </a>
235
This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.
237
238 In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep, but depending on whether sz and/or dz are set, srcstep and
239 dststep can still potentially become different indices. Only when sz=dz
240 is srcstep guaranteed to equal dststep at all times.
241
Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.
244
245 Example 1:
246
247 * VL=4
248 * mask=0b1101
* sz=1, dz=0
250
251 The following schedule for srcstep and dststep will occur:
252
253 | srcstep | dststep | comment |
254 | ---- | ----- | -------- |
255 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
257 | 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
258 | end | end | loop has ended because dst reached VL-1 |
259
260 Example 2:
261
262 * VL=4
263 * mask=0b1101
* sz=0, dz=1
265
266 The following schedule for srcstep and dststep will occur:
267
268 | srcstep | dststep | comment |
269 | ---- | ----- | -------- |
270 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
271 | 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
272 | 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
273 | end | end | loop has ended because src reached VL-1 |
274
275 In both these examples it is crucial to note that despite there being
276 a single predicate mask, with sz and dz being different, srcstep and
277 dststep are being requested to react differently.
278
279 Example 3:
280
281 * VL=4
282 * mask=0b1101
283 * sz=0, dz=0
284
285 The following schedule for srcstep and dststep will occur:
286
287 | srcstep | dststep | comment |
288 | ---- | ----- | -------- |
289 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
290 | 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
291 | 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
292 | end | end | loop has ended because src and dst reached VL-1 |
293
Here, both srcstep and dststep remain in lockstep because sz=dz (both zero).
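
The three schedules above can be reproduced with a short model
(an illustrative sketch; the function name is invented). The convention
is that a masked-out element is skipped unless its zeroing flag is set:

```
def stepping(VL, mask, sz, dz):
    srcstep = dststep = 0
    while srcstep < VL and dststep < VL:
        if not sz:  # no source zeroing: skip masked-out source elements
            while srcstep < VL and not (mask >> srcstep) & 1:
                srcstep += 1
        if not dz:  # no dest zeroing: skip masked-out dest elements
            while dststep < VL and not (mask >> dststep) & 1:
                dststep += 1
        if srcstep >= VL or dststep >= VL:
            break
        yield srcstep, dststep
        srcstep += 1
        dststep += 1

print(list(stepping(4, 0b1101, sz=1, dz=0)))  # [(0, 0), (1, 2), (2, 3)]
```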
295
296 # Twin Predication <a name="2p"> </a>
297
This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
302
303 * VSPLAT (a single scalar distributed across a vector)
304 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
305 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
306 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
307 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
308
309 Those patterns (and more) may be applied to:
310
311 * mv (the usual way that V\* ISA operations are created)
312 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
314 those that take RA as both a src and dest. These are not
315 1-src 1-dest, they are 2-src, 1-dest)
316 * LD and ST (treating AGEN as one source)
317 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
318 * Condition Register ops mfcr, mtcr and other similar
319
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
322
323 Additional unusual capabilities of Twin Predication include a back-to-back
324 version of VCOMPRESS-VEXPAND which is effectively the ability to do
325 sequentially ordered multiple VINSERTs. The source predicate selects a
326 sequentially ordered subset of elements to be inserted; the destination
327 predicate specifies the sequentially ordered recipient locations.
328 This is equivalent to
329 `llvm.masked.compressstore.*`
330 followed by
331 `llvm.masked.expandload.*`
332 with a single instruction.
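
A model of this back-to-back compress-expand behaviour (an illustrative
sketch with an invented name; predicates are bitmasks, no zeroing):

```
def twin_pred_mv(src, dst, smask, dmask):
    VL = len(src)
    s = d = 0
    while s < VL and d < VL:
        while s < VL and not (smask >> s) & 1:
            s += 1   # compress: skip masked-out source elements
        while d < VL and not (dmask >> d) & 1:
            d += 1   # expand: skip masked-out destination slots
        if s < VL and d < VL:
            dst[d] = src[s]
            s += 1
            d += 1
    return dst

print(twin_pred_mv([1, 2, 3, 4], [0, 0, 0, 0], 0b0110, 0b1001))
# -> [2, 0, 0, 3]
```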
333
334 This extreme power and flexibility comes down to the fact that SVP64
335 is not actually a Vector ISA: it is a loop-abstraction-concept that
336 is applied *in general* to Scalar operations, just like the x86
337 `REP` instruction (if put on steroids).
338
339 # Reduce modes
340
341 Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
342 Vector ISA would have explicit Reduce opcodes with defined characteristics
343 per operation: in SX Aurora there is even an additional scalar argument
344 containing the initial reduction value, and the default is either 0
345 or 1 depending on the specifics of the explicit opcode.
346 SVP64 fundamentally has to
347 utilise *existing* Scalar Power ISA v3.0B operations, which presents some
348 unique challenges.
349
350 The solution turns out to be to simply define reduction as permitting
351 deterministic element-based schedules to be issued using the base Scalar
352 operations, and to rely on the underlying microarchitecture to resolve
353 Register Hazards at the element level. This goes back to
354 the fundamental principle that SV is nothing more than a Sub-Program-Counter
355 sitting between Decode and Issue phases.
356
357 Microarchitectures *may* take opportunities to parallelise the reduction
358 but only if in doing so they preserve Program Order at the Element Level.
359 Opportunities where this is possible include an `OR` operation
360 or a MIN/MAX operation: it may be possible to parallelise the reduction,
361 but for Floating Point it is not permitted due to different results
362 being obtained if the reduction is not executed in strict Program-Sequential
363 Order.
364
365 In essence it becomes the programmer's responsibility to leverage the
366 pre-determined schedules to desired effect.
367
368 ## Scalar result reduction and iteration
369
Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate if the destination was marked
as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.
378
379 It is also important to appreciate that there is no
380 actual imposition or restriction on how this mode is utilised: there
381 will therefore be several valuable uses (including Vector Iteration
382 and "Reverse-Gear")
383 and it is up to the programmer to make best use of the
384 (strictly deterministic) capability
385 provided.
386
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:
390
391 * One of the sources is a Vector
392 * the destination is a scalar
393 * optionally but most usefully when one source scalar register is
394 also the scalar destination (which may be informally termed
395 the "accumulator")
396 * That the source register type is the same as the destination register
397 type identified as the "accumulator". Scalar reduction on `cmp`,
398 `setb` or `isel` makes no sense for example because of the mixture
399 between CRs and GPRs.
400
401 *Note that issuing instructions in Scalar reduce mode such as `setb`
402 are neither `UNDEFINED` nor prohibited, despite them not making much
403 sense at first glance.
404 Scalar reduce is strictly defined behaviour, and the cost in
405 hardware terms of prohibition of seemingly non-sensical operations is too great.
406 Therefore it is permitted and required to be executed successfully.
407 Implementors **MAY** choose to optimise such instructions in instances
408 where their use results in "extraneous execution", i.e. where it is clear
409 that the sequence of operations, comprising multiple overwrites to
410 a scalar destination **without** cumulative, iterative, or reductive
411 behaviour (no "accumulator"), may discard all but the last element
412 operation. Identification
413 of such is trivial to do for `setb` and `cmp`: the source register type is
414 a completely different register file from the destination.
415 Likewise Scalar reduction when the destination is a Vector
416 is as if the Reduction Mode was not requested.*
417
418 Typical applications include simple operations such as `ADD r3, r10.v,
419 r3` where, clearly, r3 is being used to accumulate the addition of all
420 elements of the vector starting at r10.
421
    # sv.add RT, RA, RB with RT == RB (the scalar accumulator)
    for i in range(VL):
        iregs[RT] += iregs[RA+i]  # RT == RB == r3, RA == r10
425
426 However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
427 SV ordinarily
428 **terminates** at the first scalar operation. Only by marking the
429 operation as "mapreduce" will it continue to issue multiple sub-looped
430 (element) instructions in `Program Order`.
431
To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.
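
The effect, as a sketch (invented name; the hardware issues these as
individual element operations in strict, or reversed, Program Order):

```
def scalar_reduce_add(acc, vec, reverse_gear=False):
    indices = range(len(vec))
    if reverse_gear:
        indices = reversed(indices)   # RG: walk the Vector from the far end
    for i in indices:
        acc = acc + vec[i]   # each element op overwrites the scalar dest
    return acc
```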
440
441 Other examples include shift-mask operations where a Vector of inserts
442 into a single destination register is required (see [[sv/bitmanip]], bmset),
443 as a way to construct
444 a value quickly from multiple arbitrary bit-ranges and bit-offsets.
445 Using the same register as both the source and destination, with Vectors
446 of different offsets masks and values to be inserted has multiple
447 applications including Video, cryptography and JIT compilation.
448
449 # assume VL=4:
450 # * Vector of shift-offsets contained in RC (r12.v)
451 # * Vector of masks contained in RB (r8.v)
452 # * Vector of values to be masked-in in RA (r4.v)
453 # * Scalar destination RT (r0) to receive all mask-offset values
454 sv.bmset/mr r0, r4.v, r8.v, r12.v
455
456 Due to the Deterministic Scheduling,
457 Subtract and Divide are still permitted to be executed in this mode,
458 although from an algorithmic perspective it is strongly discouraged.
459 It would be better to use addition followed by one final subtract,
460 or in the case of divide, to get better accuracy, to perform a multiply
461 cascade followed by a final divide.
462
463 Note that single-operand or three-operand scalar-dest reduce is perfectly
464 well permitted: the programmer may still declare one register, used as
465 both a Vector source and Scalar destination, to be utilised as
466 the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
467 this naturally fits well with the normal expected usage of these
468 operations.
469
If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.
477
478 Note that hardware is perfectly permitted to perform multi-issue
479 parallel optimisation of the scalar reduce operation: it's just that
480 as far as the user is concerned, all exceptions and interrupts **MUST**
481 be precise.
482
483 ## Vector result reduce mode
484
485 Vector Reduce Mode issues a deterministic tree-reduction schedule to the underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
486 (Power ISA v3.0B) operation is leveraged, unmodified, to give the
487 *appearance* and *effect* of Reduction.
488
489 Given that the tree-reduction schedule is deterministic,
490 Interrupts and exceptions
491 can therefore also be precise. The final result will be in the first
492 non-predicate-masked-out destination element, but due again to
493 the deterministic schedule programmers may find uses for the intermediate
494 results.
495
496 When Rc=1 a corresponding Vector of co-resultant CRs is also
497 created. No special action is taken: the result and its CR Field
498 are stored "as usual" exactly as all other SVP64 Rc=1 operations.
499
500 Note that the Schedule only makes sense on top of certain instructions:
501 X-Form with a Register Profile of `RT,RA,RB` is fine. Like Scalar
502 Reduction, nothing is prohibited:
503 the results of execution on an unsuitable instruction may simply
504 not make sense. Many 3-input instructions (madd, fmadd) unlike Scalar
505 Reduction in particular do not make sense, but `ternlogi`, if used
506 with care, would.
507
508 # Fail-on-first
509
Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
517
518 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
519 ordinary one. Exceptions occur "as normal". However for elements 1
520 and above, if an exception would occur, then VL is **truncated** to the
521 previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
524 branch, an analysis of the CR is performed and if the test fails, the
525 vector operation terminates and discards all element operations
526 above the current one (and the current one if VLi is not set),
527 and VL is truncated to either
528 the *previous* element or the current one, depending on whether
529 VLi (VL "inclusive") is set.
530
531 Thus the new VL comprises a contiguous vector of results,
532 all of which pass the testing criteria (equal to zero, less than zero).
533
534 The CR-based data-driven fail-on-first is new and not found in ARM
535 SVE or RVV. It is extremely useful for reducing instruction count,
536 however requires speculative execution involving modifications of VL
537 to get high performance implementations. An additional mode (RC1=1)
538 effectively turns what would otherwise be an arithmetic operation
539 into a type of `cmp`. The CR is stored (and the CR.eq bit tested
540 against the `inv` field).
541 If the CR.eq bit is equal to `inv` then the Vector is truncated and
542 the loop ends.
543 Note that when RC1=1 the result elements are never stored, only the CRs.
544
545 VLi is only available as an option when `Rc=0` (or for instructions
546 which do not have Rc). When set, the current element is always
547 also included in the count (the new length that VL will be set to).
548 This may be useful in combination with "inv" to truncate the Vector
549 to `exclude` elements that fail a test, or, in the case of implementations
550 of strncpy, to include the terminating zero.
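
The resulting VL can be modelled as follows (a sketch with invented
names, testing one CR bit per element as described above):

```
def ffirst_new_vl(cr_bits, inv, VLi, VL):
    # truncation occurs at the first element whose tested bit equals inv
    for i in range(VL):
        if cr_bits[i] == inv:
            return i + 1 if VLi else i   # VLi includes the failing element
    return VL                            # no element failed: VL unchanged
```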
551
552 In CR-based data-driven fail-on-first there is only the option to select
553 and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
555 (crand, cror) may be used, and ffirst applied to the crop instead of to
556 the arithmetic vector.
557
558 One extremely important aspect of ffirst is:
559
* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.
568
569 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
570 to a nonzero value for any implementation-specific reason. For example:
571 it is perfectly reasonable for implementations to alter VL when ffirst
572 LD or ST operations are initiated on a nonaligned boundary, such that
573 within a loop the subsequent iteration of that loop begins subsequent
574 ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
575 workloads or balance resources.
576
CR-based data-dependent fail-first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.
582
583 ## Data-dependent fail-first on CR operations (crand etc)
584
Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].
592
593 There are two primary different types of CR operations:
594
595 * Those which have a 3-bit operand field (referring to a CR Field)
596 * Those which have a 5-bit operand (referring to a bit within the
597 whole 32-bit CR)
598
599 More details can be found in [[sv/cr_ops]].
600
601 # pred-result mode
602
Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.
615
Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].
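
The essence of the mode in one line (a sketch, invented names):

```
def pred_result_bit(predbit, cr_field_test_passed):
    # a failed CR Field test behaves as if the predicate bit were zero
    return predbit and cr_field_test_passed
```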
618
619 # CR Operations
620
621 CRs are slightly more involved than INT or FP registers due to the
622 possibility for indexing individual bits (crops BA/BB/BT). Again however
623 the access pattern needs to be understandable in relation to v3.0B / v3.1B
624 numbering, with a clear linear relationship and mapping existing when
625 SV is applied.
626
627 ## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>
628
629 Numbering relationships for CR fields are already complex due to being
630 in BE format (*the relationship is not clearly explained in the v3.0B
631 or v3.1 specification*). However with some care and consideration
632 the exact same mapping used for INT and FP regfiles may be applied,
633 just to the upper bits, as explained below. The notation
634 `CR{field number}` is used to indicate access to a particular
635 Condition Register Field (as opposed to the notation `CR[bit]`
636 which accesses one bit of the 32 bit Power ISA v3.0B
637 Condition Register)
638
639 `CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is defined, in v3.0B pseudocode, as:
640
641 CR{7-n} = CR[32+n*4:35+n*4]
642
643 For SVP64 the relationship for the sequential
644 numbering of elements is to the CR **fields** within
645 the CR Register, not to individual bits within the CR register.
646
In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:
651
652 CR_index = 7-(BA>>2) # top 3 bits but BE
653 bit_index = 3-(BA & 0b11) # low 2 bits but BE
654 CR_reg = CR{CR_index} # get the CR
655 # finally get the bit from the CR.
656 CR_bit = (CR_reg & (1<<bit_index)) != 0
657
658 When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
659 applies, **not** the CR\_bit portion (bits 3-4):
660
661 if extra3_mode:
662 spec = EXTRA3
663 else:
664 spec = EXTRA2<<1 | 0b0
665 if spec[0]:
666 # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
667 return ((BA >> 2)<<6) | # hi 3 bits shifted up
668 (spec[1:2]<<4) | # to make room for these
669 (BA & 0b11) # CR_bit on the end
670 else:
671 # scalar constructs "00 spec[1:2] BA[0:4]"
672 return (spec[1:2] << 5) | BA
673
674 Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
675 algorithm to determine CR\_reg is modified to as follows:
676
677 CR_index = 7-(BA>>2) # top 3 bits but BE
678 if spec[0]:
679 # vector mode, 0-124 increments of 4
680 CR_index = (CR_index<<4) | (spec[1:2] << 2)
681 else:
682 # scalar mode, 0-32 increments of 1
683 CR_index = (spec[1:2]<<3) | CR_index
684 # same as for v3.0/v3.1 from this point onwards
685 bit_index = 3-(BA & 0b11) # low 2 bits but BE
686 CR_reg = CR{CR_index} # get the CR
687 # finally get the bit from the CR.
688 CR_bit = (CR_reg & (1<<bit_index)) != 0
689
690 Note here that the decoding pattern to determine CR\_bit does not change.
691
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
696
697 ## CR fields as inputs/outputs of vector operations
698
699 CRs (or, the arithmetic operations associated with them)
700 may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations that have no explicit EXTRA to cover the CR, the CR is Vectorised if the destination is Vectorised. Likewise if the destination is scalar then so is the CR.
701
When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:
706
707 * implementations may rely on the Vector CRs being aligned to 8. This
708 means that CRs may be read or written in aligned batches of 32 bits
709 (8 CRs per batch), for high performance implementations.
710 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
711 overwritten by vector Rc=1 operations except for very large VL
712 * CR-based predication, from CR32, is also not interfered with
713 (except by large VL).
714
715 However when the SV result (destination) is marked as a scalar by the
716 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
717 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
718 for FP operations.
719
720 Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
721 has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
722 v3.0B scalar operations produce a **tuple** of element results: the
723 result of the operation as one part of that element *and a corresponding
724 CR element*. Greatly simplified pseudocode:
725
726 for i in range(VL):
727 # calculate the vector result of an add
728 iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
729 # now calculate CR bits
730 CRs{8+i}.eq = iregs[RT+i] == 0
731 CRs{8+i}.gt = iregs[RT+i] > 0
732 ... etc
733
734 If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
735 then a followup instruction must be performed, setting "reduce" mode on
736 the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
737 more flexibility in analysing vectors than standard Vector ISAs. Normal
738 Vector ISAs are typically restricted to "were all results nonzero" and
739 "were some results nonzero". The application of mapreduce to Vectorised
740 cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].
742
743 Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
745 have the computation of the cumulative analysis CR as a bottleneck and
746 hindrance, regardless of the length of VL.
747
748 Additionally,
749 SVP64 [[sv/branches]] may be used, even when the branch itself is to
750 the following instruction. The combined side-effects of CTR reduction
751 and VL truncation provide several benefits.
752
(see [[discussion]]; some alternative schemes are described there)
754
755 ## Rc=1 when SUBVL!=1
756
Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of
predicate is allocated per subvector; likewise only one CR is allocated
per subvector.
760
This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.
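
As a sketch (invented names; whether OE selects OR or AND is, as noted,
dependent on the field being available):

```
def subvector_cr_eq(sub_results, use_or):
    # one CR per subvector: EQ set if any (OR) or all (AND) elements are zero
    tests = [r == 0 for r in sub_results]
    return any(tests) if use_or else all(tests)
```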
766
767 ### Table of CR fields
768
CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).
771
772 CRs are not stored in SPRs: they are registers in their own right.
773 Therefore context-switching the full set of CRs involves a Vectorised
774 mfcr or mtcr, using VL=8 to do so. This is exactly as how
775 scalar OpenPOWER context-switches CRs: it is just that there are now
776 more of them.
777
778 The 64 SV CRs are arranged similarly to the way the 128 integer registers
779 are arranged. TODO a python program that auto-generates a CSV file
780 which can be included in a table, which is in a new page (so as not to
781 overwhelm this one). [[svp64/cr_names]]
782
783 # Register Profiles
784
785 **NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
786 <https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.
787
788 Instructions are broken down by Register Profiles as listed in the
789 following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
790 indicates that the operations with this Register Profile cannot be
791 Vectorised (mtspr, bc, dcbz, twi)
792
793 TODO generate table which will be here [[svp64/reg_profiles]]
794
# SV pseudocode illustration
796
797 ## Single-predicated Instruction
798
Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.
801
802 function op_add(rd, rs1, rs2) # add not VADD!
803 int i, id=0, irs1=0, irs2=0;
804 predval = get_pred_val(FALSE, rd);
805 for (i = 0; i < VL; i++)
806 STATE.srcoffs = i # save context
807 if (predval & 1<<i) # predication uses intregs
808 ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
810 if (rd.isvec) { id += 1; }
811 if (rs1.isvec) { irs1 += 1; }
812 if (rs2.isvec) { irs2 += 1; }
813 if (id == VL or irs1 == VL or irs2 == VL)
814 {
815 # end VL hardware loop
816 STATE.srcoffs = 0; # reset
817 return;
818 }
819
820 This has several modes:
821
822 * RT.v = RA.v RB.v
823 * RT.v = RA.v RB.s (and RA.s RB.v)
824 * RT.v = RA.s RB.s
825 * RT.s = RA.v RB.v
826 * RT.s = RA.v RB.s (and RA.s RB.v)
827 * RT.s = RA.s RB.s
828
All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).
834
835 The one that is not obvious is RT=vector but both RA/RB=scalar.
836 Here this acts as a "splat scalar result", copying the same result into
837 all nonpredicated result elements. If a fixed destination scalar was
838 intended, then an all-Scalar operation should be used.
839
840 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
841
842 # Assembly Annotation
843
844 Assembly code annotation is required for SV to be able to successfully
845 mark instructions as "prefixed".
846
847 A reasonable (prototype) starting point:
848
849 svp64 [field=value]*
850
851 Fields:
852
853 * ew=8/16/32 - element width
854 * sew=8/16/32 - source element width
855 * vec=2/3/4 - SUBVL
856 * mode=mr/satu/sats/crpred
857 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
858
similar to the x86 "REX" prefix.
860
861 For actual assembler:
862
863 sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
864
865 Qualifiers:
866
867 * m={pred}: predicate mask mode
868 * sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
869 * vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
870 * ew={N}: ew=8/16/32 - sets elwidth override
871 * sw={N}: sw=8/16/32 - sets source elwidth override
872 * ff={xx}: see fail-first mode
873 * pr={xx}: see predicate-result mode
874 * sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
879 * sz: predication with source-zeroing
880 * dz: predication with dest-zeroing
881
882 For modes:
883
884 * pred-result:
885 - pm=lt/gt/le/ge/eq/ne/so/ns
886 - RC1 mode
887 * fail-first
888 - ff=lt/gt/le/ge/eq/ne/so/ns
889 - RC1 mode
890 * saturation:
891 - sats
892 - satu
893 * map-reduce:
894 - mr OR crm: "normal" map-reduce mode or CR-mode.
895 - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled
896
897 # Parallel-reduction algorithm
898
The principle of SVP64 is that it is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.
904
905 Executable demo pseudocode, full version
906 [here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/preduce.py;hb=HEAD)
907
908 ```
909 def preducei(vl, vec, pred):
910 vec = copy(vec)
911 pred = copy(pred) # must not damage predicate
912 step = 1
913 ix = list(range(vl)) # indices move rather than copy data
914 print(" start", step, pred, vec)
915 while step < vl:
916 step *= 2
917 for i in range(0, vl, step):
918 other = i + step // 2
919 ci = ix[i]
920 oi = ix[other] if other < vl else None
921 other_pred = other < vl and pred[oi]
922 if pred[ci] and other_pred:
923 vec[ci] += vec[oi]
924 elif other_pred:
925 ix[i] = oi # leave data in-place, copy index instead
926 pred[ci] |= other_pred
927 print(" row", step, pred, vec, ix)
928 return vec
929 ```
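
For example, with VL=4 and element 1 masked out, the sum of the
remaining elements lands in the first non-masked position:

```
result = preducei(4, [1, 2, 3, 4], [True, False, True, True])
# result == [8, 2, 7, 4]: 1 + 3 + 4 accumulated into element 0
```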
930
931 This algorithm works by noting when data remains in-place rather than
932 being reduced, and referring to that alternative position on subsequent
933 layers of reduction. It is re-entrant. If however interrupted and
934 restored, some implementations may take longer to re-establish the
935 context.
936
937 Its application by default is that:
938
939 * RA, FRA or BFA is the first register as the first operand
940 (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (oi index offset)
942 * RT (result) also uses ci **if RA==RT**
943
944 For more complex applications a REMAP Schedule must be used
945
*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*
949
950 *Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
951 implemented in hardware with MVs that ensure lane-crossing is minimised.
952 The mistake which would be catastrophic to SVP64 to make is to then
953 limit the Reduction Sequence for all implementors
954 based solely and exclusively on what one
955 specific internal microarchitecture does.
956 In SIMD ISAs the internal SIMD Architectural design is exposed and imposed on the programmer. Cray-style Vector ISAs on the other hand provide convenient,
957 compact and efficient encodings of abstract concepts.*
958 **It is the Implementor's responsibility to produce a design
959 that complies with the above algorithm,
960 utilising internal Micro-coding and other techniques to transparently
961 insert micro-architectural lane-crossing Move operations
962 if necessary or desired, to give the level of efficiency or performance
963 required.**
964
# Element-width overrides <a name="elwidth"> </a>
966
Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:
970
971 typedef union {
972 uint8_t b[];
973 uint16_t s[];
974 uint32_t i[];
975 uint64_t l[];
976 uint8_t actual_bytes[8];
977 } el_reg_t;
978
    el_reg_t int_regfile[128];
980
981 get_polymorphed_reg(reg, bitwidth, offset):
982 el_reg_t res;
983 res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
992 return res
993
994 set_polymorphed_reg(reg, bitwidth, offset, val):
995 if (!reg.isvec):
996 # not a vector: first element only, overwrites high bits
997 int_regfile[reg].l[0] = val
998 elif bitwidth == 8:
999 int_regfile[reg].b[offset] = val
1000 elif bitwidth == 16:
1001 int_regfile[reg].s[offset] = val
1002 elif bitwidth == 32:
1003 int_regfile[reg].i[offset] = val
1004 elif bitwidth == 64:
1005 int_regfile[reg].l[offset] = val
1006
1007 In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
1008 to fp127) are reinterpreted to be "starting points" in a byte-addressable
1009 memory. Vectors - which become just a virtual naming construct - effectively
1010 overlap.
1011
1012 It is extremely important for implementors to note that the only circumstance
1013 where upper portions of an underlying 64-bit register are zero'd out is
1014 when the destination is a scalar. The ideal register file has byte-level
1015 write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.
1016
1017 An example ADD operation with predication and element width overrides:
1018
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
1029
1030 Thus it can be clearly seen that elements are packed by their
1031 element width, and the packing starts from the source (or destination)
1032 specified by the instruction.
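
The byte-addressable view can be made concrete in real Python (a sketch;
`set_elem` is an invented name mirroring `set_polymorphed_reg` for a
vector destination):

```
import struct

regfile = bytearray(128 * 8)   # 128 GPRs viewed as byte-addressable memory

def set_elem(reg, elwidth, offset, val):
    fmt = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}[elwidth]
    struct.pack_into(fmt, regfile, reg * 8 + offset * elwidth // 8, val)

# element 5 of a 16-bit vector starting at r3 lands in bytes 2-3 of r4
set_elem(3, 16, 5, 0xBEEF)
```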
1033
1034 # Twin (implicit) result operations
1035
1036 Some operations in the Power ISA already target two 64-bit scalar
1037 registers: `lq` for example, and LD with update.
1038 Some mathematical algorithms are more
1039 efficient when there are two outputs rather than one, providing
1040 feedback loops between elements (the most well-known being add with
1041 carry). 64-bit multiply
1042 for example actually internally produces a 128 bit result, which clearly
1043 cannot be stored in a single 64 bit register. Some ISAs recommend
1044 "macro op fusion": the practice of setting a convention whereby if
1045 two commonly used instructions (mullo, mulhi) use the same ALU but
1046 one selects the low part of an identical operation and the other
1047 selects the high part, then optimised micro-architectures may
1048 "fuse" those two instructions together, using Micro-coding techniques,
1049 internally.
1050
1051 The practice and convention of macro-op fusion however is not compatible
1052 with SVP64 Horizontal-First, because Horizontal Mode may only
1053 be applied to a single instruction at a time, and SVP64 is based on
1054 the principle of strict Program Order even at the element
1055 level. Thus it becomes
1056 necessary to add explicit more complex single instructions with
1057 more operands than would normally be seen in the average RISC ISA
1058 (3-in, 2-out, in some cases). If it
1059 was not for Power ISA already having LD/ST with update as well as
1060 Condition Codes and `lq` this would be hard to justify.
1061
1062 With limited space in the `EXTRA` Field, and Power ISA opcodes
1063 being only 32 bit, 5 operands is quite an ask. `lq` however sets
1064 a precedent: `RTp` stands for "RT pair". In other words the result
1065 is stored in RT and RT+1. For Scalar operations, following this
1066 precedent is perfectly reasonable. In Scalar mode,
1067 `madded` therefore stores the two halves of the 128-bit multiply
1068 into RT and RT+1.
1069
1070 What, then, of `sv.madded`? If the destination is hard-coded to
1071 RT and RT+1 the instruction is not useful when Vectorised because
1072 the output will be overwritten on the next element. To solve this
1073 is easy: define the destination registers as RT and RT+MAXVL
1074 respectively. This makes it easy for compilers to statically allocate
1075 registers even when VL changes dynamically.
1076
1077 Bear in mind that both RT and RT+MAXVL are starting points for Vectors,
1078 and bear in mind that element-width overrides still have to be taken
1079 into consideration, the starting point for the implicit destination
1080 is best illustrated in pseudocode:
1081
1082 # demo of madded
1083  for (i = 0; i < VL; i++)
1084 if (predval & 1<<i) # predication
1085 src1 = get_polymorphed_reg(RA, srcwid, irs1)
1086 src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
1095 if (RA.isvec)  { irs1 += 1; }
1096 if (RB.isvec)  { irs2 += 1; }
1097 if (RC.isvec)  { irs3 += 1; }
1098
1099 The significant part here is that the second half is stored
1100 starting not from RT+MAXVL at all: it is the *element* index
1101 that is offset by MAXVL, both halves actually starting from RT.
1102 If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
1103 RT0 to RT2 are stored:
1104
1105 0..31 32..63
1106 r0 unchanged unchanged
1107 r1 RT0.lo RT1.lo
1108 r2 RT2.lo unchanged
1109 r3 unchanged RT0.hi
1110 r4 RT1.hi RT2.hi
1111 r5 unchanged unchanged
1112
Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.
1117
1118 *Programmer's note: accessing registers that have been placed
1119 starting on a non-contiguous boundary (half-way along a scalar
1120 register) can be inconvenient: REMAP can provide an offset but
1121 it requires extra instructions to set up. A simple solution
1122 is to ensure that MAXVL is rounded up such that the Vector
1123 ends cleanly on a contiguous register boundary. MAXVL=6 in
1124 the above example would achieve that*
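
The placements in the table can be verified with a short sketch
(invented helper; returns the GPR number and bit offset of 32-bit
element `idx` counting from RT):

```
def elem_location(RT, idx, elwidth=32):
    byte = idx * (elwidth // 8)            # byte offset from the start of RT
    return RT + byte // 8, (byte % 8) * 8  # (GPR number, bit offset)

for i in range(3):                         # VL=3, RT=1, MAXVL=5
    print("RT%d.lo" % i, elem_location(1, i))
    print("RT%d.hi" % i, elem_location(1, i + 5))   # hi half offset by MAXVL
```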
1125
1126 Additional DRAFT Scalar instructions in 3-in 2-out form
1127 with an implicit 2nd destination:
1128
1129 * [[isa/svfixedarith]]
1130 * [[isa/svfparith]]
1131