1 [[!tag standards]]
2
3 # Appendix
4
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
6 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
8 * <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
9
This is the appendix to [[sv/svp64]], providing explanations of modes
and other behaviour, leaving the main svp64 page to focus on
outlining the instruction format.
13
14 Table of contents:
15
16 [[!toc]]
17
18 # Partial Implementations
19
20 It is perfectly legal to implement subsets of SVP64 as long as illegal
21 instruction traps are always raised on unimplemented features,
22 so that soft-emulation is possible,
23 even for future revisions of SVP64. With SVP64 being partly controlled
24 through contextual SPRs, a little care has to be taken.
25
**All** SPRs that are not implemented, including those reserved for
future use, must raise an illegal
instruction trap if read or written. This gives software the
opportunity to emulate the context created by the given SPR.
30
31 **Embedded Scalar Scenario**
32
In this scenario an implementation does not wish to implement Vectorisation
but simply wishes to take advantage of predication or other features
of SVP64, such as instructions that might only be available if prefixed.
Such an implementation would be entirely free to do so with the proviso
that:
38
39 * any attempts to call `setvl` shall either raise an illegal instruction
40 or be partially implemented to set SVSTATE correctly.
41 * if SVSTATE contains any value in any bit that is not supported
42 in hardware, an illegal instruction shall be raised when an SVP64
43 prefixed instruction is executed.
44 * if SVSTATE contains values requesting supported features at the time
45 that the prefixed instruction is executed then it is executed in
46 hardware as per specification, with no illegal exception trap raised.
47
48 Example, assuming that hardware implements scalar operations only,
49 and implements predication but not elwidth overrides:
50
    setvli r0, 4           # sets VL equal to 4
    sv.addi r5, r0, 1      # raises an 0x700 trap
    setvli r0, 1           # sets VL equal to 1
    sv.addi r5, r0, 1      # gets executed by hardware
    sv.addi/ew=8 r5, r0, 1 # raises an 0x700 trap
    sv.ori/sm=EQ r5, r0, 1 # executed by hardware
57
The first `sv.addi` raises an illegal instruction trap because
VL has been set to 4, which is not supported. Likewise,
elwidth overrides, if requested, always raise illegal instruction
traps.
62
63 # XER, SO and other global flags
64
65 Vector systems are expected to be high performance. This is achieved
66 through parallelism, which requires that elements in the vector be
67 independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
68 Read-Write Hazards on single-bit global resources, having a significant
69 detrimental effect.
70
71 Consequently in SV, XER.SO behaviour is disregarded (including
72 in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
73 breaking the Read-Modify-Write Hazard Chain that complicates
74 microarchitectural implementations.
75 This includes when `scalar identity behaviour` occurs. If precise
76 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
77 instructions should be used without an SV Prefix.
78
79 TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf
80
81 Of note here is that XER.SO and OV may already be disregarded in the
82 Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
83 SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
84 but only for SVP64 Prefixed Operations.
85
86 XER.CA/CA32 on the other hand is expected and required to be implemented
87 according to standard Power ISA Scalar behaviour. Interestingly, due
88 to SVP64 being in effect a hardware for-loop around Scalar instructions
89 executing in precise Program Order, a little thought shows that a Vectorised
90 Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
91 and producing, at the end, a single bit Carry out. High performance
92 implementations may exploit this observation to deploy efficient
93 Parallel Carry Lookahead.
94
    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    ...
    adde r3, r7, r11 # likewise
103
104 It can clearly be seen that the carry chains from one
105 64 bit add to the next, the end result being that a
106 256-bit "Big Integer Add" has been performed, and that
107 CA contains the 257th bit. A one-instruction 512-bit Add
108 may be performed by setting VL=8, and a one-instruction
109 1024-bit add by setting VL=16, and so on.
110
111 # v3.0B/v3.1 relevant instructions
112
113 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
114 CPU ISA.
115
116 Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
117 the sole exceptions potentially being any operations with 128-bit
118 operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
119 Quad-precision Add).
SV effectively *replaces* VSX, requiring far fewer instructions, and provides,
at the very minimum, predication (which VSX was designed without).
Thus all VSX Major Opcodes - all of them - are "unused" and must raise
illegal instruction exceptions in SV Prefix Mode.
124
Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
keep: not only is their functionality provided by SV, the SV alternatives
may be predicated as well, making them far better suited to use in function
calls and context-switching.
129
130 Additionally, some v3.0/1 instructions simply make no sense at all in a
131 Vector context: `rfid` falls into this category,
132 as well as `sc` and `scv`. Here there is simply no point
133 trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
134 should be called instead.
135
136 Fortuitously this leaves several Major Opcodes free for use by SV
137 to fit alternative future instructions. In a 3D context this means
138 Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
139 operations, and others critical to an efficient, effective 3D GPU and
140 VPU ISA. With such instructions being included as standard in other
141 commercially-successful GPU ISAs it is likewise critical that a 3D
142 GPU/VPU based on svp64 also have such instructions.
143
144 Note however that svp64 is stand-alone and is in no way
145 critically dependent on the existence or provision of 3D GPU or VPU
146 instructions. These should be considered extensions, and their discussion
147 and specification is out of scope for this document.
148
149 Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
150 v3.1B is *not* altered by svp64 in any way.
151
152 ## Major opcode map (v3.0B)
153
154 This table is taken from v3.0B.
155 Table 9: Primary Opcode Map (opcode bits 0:5)
156
157 ```
158 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
159 000 | | | tdi | twi | EXT04 | | | mulli | 000
160 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
161 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm | | rlwnm | 010
162 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
163 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
164 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw | 101
165 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
166 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 | 111
167 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
168 ```
169
170 ## Suitable for svp64-only
171
172 This is the same table containing v3.0B Primary Opcodes except those that
173 make no sense in a Vectorisation Context have been removed. These removed
174 POs can, *in the SV Vector Context only*, be assigned to alternative
175 (Vectorised-only) instructions, including future extensions.
176 EXT04 retains the scalar `madd*` operations but would have all PackedSIMD
177 (aka VSX) operations removed.
178
179 Note, again, to emphasise: outside of svp64 these opcodes **do not**
180 change. When not prefixed with svp64 these opcodes **specifically**
181 retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.
182
183 ```
184 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
185 000 | | | | | EXT04 | | | mulli | 000
186 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
187 010 | bc/l/a | | | EXT19 | rlwimi| rlwinm | | rlwnm | 010
188 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
189 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
190 101 | lhz | lhzu | lha | lhau | sth | sthu | | | 101
191 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
192 111 | | | EXT58 | EXT59 | | EXT61 | | EXT63 | 111
193 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
194 ```
195
It is important to note that giving an SVP64 opcode a meaning that
differs from its v3.0B Scalar meaning is highly undesirable: the
complexity in the decoder is greatly increased.
199
200 # EXTRA Field Mapping
201
202 The purpose of the 9-bit EXTRA field mapping is to mark individual
203 registers (RT, RA, BFA) as either scalar or vector, and to extend
204 their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
205 Three of the 9 bits may also be used up for a 2nd Predicate (Twin
206 Predication) leaving a mere 6 bits for qualifying registers. As can
207 be seen there is significant pressure on these (and in fact all) SVP64 bits.
208
209 In Power ISA v3.1 prefixing there are bits which describe and classify
210 the prefix in a fashion that is independent of the suffix. MLSS for
211 example. For SVP64 there is insufficient space to make the SVP64 Prefix
212 "self-describing", and consequently every single Scalar instruction
213 had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
214 This process was semi-automated and is described in this section.
215 The final results, which are part of the SVP64 Specification, are here:
216
217 * [[openpower/opcode_regs_deduped]]
218
219 Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
220 from reading the markdown formatted version of the Scalar pseudocode
221 which is machine-readable and found in [[openpower/isatables]]. The
222 analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
223 for example is given a designation `RM-2R-1W` because it requires
224 two GPR reads and one GPR write.
225
226 Secondly, the total number of registers was added up (2R-1W is 3 registers)
227 and if less than or equal to three then that instruction could be given an
228 EXTRA3 designation. Four or more is given an EXTRA2 designation because
229 there are only 9 bits available.
230
Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was the case if there
was only a single operand and a single result (`exts*` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
238
Fourthly, a packing format was decided: for 2R-1W, for example, an
EXTRA3 indexing was chosen
in which RA is indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
(because it is possible to do, and perceived to be useful). Rc=1
co-results (CR0, CR1) are always given the same EXTRA index as their
main result (RT, FRT).
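
As an illustrative sketch only (the canonical encodings are in the
auto-generated tables linked above; bit-ordering is shown LSB0 here
for brevity, whereas the specification uses MSB0), unpacking the three
EXTRA3 sub-fields under this packing might look like:

    def extra3_fields(extra9):
        ra_spec = (extra9 >> 0) & 0b111 # EXTRA3 index 0 (RA)
        rb_spec = (extra9 >> 3) & 0b111 # EXTRA3 index 1 (RB)
        rt_spec = (extra9 >> 6) & 0b111 # EXTRA3 index 2 (RT)
        return ra_spec, rb_spec, rt_spec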
247
Fifthly, in an automated process the results of the analysis
were output in CSV format for use in machine-readable form
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
251
This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.
262
263 # Single Predication
264
This is a standard mode normally found in Vector ISAs: every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.
266
267 In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep: unlike Twin-Predication the two must be equal at all times.
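
A minimal sketch (zeroing and elwidth overrides omitted) shows that one
index serves both source and destination:

    for i in range(VL):
        if (mask >> i) & 1: # one predicate bit governs src and dest
            iregs[RT+i] = op(iregs[RA+i], iregs[RB+i])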
268
269 # Twin Predication
270
This is a novel concept that allows predication to be applied to a single
source and a single destination register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
275
276 * VSPLAT (a single scalar distributed across a vector)
277 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
278 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
279 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
280 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
281
282 Those patterns (and more) may be applied to:
283
284 * mv (the usual way that V\* ISA operations are created)
285 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
289 * LD and ST (treating AGEN as one source)
290 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
291 * Condition Register ops mfcr, mtcr and other similar
292
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
295
296 Additional unusual capabilities of Twin Predication include a back-to-back
297 version of VCOMPRESS-VEXPAND which is effectively the ability to do
298 sequentially ordered multiple VINSERTs. The source predicate selects a
299 sequentially ordered subset of elements to be inserted; the destination
300 predicate specifies the sequentially ordered recipient locations.
301 This is equivalent to
302 `llvm.masked.compressstore.*`
303 followed by
304 `llvm.masked.expandload.*`
305 with a single instruction.
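
A minimal sketch of a twin-predicated `sv.mv` (zeroing and elwidth
overrides omitted, predicate masks shown as integers) illustrates how
the two predicates are stepped independently:

    srcstep = 0
    dststep = 0
    while srcstep < VL and dststep < VL:
        # skip masked-out source and destination elements
        while srcstep < VL and not (srcmask >> srcstep) & 1:
            srcstep += 1
        while dststep < VL and not (dstmask >> dststep) & 1:
            dststep += 1
        if srcstep < VL and dststep < VL:
            iregs[RT + dststep] = iregs[RA + srcstep]
            srcstep += 1
            dststep += 1
        else:
            break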
306
307 This extreme power and flexibility comes down to the fact that SVP64
308 is not actually a Vector ISA: it is a loop-abstraction-concept that
309 is applied *in general* to Scalar operations, just like the x86
310 `REP` instruction (if put on steroids).
311
312 # Reduce modes
313
314 Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
315 Vector ISA would have explicit Reduce opcodes with defined characteristics
316 per operation: in SX Aurora there is even an additional scalar argument
317 containing the initial reduction value, and the default is either 0
318 or 1 depending on the specifics of the explicit opcode.
319 SVP64 fundamentally has to
320 utilise *existing* Scalar Power ISA v3.0B operations, which presents some
321 unique challenges.
322
323 The solution turns out to be to simply define reduction as permitting
324 deterministic element-based schedules to be issued using the base Scalar
325 operations, and to rely on the underlying microarchitecture to resolve
326 Register Hazards at the element level. This goes back to
327 the fundamental principle that SV is nothing more than a Sub-Program-Counter
328 sitting between Decode and Issue phases.
329
330 Microarchitectures *may* take opportunities to parallelise the reduction
331 but only if in doing so they preserve Program Order at the Element Level.
332 Opportunities where this is possible include an `OR` operation
333 or a MIN/MAX operation: it may be possible to parallelise the reduction,
334 but for Floating Point it is not permitted due to different results
335 being obtained if the reduction is not executed in strict Program-Sequential
336 Order.
337
338 In essence it becomes the programmer's responsibility to leverage the
339 pre-determined schedules to desired effect.
340
341 ## Scalar result reduction and iteration
342
Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate if the destination was marked
as a Scalar. Scalar Reduction by contrast *keeps issuing Vector Element
Operations* even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.
351
352 It is also important to appreciate that there is no
353 actual imposition or restriction on how this mode is utilised: there
354 will therefore be several valuable uses (including Vector Iteration
355 and "Reverse-Gear")
356 and it is up to the programmer to make best use of the
357 (strictly deterministic) capability
358 provided.
359
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:
363
364 * One of the sources is a Vector
365 * the destination is a scalar
366 * optionally but most usefully when one source scalar register is
367 also the scalar destination (which may be informally termed
368 the "accumulator")
369 * That the source register type is the same as the destination register
370 type identified as the "accumulator". Scalar reduction on `cmp`,
371 `setb` or `isel` makes no sense for example because of the mixture
372 between CRs and GPRs.
373
*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly nonsensical operations is too great.
Therefore such operations are permitted and required to execute successfully.
380 Implementors **MAY** choose to optimise such instructions in instances
381 where their use results in "extraneous execution", i.e. where it is clear
382 that the sequence of operations, comprising multiple overwrites to
383 a scalar destination **without** cumulative, iterative, or reductive
384 behaviour (no "accumulator"), may discard all but the last element
385 operation. Identification
386 of such is trivial to do for `setb` and `cmp`: the source register type is
387 a completely different register file from the destination.
388 Likewise Scalar reduction when the destination is a Vector
389 is as if the Reduction Mode was not requested.*
390
391 Typical applications include simple operations such as `ADD r3, r10.v,
392 r3` where, clearly, r3 is being used to accumulate the addition of all
393 elements of the vector starting at r10.
394
    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA
398
However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`),
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.
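
For illustration, the `sv.add/mr r3, r10.v, r3` accumulator example
above expands, with VL=4, to the following element operations, issued
in Program Order:

    add r3, r10, r3
    add r3, r11, r3
    add r3, r12, r3
    add r3, r13, r3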
404
To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.
413
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.
421
    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v
428
429 Due to the Deterministic Scheduling,
430 Subtract and Divide are still permitted to be executed in this mode,
431 although from an algorithmic perspective it is strongly discouraged.
432 It would be better to use addition followed by one final subtract,
433 or in the case of divide, to get better accuracy, to perform a multiply
434 cascade followed by a final divide.
435
436 Note that single-operand or three-operand scalar-dest reduce is perfectly
437 well permitted: the programmer may still declare one register, used as
438 both a Vector source and Scalar destination, to be utilised as
439 the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
440 this naturally fits well with the normal expected usage of these
441 operations.
442
If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.
450
451 Note that hardware is perfectly permitted to perform multi-issue
452 parallel optimisation of the scalar reduce operation: it's just that
453 as far as the user is concerned, all exceptions and interrupts **MUST**
454 be precise.
455
456 ## Vector result reduce mode
457
Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.
461
462 Given that the tree-reduction schedule is deterministic,
463 Interrupts and exceptions
464 can therefore also be precise. The final result will be in the first
465 non-predicate-masked-out destination element, but due again to
466 the deterministic schedule programmers may find uses for the intermediate
467 results.
468
469 When Rc=1 a corresponding Vector of co-resultant CRs is also
470 created. No special action is taken: the result and its CR Field
471 are stored "as usual" exactly as all other SVP64 Rc=1 operations.
472
473 ## Sub-Vector Horizontal Reduction
474
Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:
478
    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)
483
484 Thus logically there is nothing special or unanticipated about
485 `SVM=0`: it is expected behaviour according to standard SVP64
486 Sub-Vector rules.
487
488 By contrast, when SVM is set and SUBVL!=1, a Horizontal
489 Subvector mode is enabled, which behaves very much more
490 like a traditional Vector Processor Reduction instruction.
491 Example for a vec3:
492
    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result
498
499 In this mode, when Rc=1 the Vector of CRs is as normal: each result
500 element creates a corresponding CR element (for the final, reduced, result).
501
502 # Fail-on-first
503
Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
511
512 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
513 ordinary one. Exceptions occur "as normal". However for elements 1
514 and above, if an exception would occur, then VL is **truncated** to the
515 previous element.
516 * Data-driven (CR-driven) fail-on-first activates when Rc=1 or other
517 CR-creating operation produces a result (including cmp). Similar to
518 branch, an analysis of the CR is performed and if the test fails, the
519 vector operation terminates and discards all element operations
520 above the current one (and the current one if VLi is not set),
521 and VL is truncated to either
522 the *previous* element or the current one, depending on whether
523 VLi (VL "inclusive") is set.
524
525 Thus the new VL comprises a contiguous vector of results,
526 all of which pass the testing criteria (equal to zero, less than zero).
527
528 The CR-based data-driven fail-on-first is new and not found in ARM
529 SVE or RVV. It is extremely useful for reducing instruction count,
530 however requires speculative execution involving modifications of VL
531 to get high performance implementations. An additional mode (RC1=1)
532 effectively turns what would otherwise be an arithmetic operation
533 into a type of `cmp`. The CR is stored (and the CR.eq bit tested
534 against the `inv` field).
535 If the CR.eq bit is equal to `inv` then the Vector is truncated and
536 the loop ends.
537 Note that when RC1=1 the result elements are never stored, only the CRs.
538
VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.
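
As an illustrative sketch (the mode qualifiers follow the Assembly
Annotation conventions later in this appendix; the exact operand syntax
is an assumption, not definitive): truncating VL at a string's
terminating zero, inclusive, so that a subsequent Vectorised store
copies the zero as well:

    setvli r0, 8                    # candidate VL of 8
    sv.cmpi/ff=eq/vli cr0, r4.v, 0  # truncate VL at first zero, inclusive
    # VL now = number of elements up to and including the first zero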
545
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.
551
552 One extremely important aspect of ffirst is:
553
* LDST ffirst may never set VL equal to zero. This is because on the first
element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
to zero. This is the only means in the entirety of SV that VL may be set
to zero (with the exception of via the SVSTATE SPR). When VL is set to
zero due to the first element failing the CR bit-test, all subsequent
vectorised operations are effectively `nops` which is
*precisely the desired and intended behaviour*.
562
563 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
564 to a nonzero value for any implementation-specific reason. For example:
565 it is perfectly reasonable for implementations to alter VL when ffirst
566 LD or ST operations are initiated on a nonaligned boundary, such that
567 within a loop the subsequent iteration of that loop begins subsequent
568 ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
569 workloads or balance resources.
570
CR-based data-dependent fail-first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.
576
577 ## Data-dependent fail-first on CR operations (crand etc)
578
Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].
586
587 There are two primary different types of CR operations:
588
589 * Those which have a 3-bit operand field (referring to a CR Field)
590 * Those which have a 5-bit operand (referring to a bit within the
591 whole 32-bit CR)
592
593 More details can be found in [[sv/cr_ops]].
594
595 # pred-result mode
596
Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails the result is treated
*as if* the destination predicate bit was zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].
602
Pred-result mode may not be applied to CR ops.

Although CR operations (mtcr, crand, cror) may be Vectorised
and predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.
608
609 # CR Operations
610
611 CRs are slightly more involved than INT or FP registers due to the
612 possibility for indexing individual bits (crops BA/BB/BT). Again however
613 the access pattern needs to be understandable in relation to v3.0B / v3.1B
614 numbering, with a clear linear relationship and mapping existing when
615 SV is applied.
616
617 ## CR EXTRA mapping table and algorithm
618
619 Numbering relationships for CR fields are already complex due to being
620 in BE format (*the relationship is not clearly explained in the v3.0B
621 or v3.1 specification*). However with some care and consideration
622 the exact same mapping used for INT and FP regfiles may be applied,
623 just to the upper bits, as explained below. The notation
624 `CR{field number}` is used to indicate access to a particular
625 Condition Register Field (as opposed to the notation `CR[bit]`
626 which accesses one bit of the 32 bit Power ISA v3.0B
627 Condition Register)
628
629 `CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is defined, in v3.0B pseudocode, as:
630
    CR{7-n} = CR[32+n*4:35+n*4]
632
633 For SVP64 the relationship for the sequential
634 numbering of elements is to the CR **fields** within
635 the CR Register, not to individual bits within the CR register.
636
In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF, which selects a whole
CR Field directly, is 3 bits). The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:
641
    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
647
648 When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
649 applies, **not** the CR\_bit portion (bits 3:4):
650
    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA
663
Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:
666
    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
679
680 Note here that the decoding pattern to determine CR\_bit does not change.
681
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
686
687 ## CR fields as inputs/outputs of vector operations
688
CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.
691
When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:
696
697 * implementations may rely on the Vector CRs being aligned to 8. This
698 means that CRs may be read or written in aligned batches of 32 bits
699 (8 CRs per batch), for high performance implementations.
700 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
701 overwritten by vector Rc=1 operations except for very large VL
702 * CR-based predication, from CR32, is also not interfered with
703 (except by large VL).
704
705 However when the SV result (destination) is marked as a scalar by the
706 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
707 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
708 for FP operations.
709
710 Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
711 has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
712 v3.0B scalar operations produce a **tuple** of element results: the
713 result of the operation as one part of that element *and a corresponding
714 CR element*. Greatly simplified pseudocode:
715
    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc
723
724 If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
725 then a followup instruction must be performed, setting "reduce" mode on
726 the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
727 more flexibility in analysing vectors than standard Vector ISAs. Normal
728 Vector ISAs are typically restricted to "were all results nonzero" and
729 "were some results nonzero". The application of mapreduce to Vectorised
730 cr operations allows far more sophisticated analysis, particularly in
731 conjunction with the new crweird operations see [[sv/cr_int_predication]].
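
Conceptually (python-style pseudocode using this page's conventions,
not real assembler), a VSX-CR6-style "were all results zero" analysis
then becomes an AND-reduction over the Vector of CR `eq` bits:

    # assuming Vectorised Rc=1 results began at CR8
    all_zero = True
    for i in range(VL):
        all_zero = all_zero and CRs{8+i}.eq # crand in reduce mode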
732
Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.
737
738 Additionally,
739 SVP64 [[sv/branches]] may be used, even when the branch itself is to
740 the following instruction. The combined side-effects of CTR reduction
741 and VL truncation provide several benefits.
742
(see [[discussion]]; some alternative schemes are described there)
744
745 ## Rc=1 when SUBVL!=1
746
Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of
predicate is allocated per subvector; likewise only one CR is allocated
per subvector.
750
This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.
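
A sketch of the OR-combining case for a vec2, reusing this page's
pseudocode conventions (Vectorised CRs assumed to begin at CR8):

    for i in range(VL):
        # one CR Field per subvector: OR (or AND, selected via OE)
        # of the per-subelement tests
        CRs{8+i}.eq = (iregs[RT+i].x == 0) or (iregs[RT+i].y == 0)
        CRs{8+i}.gt = (iregs[RT+i].x > 0) or (iregs[RT+i].y > 0)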
756
757 ### Table of CR fields
758
CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).
761
CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.
767
768 The 64 SV CRs are arranged similarly to the way the 128 integer registers
769 are arranged. TODO a python program that auto-generates a CSV file
770 which can be included in a table, which is in a new page (so as not to
771 overwhelm this one). [[svp64/cr_names]]
772
773 # Register Profiles
774
775 **NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
776 <https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.
777
778 Instructions are broken down by Register Profiles as listed in the
779 following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
780 indicates that the operations with this Register Profile cannot be
781 Vectorised (mtspr, bc, dcbz, twi)
782
783 TODO generate table which will be here [[svp64/reg_profiles]]
784
# SV pseudocode illustration
786
787 ## Single-predicated Instruction
788
Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.
791
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }
809
810 This has several modes:
811
812 * RT.v = RA.v RB.v
813 * RT.v = RA.v RB.s (and RA.s RB.v)
814 * RT.v = RA.s RB.s
815 * RT.s = RA.v RB.v
816 * RT.s = RA.v RB.s (and RA.s RB.v)
817 * RT.s = RA.s RB.s
818
All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).
824
825 The one that is not obvious is RT=vector but both RA/RB=scalar.
826 Here this acts as a "splat scalar result", copying the same result into
827 all nonpredicated result elements. If a fixed destination scalar was
828 intended, then an all-Scalar operation should be used.
829
830 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
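
A brief illustration of the splat case, using the `.v`/`.s` annotation
from the Assembly Annotation section below:

    sv.add r8.v, r3.s, r4.s # r8 .. r8+VL-1 all receive r3+r4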
831
832 # Assembly Annotation
833
834 Assembly code annotation is required for SV to be able to successfully
835 mark instructions as "prefixed".
836
837 A reasonable (prototype) starting point:
838
    svp64 [field=value]*
840
841 Fields:
842
843 * ew=8/16/32 - element width
844 * sew=8/16/32 - source element width
845 * vec=2/3/4 - SUBVL
846 * mode=mr/satu/sats/crpred
847 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
848
849 similar to x86 "rex" prefix.
850
851 For actual assembler:
852
    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
854
855 Qualifiers:
856
857 * m={pred}: predicate mask mode
858 * sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
859 * vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
860 * ew={N}: ew=8/16/32 - sets elwidth override
861 * sw={N}: sw=8/16/32 - sets source elwidth override
862 * ff={xx}: see fail-first mode
863 * pr={xx}: see predicate-result mode
864 * sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
869 * sz: predication with source-zeroing
870 * dz: predication with dest-zeroing
871
872 For modes:
873
874 * pred-result:
875 - pm=lt/gt/le/ge/eq/ne/so/ns
876 - RC1 mode
877 * fail-first
878 - ff=lt/gt/le/ge/eq/ne/so/ns
879 - RC1 mode
880 * saturation:
881 - sats
882 - satu
883 * map-reduce:
884 - mr OR crm: "normal" map-reduce mode or CR-mode.
885 - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled
886
887 # Proposed Parallel-reduction algorithm
888
889 **This algorithm contains a MV operation and may NOT be used. Removal
890 of the MV operation may be achieved by using index-redirection as was
891 achieved in DCT and FFT REMAP**
892
893 ```
894 /// reference implementation of proposed SimpleV reduction semantics.
895 ///
896 // reduction operation -- we still use this algorithm even
897 // if the reduction operation isn't associative or
898 // commutative.
899 XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
900 /// XXX `pred` is a user-visible Vector Condition register XXXX
901 XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
902 ///
903 /// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX VIOLATION OF SVP64 DESIGN XXX
                XXX vec[i] = vec[other]; XXX
                XXX VIOLATION OF SVP64 DESIGN XXX
            pred[i] |= other_pred;
919 ```
920
921 The first principle in SVP64 being violated is that SVP64 is a fully-independent
922 Abstraction of hardware-looping in between issue and execute phases
923 that has no relation to the operation it issues. The above pseudocode
924 conditionally changes not only the type of element operation issued
925 (a MV in some cases) but also the number of arguments (2 for a MV).
At the very least, for Vertical-First Mode this will result in
unanticipated and unexpected behaviour (maximising "surprises" for
programmers) in the middle of loops, which will be far too hard to explain.
928
929 The second principle being violated by the above algorithm is the expectation
930 that temporary storage is available for a modified predicate: there is no
931 such space, and predicates are read-only to reduce complexity at the
932 micro-architectural level.
933 SVP64 is founded on the principle that all operations are
934 "re-entrant" with respect to interrupts and exceptions: SVSTATE must
935 be saved and restored alongside PC and MSR, but nothing more. It is perfectly
936 fine to have context-switching back to the operation be somewhat slower,
937 through "reconstruction" of temporary internal state based on what SVSTATE
938 contains, but nothing more.
939
940 An alternative algorithm is therefore required that does not perform MVs,
941 and does not require additional state to be saved on context-switching.
942
943 ```
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi.append(i)
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
                vi[ir] = vi[other] # index redirection, no MV
            pred[ir] |= other_pred # reconstructed on context-switch
        step *= 2
965 ```
966
In this version the need for an explicit MV is removed by instead
leaving elements *in situ*. The internal modifications to the predicate may,
due to the reduction being entirely deterministic, be "reconstructed"
on a context-switch. This may make some implementations slower.
971
972 *Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
973 implemented in hardware with MVs that ensure lane-crossing is minimised.
974 The mistake which would be catastrophic to SVP64 to make is to then
975 limit the Reduction Sequence for all implementors
976 based solely and exclusively on what one
977 specific internal microarchitecture does.
978 In SIMD ISAs the internal SIMD Architectural design is exposed and imposed on the programmer. Cray-style Vector ISAs on the other hand provide convenient,
979 compact and efficient encodings of abstract concepts.
980 It is the Implementor's responsibility to produce a design
981 that complies with the above algorithm,
982 utilising internal Micro-coding and other techniques to transparently
983 insert MV operations
984 if necessary or desired, to give the level of efficiency or performance
985 required.*
986
987 # Element-width overrides
988
Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:
992
    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1015
    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
1028
1029 In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
1030 to fp127) are reinterpreted to be "starting points" in a byte-addressable
1031 memory. Vectors - which become just a virtual naming construct - effectively
1032 overlap.
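
A brief worked sketch using the functions above: with a destination
elwidth of 8 and VL=8, all eight elements of a Vector starting at r3
land inside the one underlying 64-bit register:

    # ew=8, VL=8: each element occupies one byte of r3
    for i in range(8):
        set_polymorphed_reg(3, 8, i, src[i]) # writes int_regfile[3].b[i]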
1033
It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zeroed out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.
1038
1039 An example ADD operation with predication and element width overrides:
1040
    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
      if (RT.isvec) { ird += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }
1051
1052 Thus it can be clearly seen that elements are packed by their
1053 element width, and the packing starts from the source (or destination)
1054 specified by the instruction.
1055
1056 # Twin (implicit) result operations
1057
1058 Some operations in the Power ISA already target two 64-bit scalar
1059 registers: `lq` for example, and LD with update.
1060 Some mathematical algorithms are more
1061 efficient when there are two outputs rather than one, providing
1062 feedback loops between elements (the most well-known being add with
1063 carry). 64-bit multiply
1064 for example actually internally produces a 128 bit result, which clearly
1065 cannot be stored in a single 64 bit register. Some ISAs recommend
1066 "macro op fusion": the practice of setting a convention whereby if
1067 two commonly used instructions (mullo, mulhi) use the same ALU but
1068 one selects the low part of an identical operation and the other
1069 selects the high part, then optimised micro-architectures may
1070 "fuse" those two instructions together, using Micro-coding techniques,
1071 internally.
1072
1073 The practice and convention of macro-op fusion however is not compatible
1074 with SVP64 Horizontal-First, because Horizontal Mode may only
1075 be applied to a single instruction at a time, and SVP64 is based on
1076 the principle of strict Program Order even at the element
1077 level. Thus it becomes
1078 necessary to add explicit more complex single instructions with
1079 more operands than would normally be seen in the average RISC ISA
1080 (3-in, 2-out, in some cases). If it
1081 was not for Power ISA already having LD/ST with update as well as
1082 Condition Codes and `lq` this would be hard to justify.
1083
1084 With limited space in the `EXTRA` Field, and Power ISA opcodes
1085 being only 32 bit, 5 operands is quite an ask. `lq` however sets
1086 a precedent: `RTp` stands for "RT pair". In other words the result
1087 is stored in RT and RT+1. For Scalar operations, following this
1088 precedent is perfectly reasonable. In Scalar mode,
1089 `madded` therefore stores the two halves of the 128-bit multiply
1090 into RT and RT+1.
1091
1092 What, then, of `sv.madded`? If the destination is hard-coded to
1093 RT and RT+1 the instruction is not useful when Vectorised because
1094 the output will be overwritten on the next element. To solve this
1095 is easy: define the destination registers as RT and RT+MAXVL
1096 respectively. This makes it easy for compilers to statically allocate
1097 registers even when VL changes dynamically.
1098
Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:
1103
    # demo of madded
    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
      if (RT.isvec) { ird += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }
      if (RC.isvec) { irs3 += 1; }
1120
1121 The significant part here is that the second half is stored
1122 starting not from RT+MAXVL at all: it is the *element* index
1123 that is offset by MAXVL, both halves actually starting from RT.
1124 If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
1125 RT0 to RT2 are stored:
1126
         0..31     32..63
    r0   unchanged unchanged
    r1   RT0.lo    RT1.lo
    r2   RT2.lo    unchanged
    r3   unchanged RT0.hi
    r4   RT1.hi    RT2.hi
    r5   unchanged unchanged
1134
Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.
1139
1140 *Programmer's note: accessing registers that have been placed
1141 starting on a non-contiguous boundary (half-way along a scalar
1142 register) can be inconvenient: REMAP can provide an offset but
1143 it requires extra instructions to set up. A simple solution
1144 is to ensure that MAXVL is rounded up such that the Vector
1145 ends cleanly on a contiguous register boundary. MAXVL=6 in
1146 the above example would achieve that*
1147
1148 Additional DRAFT Scalar instructions in 3-in 2-out form
1149 with an implicit 2nd destination:
1150
1151 * [[isa/svfixedarith]]
1152 * [[isa/svfparith]]
1153