[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]
# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including those reserved for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given
SPR.

**Embedded Scalar Scenario**

In this scenario an implementation does not wish to implement
Vectorisation but simply wishes to take advantage of predication or
other features of SVP64, such as instructions that might only be
available if prefixed. Such an implementation would be entirely free
to do so with the proviso that:

* any attempt to call `setvl` shall either raise an illegal instruction
  trap or be partially implemented to set SVSTATE correctly.
* if SVSTATE contains any value in any bit that is not supported
  in hardware, an illegal instruction shall be raised when an SVP64
  prefixed instruction is executed.
* if SVSTATE contains values requesting supported features at the time
  that the prefixed instruction is executed then it is executed in
  hardware as per specification, with no illegal exception trap raised.

Example, assuming that hardware implements scalar operations only,
and implements predication but not elwidth overrides:

    setvli r0, 4            # sets VL equal to 4
    sv.addi r5, r0, 1       # raises an 0x700 trap
    setvli r0, 1            # sets VL equal to 1
    sv.addi r5, r0, 1       # gets executed by hardware
    sv.addi/ew=8 r5, r0, 1  # raises an 0x700 trap
    sv.ori/sm=EQ r5, r0, 1  # executed by hardware

The first `sv.addi` raises an illegal instruction trap because VL has
been set to 4, and Vectorisation is not supported. Likewise elwidth
overrides, if requested, always raise illegal instruction traps.

**Full implementation (current revision) scenario**

In this scenario, SVP64 is implemented as it stands in its entirety.
However a future revision or a competitor processor decides to also
implement portions of Quad-Precision VSX as SVP64-Vectorised.
Compatibility is **only** achieved if the earlier implementor raises
illegal instruction exceptions on **all** unimplemented opcodes within
the SVP64-Prefixed space, *even those marked by the Scalar Power ISA as
not needing to raise illegal instructions*.

Additionally, suppose a future version of the specification adds a new
feature, requiring an additional SPR. This SPR was, at the time of
implementation, marked as "Reserved". The early implementation raises
an illegal instruction trap when this SPR is read or written, and
consequently has the opportunity to trap-and-emulate the full
capability of the revised version of the SVP64 Specification.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO/OV and other global "accumulation" flags (CR.SO)
cause Read-Write Hazards on single-bit global resources, having a
significant detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including in `cmp`
instructions). XER.SO is not read, but XER.OV may be written, breaking
the Read-Modify-Write Hazard Chain that complicates microarchitectural
implementations. This includes when `scalar identity behaviour`
occurs. If precise OpenPOWER v3.0/1 scalar behaviour is desired then
OpenPOWER v3.0/1 instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other
Subsets, but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be
implemented according to standard Power ISA Scalar behaviour.
Interestingly, due to SVP64 being in effect a hardware for-loop around
Scalar instructions executing in precise Program Order, a little
thought shows that a Vectorised Carry-In-Out add is in effect a Big
Integer Add, taking a single-bit Carry In and producing, at the end, a
single-bit Carry Out. High performance implementations may exploit
this observation to deploy efficient Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one 64-bit add to
the next, the end result being that a 256-bit "Big Integer Add" has
been performed, and that CA contains the 257th bit. A one-instruction
512-bit Add may be performed by setting VL=8, and a one-instruction
1024-bit Add by setting VL=16, and so on.
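
A minimal Python sketch of what that element-level schedule computes
(the limb layout and function names are illustrative, not part of the
specification):

```
# Model of sv.adde with VL=4: a 256-bit add expressed as four 64-bit
# limbs with the carry chained from one element to the next.
def sv_adde(ra_limbs, rb_limbs, carry_in=0):
    mask = (1 << 64) - 1
    result = []
    ca = carry_in
    for a, b in zip(ra_limbs, rb_limbs):  # precise Program Order
        s = a + b + ca
        result.append(s & mask)
        ca = s >> 64                      # carry-out feeds next element
    return result, ca                     # ca is the 257th bit

# 256-bit add: r0-r3 = r4-r7 + r8-r11
limbs_a = [0xFFFFFFFFFFFFFFFF] * 4        # maximum value, to show carry
limbs_b = [1, 0, 0, 0]
print(sv_adde(limbs_a, limbs_b))           # ([0, 0, 0, 0], 1)
```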

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add). SV effectively *replaces* the majority of VSX,
requiring far fewer instructions, and provides, at the very minimum,
predication (which VSX was designed without).

Likewise, Load/Store Multiple make no sense to have: not only are the
equivalents provided by SV, the SV alternatives may be predicated as
well, making them far better suited to use in function calls and
context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category, as well as `sc` and
`scv`. Here there is simply no point trying to Vectorise them: the
standard OpenPOWER v3.0/1 instructions should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way critically
dependent on the existence or provision of 3D GPU or VPU instructions.
These should be considered extensions, and their discussion and
specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B.
Table 9: Primary Opcode Map (opcode bits 0:5)

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       | tdi   | twi   | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those
that make no sense in a Vectorisation Context have been removed. These
removed POs can, *in the SV Vector Context only*, be assigned to
alternative (Vectorised-only) instructions, including future
extensions. EXT04 retains the scalar `madd*` operations but would have
all PackedSIMD (aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       |       |       | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

It is important to note that having an SVP64 opcode which differs from
its v3.0B Scalar counterpart is highly undesirable: the complexity in
the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64
bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix, MLSS for
example. For SVP64 there is insufficient space to make the SVP64
Prefix "self-describing", and consequently every single Scalar
instruction had to be individually analysed, by rote, to craft an
EXTRA Field Mapping. This process was semi-automated and is described
in this section. The final results, which are part of the SVP64
Specification, are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3
registers) and if less than or equal to three then that instruction
could be given an EXTRA3 designation. Four or more is given an EXTRA2
designation because there are only 9 bits available.
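
A sketch of that designation rule (a simplification of what
sv_analysis.py actually does; the profile examples are illustrative):

```
# With only 9 EXTRA bits available, 3 bits per register fit at most
# 3 registers (EXTRA3); 4 or more registers get 2 bits each (EXTRA2).
def extra_designation(reads, writes):
    total = reads + writes
    return "EXTRA3" if total <= 3 else "EXTRA2"

print(extra_designation(2, 1))  # add RT, RA, RB (2R-1W)  -> EXTRA3
print(extra_designation(3, 2))  # a 3-in 2-out operation  -> EXTRA2
```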

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was the case if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for
use in Twin Predication, some compromises were made here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be
used.

Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
could have been decided such that RA would be indexed 0 (EXTRA bits
0-2), RB indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8).
In some cases (LD/ST with update) RA-as-a-source is given a
**different** EXTRA index from RA-as-a-result (because it is possible
to do, and perceived to be useful). Rc=1 co-results (CR0, CR1) are
always given the same EXTRA index as their main result (RT, FRT).
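
A hedged sketch of that hypothetical 2R-1W EXTRA3 packing (the actual
per-instruction indices are defined by the generated CSV tables, not
by this code):

```
# Hypothetical EXTRA3 packing for a 2R-1W instruction such as add:
# RA at EXTRA bits 0-2, RB at bits 3-5, RT at bits 6-8.
def pack_extra3(ra_spec, rb_spec, rt_spec):
    assert all(0 <= s <= 7 for s in (ra_spec, rb_spec, rt_spec))
    return ra_spec | (rb_spec << 3) | (rt_spec << 6)

def unpack_extra3(extra):
    return extra & 7, (extra >> 3) & 7, (extra >> 6) & 7  # RA, RB, RT

extra = pack_extra3(0b101, 0b001, 0b100)
print(bin(extra), unpack_extra3(extra))  # 0b100001101 (5, 1, 4)
```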

Fifthly, in an automated process the results of the analysis were
output in CSV Format for use in machine-readable form by
sv_analysis.py
<https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed. Those
qualifying future Power ISA Scalar instructions for SVP64 are
**strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alteration of that same program in ways that change the
Designation is **prohibited** once finalised (ratified through the
Power ISA WG Process). It would be similar to deciding that `add`
should be changed from X-Form to D-Form.

# Single Predication

This is a standard mode normally found in Vector ISAs: every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |

Thus, even in Single Predication, sz and dz still interact with the
element schedule, as the sketch below illustrates.
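
A minimal model of that stepping rule (an interpretation of the table
above, not normative pseudocode; zeroed writes are omitted):

```
# Single predication: a masked-out element is skipped by a step only
# when zeroing is off for that side (sz/dz clear); with zeroing on,
# the step visits every element position in turn.
def single_pred_schedule(VL, mask, sz, dz):
    schedule = []
    srcstep = dststep = 0
    while srcstep < VL and dststep < VL:
        if not sz:   # skip masked-out source elements
            while srcstep < VL and not (mask >> srcstep) & 1:
                srcstep += 1
        if not dz:   # skip masked-out destination elements
            while dststep < VL and not (mask >> dststep) & 1:
                dststep += 1
        if srcstep >= VL or dststep >= VL:
            break
        schedule.append((srcstep, dststep))
        srcstep += 1
        dststep += 1
    return schedule

print(single_pred_schedule(4, 0b1101, sz=1, dz=0))
# [(0, 0), (1, 2), (2, 3)] -- matches the table above
```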

# Twin Predication

This is a novel concept that allows predication to be applied to a
single source and a single dest register. The following types of
traditional Vector operations may be encoded with it, *without
requiring explicit opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a
back-to-back version of VCOMPRESS-VEXPAND which is effectively the
ability to do sequentially ordered multiple VINSERTs. The source
predicate selects a sequentially ordered subset of elements to be
inserted; the destination predicate specifies the sequentially ordered
recipient locations. This is equivalent to
`llvm.masked.compressstore.*` followed by `llvm.masked.expandload.*`
with a single instruction, as the sketch below illustrates.
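
A minimal model of twin-predicated `sv.mv` (illustrative Python only;
zeroing is ignored):

```
# Twin predication on mv: the source mask compresses, the destination
# mask expands, in one pass. Elements move in sequential order.
def twin_pred_mv(VL, src, dst, srcmask, dstmask):
    srcstep = dststep = 0
    while srcstep < VL and dststep < VL:
        while srcstep < VL and not (srcmask >> srcstep) & 1:
            srcstep += 1  # skip unselected source elements
        while dststep < VL and not (dstmask >> dststep) & 1:
            dststep += 1  # skip unselected destination slots
        if srcstep < VL and dststep < VL:
            dst[dststep] = src[srcstep]
            srcstep += 1
            dststep += 1
    return dst

# compress elements 1,3 of src into slots 0,2 of dst
print(twin_pred_mv(4, [10, 11, 12, 13], [0, 0, 0, 0],
                   srcmask=0b1010, dstmask=0b0101))
# [11, 0, 13, 0]
```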

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (on steroids).

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an
additional scalar argument containing the initial reduction value, and
the default is either 0 or 1 depending on the specifics of the
explicit opcode. SVP64 fundamentally has to utilise *existing* Scalar
Power ISA v3.0B operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base
Scalar operations, and to rely on the underlying microarchitecture to
resolve Register Hazards at the element level. This goes back to the
fundamental principle that SV is nothing more than a
Sub-Program-Counter sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the
reduction but only if in doing so they preserve Program Order at the
Element Level. Opportunities where this is possible include an `OR`
operation or a MIN/MAX operation: it may be possible to parallelise
the reduction, but for Floating Point it is not permitted due to
different results being obtained if the reduction is not executed in
strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in
SVP64 as a simple and natural relaxation of the usual restriction on
Vector Looping, which would terminate if the destination was marked as
a Scalar. Scalar Reduction by contrast *keeps issuing Vector Element
Operations* even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no actual imposition
or restriction on how this mode is utilised: there will therefore be
several valuable uses (including Vector Iteration and "Reverse-Gear")
and it is up to the programmer to make best use of the (strictly
deterministic) capability provided.

In this mode, which is suited to operations involving carry or
overflow, one register must be assigned, by convention by the
programmer, to be the "accumulator". Scalar reduction is thus
categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibiting seemingly non-sensical
operations is too great. Therefore it is permitted and required to be
executed successfully. Implementors **MAY** choose to optimise such
instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such is
trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise
Scalar reduction when the destination is a Vector is as if the
Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue
multiple sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given
that there is no actual prohibition on Reduce Mode being applied when
the destination is a Vector, the "Reverse Gear" bit turns out to be a
way to apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite
end of the Vector and push a cumulative series of overlapping add
operations into the Execution units of the underlying hardware.
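
A sketch of both schedules for a scalar-destination `sv.add` (Python
model; register-file details and predication elided):

```
# Scalar-destination mapreduce (sv.add/mr r3, r10.v, r3): the scalar
# accumulator r3 is overwritten on every element, in Program Order;
# /rg walks the Vector from the opposite end.
def sv_add_mr(iregs, RT, RA_vec, VL, reverse_gear=False):
    steps = range(VL - 1, -1, -1) if reverse_gear else range(VL)
    for i in steps:
        iregs[RT] = iregs[RT] + iregs[RA_vec + i]
    return iregs[RT]

regs = list(range(32))  # toy register file: r10..r13 hold 10..13
regs[3] = 0
print(sv_add_mr(regs, RT=3, RA_vec=10, VL=4))  # 10+11+12+13 = 46
```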

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the
source and destination, with Vectors of different offsets, masks and
values to be inserted, has multiple applications including Video,
cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or in the case of divide, to
get better accuracy, to perform a multiply cascade followed by a final
divide.

Note that single-operand or three-operand scalar-dest reduce is
perfectly well permitted: the programmer may still declare one
register, used as both a Vector source and Scalar destination, to be
utilised as the "accumulator". In the case of `sv.fmadds` and
`sv.maddhw` etc. this naturally fits well with the normal expected
usage of these operations.

If an interrupt or exception occurs in the middle of the scalar
mapreduce, the scalar destination register **MUST** be updated with
the current (intermediate) result, because this is how `Program Order`
is preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts
**MUST** be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to
the underlying micro-architecture. Like Scalar reduction, the "Scalar
Base" (Power ISA v3.0B) operation is leveraged, unmodified, to give
the *appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic, Interrupts
and exceptions can therefore also be precise. The final result will be
in the first non-predicate-masked-out destination element, but due
again to the deterministic schedule programmers may find uses for the
intermediate results.

When Rc=1 a corresponding Vector of co-resultant CRs is also created.
No special action is taken: the result and its CR Field are stored "as
usual" exactly as for all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g.
`sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction (it does not have to be)
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal Subvector mode
is enabled, which behaves very much more like a traditional Vector
Processor Reduction instruction. Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced,
result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) ([[sv/normal]]) and CR operations ([[sv/cr_ops]]). Note in
each case the assumption is that vector elements are required to
appear to be executed in sequential Program Order, element 0 being the
first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to
  the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails,
  the vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set), and
  VL is truncated to either the *previous* element or the current one,
  depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM SVE
or RVV. It is extremely useful for reducing instruction count, however
it requires speculative execution involving modifications of VL to get
high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation into
a type of `cmp`. The CR is stored (and the CR.eq bit tested against
the `inv` field). If the CR.eq bit is equal to `inv` then the Vector
is truncated and the loop ends. Note that when RC1=1 the result
elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This
may be useful in combination with "inv" to truncate the Vector to
*exclude* elements that fail a test, or, in the case of
implementations of strncpy, to include the terminating zero.
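
A rough Python model of the CR-driven truncation rule (the per-element
CR bit test is abstracted to a precomputed list of bits; `inv` and
`VLi` are as described above):

```
# Data-dependent fail-on-first: elements execute in Program Order
# until the tested CR bit equals `inv`; VL is truncated at that point,
# inclusively or exclusively depending on VLi.
def ffirst_vl(VL, cr_bits, inv, VLi):
    for i in range(VL):
        if cr_bits[i] == inv:           # test fails at element i
            return i + 1 if VLi else i  # VLi includes failing element
    return VL                           # no failure: VL is unchanged

print(ffirst_vl(4, [1, 1, 0, 1], inv=0, VLi=0))  # 2: passing elements only
print(ffirst_vl(4, [1, 1, 0, 1], inv=0, VLi=1))  # 3: failing element included
```

Note that this model can return zero (first element fails, VLi clear),
which matches the VL=0 property described below.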

In CR-based data-driven fail-on-first there is only the option to
select and test one bit of each CR (just as with branch BO). For more
complex tests this may be insufficient. If that is the case, a
vectorised crop (crand, cror) may be used, and ffirst applied to the
crop instead of to the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL
  equal to zero. This is the only means in the entirety of SV that VL
  may be set to zero (with the exception of via the SV.STATE SPR).
  When VL is set to zero due to the first element failing the CR
  bit-test, all subsequent vectorised operations are effectively
  `nops`, which is *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated
arbitrarily to a nonzero value for any implementation-specific reason.
For example: it is perfectly reasonable for implementations to alter
VL when ffirst LD or ST operations are initiated on a nonaligned
boundary, such that within a loop the subsequent iteration of that
loop begins subsequent ffirst LD/ST operations on an aligned boundary.
Likewise, to reduce workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails. This is because it
is a precise test on which algorithms will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do
not also in turn have an Rc=1 mode. However it makes no sense to try
to test the 4 bits of a CR Field for being equal or not equal to zero.
Moreover, the result is already in the form that is desired: it is a
CR Field. Therefore, CR-based operations have their own SVP64 Mode,
described in [[sv/cr_ops]].

There are two primary types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies to operations that have an Rc=1
mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test is
performed, and if it fails it is considered to have been *as if* the
destination predicate bit was zero. Given that there are no CR-based
operations that produce Rc=1 co-results, there can be no pred-result
mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].
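
An illustrative model of how the CR Field test folds into the
effective predicate (a sketch only; the exact CR-bit selection is
defined in [[sv/normal]]):

```
# Pred-result: an element commits only when its predicate bit is set
# AND its Rc=1 CR Field test passes; a failed test behaves exactly as
# if the predicate bit had been zero.
def pred_result_add(VL, ra, rb, rt, predmask, cr_test):
    for i in range(VL):
        if not (predmask >> i) & 1:
            continue              # ordinary predication skip
        result = ra[i] + rb[i]
        if cr_test(result):       # e.g. "result != 0" for ne
            rt[i] = result        # test passed: commit
        # test failed: treated as predicate bit == 0, no write
    return rt

rt = [99] * 4
print(pred_result_add(4, [1, -1, 2, 0], [0, 1, 0, 0], rt,
                      predmask=0b1111, cr_test=lambda r: r != 0))
# [1, 99, 2, 99]
```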

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again
however the access pattern needs to be understandable in relation to
v3.0B / v3.1B numbering, with a clear linear relationship and mapping
existing when SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just
to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`, which
accesses one bit of the 32-bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential numbering of elements is
to the CR **fields** within the CR Register, not to individual bits
within the CR register.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4
months of analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV
EXTRA2/3 applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the
v3.0B algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not
change.

Note: high-performance implementations may read/write Vectors of CRs
in batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to
greatly simplify internal design. If instructions are issued where CR
Vectors do not start on a 32-bit aligned boundary, performance may be
affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them) may be marked
as Vectorised or Scalar. When Rc=1 in arithmetic operations that have
no explicit EXTRA to cover the CR, the CR is Vectorised if the
destination is Vectorised; likewise if the destination is scalar then
so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are
  not overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD
VSX, which has a single CR (CR6) for a given SIMD result, SV
Vectorised OpenPOWER v3.0B scalar operations produce a **tuple** of
element results: the result of the operation as one part of that
element *and a corresponding CR element*. Greatly simplified
pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR-based analysis of results is desired (a la VSX
CR6) then a followup instruction must be performed, setting "reduce"
mode on the Vector of CRs, using cr ops (crand, crnor) to do so. This
provides far more flexibility in analysing vectors than standard
Vector ISAs. Normal Vector ISAs are typically restricted to "were all
results nonzero" and "were some results nonzero". The application of
mapreduce to Vectorised cr operations allows far more sophisticated
analysis, particularly in conjunction with the new crweird operations
(see [[sv/cr_int_predication]]).

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally, SVP64 [[sv/branches]] may be used, even when the branch
itself is to the following instruction. The combined side-effects of
CTR reduction and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4).
Only 1 bit of predicate is allocated per subvector; likewise only one
CR is allocated per subvector.

This leaves a conundrum as to how to apply CR computation per
subvector, when normally Rc=1 is exclusively applied to scalar
elements. A solution is to perform a bitwise OR or AND of the
subvector tests. Given that OE is ignored in SVP64, this field may
(when available) be used to select OR or AND behaviour.
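
A sketch of that proposed combining rule for a vec2 (hedged: whether
OE selects OR or AND is, as stated above, only a suggestion):

```
# Rc=1 on a vec2 subvector: one CR per subvector, produced by
# combining the per-subelement tests with OR or AND.
def subvector_cr_eq(subelements, combine_with_or):
    tests = [el == 0 for el in subelements]  # per-subelement CR.eq
    if combine_with_or:
        return any(tests)   # OR: any subelement equal to zero
    return all(tests)       # AND: all subelements equal to zero

print(subvector_cr_eq([0, 5], combine_with_or=True))   # True
print(subvector_cr_eq([0, 5], combine_with_or=False))  # False
```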

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field
#n, so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how scalar
OpenPOWER context-switches CRs: it is just that there are now more of
them.

The 64 SV CRs are arranged similarly to the way the 128 integer
registers are arranged. TODO: a python program that auto-generates a
CSV file which can be included in a table, which is in a new page (so
as not to overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included,
elwidth overrides not included. If there is no predicate, it is set to
all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is
clear that each element of the Vector source should be added to the
Scalar source, each result placed into the Vector (or, if the
destination is a scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result
into all nonpredicated result elements. If a fixed destination scalar
was intended, then an all-Scalar operation should be used, as sketched
below.
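
A small model of that scalar-scalar-to-vector "splat" case
(illustrative only):

```
# RT.v = RA.s + RB.s: the identical scalar sum is splatted into every
# nonpredicated element of the destination Vector.
def splat_add(VL, ra_scalar, rb_scalar, rt, predmask):
    result = ra_scalar + rb_scalar   # computed once, conceptually
    for i in range(VL):
        if (predmask >> i) & 1:
            rt[i] = result           # same value in each active slot
    return rt

print(splat_add(4, 3, 4, [0, 0, 0, 0], predmask=0b1011))
# [7, 7, 0, 7]
```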

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "REX" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in
  Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is
    enabled

# Proposed Parallel-reduction algorithm

**This algorithm contains a MV operation and may NOT be used. Removal
of the MV operation may be achieved by using index-redirection as was
achieved in DCT and FFT REMAP**

```
/// reference implementation of proposed SimpleV reduction semantics.
///
// reduction operation -- we still use this algorithm even
// if the reduction operation isn't associative or
// commutative.
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
/// XXX `pred` is a user-visible Vector Condition register XXXX
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
///
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX VIOLATION OF SVP64 DESIGN XXX
                XXX vec[i] = vec[other]; XXX
                XXX VIOLATION OF SVP64 DESIGN XXX
            pred[i] |= other_pred;
```

The first principle in SVP64 being violated is that SVP64 is a
fully-independent Abstraction of hardware-looping in between issue and
execute phases that has no relation to the operation it issues. The
above pseudocode conditionally changes not only the type of element
operation issued (a MV in some cases) but also the number of arguments
(2 for a MV). At the very least, for Vertical-First Mode this will
result in unanticipated and unexpected behaviour (maximising
"surprises" for programmers) in the middle of loops, which will be far
too hard to explain.

The second principle being violated by the above algorithm is the
expectation that temporary storage is available for a modified
predicate: there is no such space, and predicates are read-only to
reduce complexity at the micro-architectural level. SVP64 is founded
on the principle that all operations are "re-entrant" with respect to
interrupts and exceptions: SVSTATE must be saved and restored
alongside PC and MSR, but nothing more. It is perfectly fine to have
context-switching back to the operation be somewhat slower, through
"reconstruction" of temporary internal state based on what SVSTATE
contains, but nothing more.

An alternative algorithm is therefore required that does not perform
MVs, and does not require additional state to be saved on
context-switching.

```
def reduce( vl, vec, pred ):
    pred = copy(pred) # must not damage predicate
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi[j] = i
            j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
                vi[ir] = vi[other] # index redirection, no MV
            pred[ir] |= other_pred # reconstructed on context-switch
        step *= 2
```

In this version the need for an explicit MV is made unnecessary by
instead leaving elements *in situ*. The internal modifications to the
predicate may, due to the reduction being entirely deterministic, be
"reconstructed" on a context-switch. This may make some
implementations slower.

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is
minimised. The mistake which would be catastrophic to SVP64 to make is
to then limit the Reduction Sequence for all implementors based solely
and exclusively on what one specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract
concepts. It is the Implementor's responsibility to produce a design
that complies with the above algorithm, utilising internal
Micro-coding and other techniques to transparently insert MV
operations if necessary or desired, to give the level of efficiency or
performance required.*

# Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the c programming language. The following should be taken
literally, and assume always a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l[0] = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b[0] = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s[0] = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i[0] = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l[0] = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a
byte-addressable memory. Vectors - which become just a virtual naming
construct - effectively overlap.

It is extremely important for implementors to note that the only
circumstance where upper portions of an underlying 64-bit register are
zero'd out is when the destination is a scalar. The ideal register
file has byte-level write-enable lines, just like most SRAMs, in order
to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
      if (RT.isvec) { ird  += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.
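
A small runnable illustration of that packing and overlap, modelling
the regfile as flat little-endian bytes in Python (the register values
shown are derived from the layout above, nothing more):

```
import struct

# Model the int regfile as one flat little-endian byte array: each of
# the 128 registers is a "starting point" 8 bytes apart.
regfile = bytearray(128 * 8)

def set_elwidth_reg(reg, bitwidth, offset, val):
    fmt = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}[bitwidth]
    addr = reg * 8 + offset * (bitwidth // 8)
    struct.pack_into(fmt, regfile, addr, val)

# VL=6 halfword (ew=16) elements starting at r2: elements 0-3 fill r2,
# elements 4-5 overlap into the bottom half of r3.
for i in range(6):
    set_elwidth_reg(2, 16, i, 0x1110 + i)

r2, r3 = struct.unpack_from("<QQ", regfile, 2 * 8)
print(hex(r2))  # 0x1113111211111110
print(hex(r3))  # 0x11151114 (upper half of r3 untouched)
```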

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than
one, providing feedback loops between elements (the most well-known
being add with carry). 64-bit multiply for example actually internally
produces a 128-bit result, which clearly cannot be stored in a single
64-bit register. Some ISAs recommend "macro-op fusion": the practice
of setting a convention whereby if two commonly used instructions
(mullo, mulhi) use the same ALU but one selects the low part of an
identical operation and the other selects the high part, then
optimised micro-architectures may "fuse" those two instructions
together, using Micro-coding techniques, internally.

The practice and convention of macro-op fusion however is not
compatible with SVP64 Horizontal-First, because Horizontal Mode may
only be applied to a single instruction at a time, and SVP64 is based
on the principle of strict Program Order even at the element level.
Thus it becomes necessary to add explicit, more complex single
instructions with more operands than would normally be seen in the
average RISC ISA (3-in, 2-out, in some cases). If it was not for
Power ISA already having LD/ST with update as well as Condition Codes
and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being
only 32 bit, 5 operands is quite an ask. `lq` however sets a
precedent: `RTp` stands for "RT pair". In other words the result is
stored in RT and RT+1. For Scalar operations, following this precedent
is perfectly reasonable. In Scalar mode, `madded` therefore stores the
two halves of the 128-bit multiply into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorised because the output
will be overwritten on the next element. The solution is simple:
define the destination registers as RT and RT+MAXVL respectively.
This makes it easy for compilers to statically allocate registers even
when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for
Vectors, and that element-width overrides still have to be taken into
consideration, the starting point for the implicit destination is best
illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird,       result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
      if (RT.isvec) { ird  += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }
      if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is not stored
starting from RT+MAXVL at all: it is the *element* index that is
offset by MAXVL, both halves actually starting from RT. If VL is 3,
MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements RT0 to
RT2 are stored:

          0..31     32..63
    r0  unchanged  unchanged
    r1   RT0.lo     RT1.lo
    r2   RT2.lo    unchanged
    r3  unchanged   RT0.hi
    r4   RT1.hi     RT2.hi
    r5  unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5
and elwidth being 32, the HI starting point is the 5th element offset
(in 32-bit quantities) counting from r1.

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can be
inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form with an
implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]