1 [[!tag standards]]
2
3 # Appendix
4
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
6 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
8 * <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
9 * <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
10 * ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>
11
This is the appendix to [[sv/svp64]], providing explanations of modes
and other topics, leaving the main svp64 page's primary purpose as
outlining the instruction format.
15
16 Table of contents:
17
18 [[!toc]]
19
20 # Partial Implementations
21
It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs that are not implemented, including those reserved for
future use, must raise an illegal instruction trap if read or written.
This allows software the opportunity to emulate the context created by
the given SPR.
32
33 See [[sv/compliancy_levels]] for full details.
34
35 # XER, SO and other global flags
36
37 Vector systems are expected to be high performance. This is achieved
38 through parallelism, which requires that elements in the vector be
39 independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
40 Read-Write Hazards on single-bit global resources, having a significant
41 detrimental effect.
42
43 Consequently in SV, XER.SO behaviour is disregarded (including
44 in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
45 breaking the Read-Modify-Write Hazard Chain that complicates
46 microarchitectural implementations.
47 This includes when `scalar identity behaviour` occurs. If precise
48 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
49 instructions should be used without an SV Prefix.
50
51 TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf
52
53 Of note here is that XER.SO and OV may already be disregarded in the
54 Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
55 SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
56 but only for SVP64 Prefixed Operations.
57
58 XER.CA/CA32 on the other hand is expected and required to be implemented
59 according to standard Power ISA Scalar behaviour. Interestingly, due
60 to SVP64 being in effect a hardware for-loop around Scalar instructions
61 executing in precise Program Order, a little thought shows that a Vectorised
62 Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
63 and producing, at the end, a single bit Carry out. High performance
64 implementations may exploit this observation to deploy efficient
65 Parallel Carry Lookahead.
66
    # assume VL=4: this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    ...
    adde r3, r7, r11 # likewise
75
It can clearly be seen that the carry chains from one 64-bit add to
the next: the end result is that a 256-bit "Big Integer Add with Carry"
has been performed, and that CA contains the 257th bit. A one-instruction
512-bit Add-with-Carry may be performed by setting VL=8, a
one-instruction 1024-bit Add-with-Carry by setting VL=16, and so on.
More on this in [[openpower/sv/biginteger]].
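
The equivalence is easy to demonstrate in software. Below is a quick
Python sketch (purely illustrative, not specification pseudocode, with
invented register values) modelling the element-level `adde` chain and
checking it against a single 256-bit addition:

    # model a VL=4 chain of 64-bit adde operations (sv.adde) and check
    # that it matches one 256-bit add producing a 257th (carry) bit
    MASK64 = (1 << 64) - 1

    def sv_adde(ra, rb, ca):
        """one scalar adde element step: 64-bit add with carry-in/out"""
        total = ra + rb + ca
        return total & MASK64, total >> 64   # (result, carry-out)

    a = [0xFFFFFFFF_FFFFFFFF] * 4        # four 64-bit limbs (r4-r7)
    b = [1] + [0] * 3                    # (r8-r11)

    ca, result = 0, []
    for i in range(4):                   # the VL=4 hardware for-loop
        r, ca = sv_adde(a[i], b[i], ca)
        result.append(r)

    # compare against one single 256-bit addition
    big = sum(x << (64*i) for i, x in enumerate(a)) + \
          sum(x << (64*i) for i, x in enumerate(b))
    assert result == [(big >> (64*i)) & MASK64 for i in range(4)]
    assert ca == big >> 256              # CA holds the 257th bit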
83
84 # v3.0B/v3.1 relevant instructions
85
86 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
87 CPU ISA.
88
89 Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
90 the sole exceptions potentially being any operations with 128-bit
91 operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
92 Quad-precision Add).
93 SV effectively *replaces* the majority of VSX, requiring far less
94 instructions, and provides, at the very minimum, predication
95 (which VSX was designed without).
96
Likewise, Load/Store Multiple makes no sense to have: not only is the
equivalent provided by SV, the SV alternatives may also be predicated,
making them far better suited to use in function calls and
context-switching.
101
102 Additionally, some v3.0/1 instructions simply make no sense at all in a
103 Vector context: `rfid` falls into this category,
104 as well as `sc` and `scv`. Here there is simply no point
105 trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
106 should be called instead.
107
108 Fortuitously this leaves several Major Opcodes free for use by SV
109 to fit alternative future instructions. In a 3D context this means
110 Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
111 operations, and others critical to an efficient, effective 3D GPU and
112 VPU ISA. With such instructions being included as standard in other
113 commercially-successful GPU ISAs it is likewise critical that a 3D
114 GPU/VPU based on svp64 also have such instructions.
115
116 Note however that svp64 is stand-alone and is in no way
117 critically dependent on the existence or provision of 3D GPU or VPU
118 instructions. These should be considered entirely separate
119 extensions, and their discussion
120 and specification is out of scope for this document.
121
122 ## Major opcode map (v3.0B)
123
124 This table is taken from v3.0B.
125 Table 9: Primary Opcode Map (opcode bits 0:5)
126
127 ```
128 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
129 000 | | | tdi | twi | EXT04 | | | mulli | 000
130 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
131 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm | | rlwnm | 010
132 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
133 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
134 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw | 101
135 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
136 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 | 111
137 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
138 ```
139
It is important to note that having a v3.0B Scalar opcode that behaves
differently from its SVP64-prefixed counterpart is highly undesirable:
the complexity in the decoder is greatly increased, through breaking of
the RISC paradigm.
143
144 # EXTRA Field Mapping
145
146 The purpose of the 9-bit EXTRA field mapping is to mark individual
147 registers (RT, RA, BFA) as either scalar or vector, and to extend
148 their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
149 Three of the 9 bits may also be used up for a 2nd Predicate (Twin
150 Predication) leaving a mere 6 bits for qualifying registers. As can
151 be seen there is significant pressure on these (and in fact all) SVP64 bits.
152
153 In Power ISA v3.1 prefixing there are bits which describe and classify
154 the prefix in a fashion that is independent of the suffix. MLSS for
155 example. For SVP64 there is insufficient space to make the SVP64 Prefix
156 "self-describing", and consequently every single Scalar instruction
157 had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
158 This process was semi-automated and is described in this section.
159 The final results, which are part of the SVP64 Specification, are here:
160 [[openpower/opcode_regs_deduped]]
161
162 * Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
163 from reading the markdown formatted version of the Scalar pseudocode
164 which is machine-readable and found in [[openpower/isatables]]. The
165 analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
166 for example is given a designation `RM-2R-1W` because it requires
167 two GPR reads and one GPR write.
168 * Secondly, the total number of registers was added up (2R-1W is 3 registers)
169 and if less than or equal to three then that instruction could be given an
170 EXTRA3 designation. Four or more is given an EXTRA2 designation because
171 there are only 9 bits available.
172 * Thirdly, the instruction was analysed to see if Twin or Single
173 Predication was suitable. As a general rule this was if there
174 was only a single operand and a single result (`extw` and LD/ST)
175 however it was found that some 2 or 3 operand instructions also
176 qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
177 in Twin Predication, some compromises were made, here. LDST is
178 Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W, for example, an
  EXTRA3 indexing could be chosen such that RA is indexed 0 (EXTRA bits
  0-2), RB indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8).
  In some cases (LD/ST with update) RA-as-a-source is given a
  **different** EXTRA index from RA-as-a-result (because it is possible
  to do, and perceived to be useful). Rc=1 co-results (CR0, CR1) are
  always given the same EXTRA index as their main result (RT, FRT).
187 * Fifthly, in an automated process the results of the analysis
188 were outputted in CSV Format for use in machine-readable form
189 by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
190
This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed. Those qualifying
future Power ISA Scalar instructions for SVP64 are **strongly** advised
to utilise this same process and the same sv_analysis.py program as a
canonical method of maintaining the relationships. Alterations to that
same program which change a Designation are **prohibited** once it is
finalised (ratified through the Power ISA WG Process): it would be
similar to deciding that `add` should be changed from X-Form to D-Form.
201
202 # Single Predication <a name="1p"> </a>
203
This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.
205
206 In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep, but depending on whether sz and/or dz are set, srcstep and
207 dststep can still potentially become different indices. Only when sz=dz
208 is srcstep guaranteed to equal dststep at all times.
209
Note that in some Mode Formats there is only one flag (zz). This
indicates that *both* sz *and* dz are set to the same value.
212
213 Example 1:
214
215 * VL=4
216 * mask=0b1101
* sz=1, dz=0
218
219 The following schedule for srcstep and dststep will occur:
220
221 | srcstep | dststep | comment |
222 | ---- | ----- | -------- |
223 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
225 | 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
226 | end | end | loop has ended because dst reached VL-1 |
227
228 Example 2:
229
230 * VL=4
231 * mask=0b1101
* sz=0, dz=1
233
234 The following schedule for srcstep and dststep will occur:
235
236 | srcstep | dststep | comment |
237 | ---- | ----- | -------- |
238 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
239 | 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
240 | 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
241 | end | end | loop has ended because src reached VL-1 |
242
243 In both these examples it is crucial to note that despite there being
244 a single predicate mask, with sz and dz being different, srcstep and
245 dststep are being requested to react differently.
246
247 Example 3:
248
249 * VL=4
250 * mask=0b1101
251 * sz=0, dz=0
252
253 The following schedule for srcstep and dststep will occur:
254
255 | srcstep | dststep | comment |
256 | ---- | ----- | -------- |
257 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
258 | 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
259 | 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
260 | end | end | loop has ended because src and dst reached VL-1 |
261
Here, both srcstep and dststep remain in lockstep because sz=dz=0.
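
The schedules above may be generated with a few lines of Python. The
sketch below is illustrative only (the function name is invented); it
follows the rule that a step with zeroing disabled skips masked-out
elements, while a step with zeroing enabled visits every element:

    def pred_schedule(VL, mask, sz, dz):
        """yield (srcstep, dststep) pairs for single predication"""
        src, dst = 0, 0
        while src < VL and dst < VL:
            # sz=0: srcstep skips masked-out elements (no zeroing)
            while sz == 0 and src < VL and not (mask & (1 << src)):
                src += 1
            # dz=0: dststep likewise skips masked-out elements
            while dz == 0 and dst < VL and not (mask & (1 << dst)):
                dst += 1
            if src >= VL or dst >= VL:
                break
            yield src, dst
            src, dst = src + 1, dst + 1

    # Example 1: (0,0), (1,2), (2,3)
    print(list(pred_schedule(4, 0b1101, sz=1, dz=0)))
    # Example 2: (0,0), (2,1), (3,2)
    print(list(pred_schedule(4, 0b1101, sz=0, dz=1)))
    # Example 3: (0,0), (2,2), (3,3)
    print(list(pred_schedule(4, 0b1101, sz=0, dz=0)))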
263
264 # Twin Predication <a name="2p"> </a>
265
This is a novel concept that allows predication to be applied to a
single source and a single dest register. The following types of
traditional Vector operations may be encoded with it, *without
requiring explicit opcodes to do so*:
270
271 * VSPLAT (a single scalar distributed across a vector)
272 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
273 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
274 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
275 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
276
277 Those patterns (and more) may be applied to:
278
279 * mv (the usual way that V\* ISA operations are created)
280 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
284 * LD and ST (treating AGEN as one source)
285 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
286 * Condition Register ops mfcr, mtcr and other similar
287
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
290
291 Additional unusual capabilities of Twin Predication include a back-to-back
292 version of VCOMPRESS-VEXPAND which is effectively the ability to do
293 sequentially ordered multiple VINSERTs. The source predicate selects a
294 sequentially ordered subset of elements to be inserted; the destination
295 predicate specifies the sequentially ordered recipient locations.
296 This is equivalent to
297 `llvm.masked.compressstore.*`
298 followed by
299 `llvm.masked.expandload.*`
300 with a single instruction.
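
A rough Python model of such a twin-predicated move (an illustrative
sketch, not specification pseudocode; all names are this example's own)
shows how the source predicate selects elements and the destination
predicate places them:

    def twin_pred_mv(regs, RT, RA, VL, srcmask, dstmask):
        """sequential VCOMPRESS-VEXPAND: sv.mv with twin predication"""
        src, dst = 0, 0
        while src < VL and dst < VL:
            while src < VL and not (srcmask & (1 << src)):
                src += 1   # skip masked-out source elements
            while dst < VL and not (dstmask & (1 << dst)):
                dst += 1   # skip masked-out destination elements
            if src >= VL or dst >= VL:
                break
            regs[RT + dst] = regs[RA + src]
            src, dst = src + 1, dst + 1

    regs = list(range(100))   # pretend register file
    twin_pred_mv(regs, RT=16, RA=0, VL=8,
                 srcmask=0b10110010, dstmask=0b00001110)
    # elements 1, 4, 5 of r0-r7 land in elements 1, 2, 3 of r16-r23
    assert [regs[17], regs[18], regs[19]] == [1, 4, 5]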
301
This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` instruction (on steroids).
306
307 # EXTRA Pack/Unpack bits
308
309 The pack/unpack concept of VSX `vpack` is abstracted out as a Sub-Vector
310 reordering Schedule.
311 Two bits in the `RM` field
312 enable either "packing" or "unpacking"
313 on the subvectors vec2/3/4.
314
First, illustrating a "normal" SVP64 operation with `SUBVL!=1`
(assuming no elwidth overrides), note that the VL loop is outer and
the SUBVL loop inner:
318
    def index():
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL+j

    for idx in index():
        operation_on(RA+idx)
326
327 For pack/unpack (again, no elwidth overrides), note that now there is the
328 option to swap the SUBVL and VL loop orders.
329 In effect the Pack/Unpack performs a Transpose of the subvector elements:
330
    # yield element offsets with either the SUBVL loop outermost
    # (outer=True) or the usual VL-outer, SUBVL-inner ordering
    def index_p(outer):
        if outer:
            for j in range(SUBVL):      # subvector index is outer
                for i in range(VL):     # element index is inner
                    yield i*SUBVL+j
        else:
            for i in range(VL):         # element index is outer
                for j in range(SUBVL):  # subvector index is inner
                    yield i*SUBVL+j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)
345
346 "yield" from python is used here for simplicity and clarity.
347 The two Finite State Machines for the generation of the source
348 and destination element offsets progress incrementally in
349 lock-step.
350
Example VL=2, SUBVL=3, PACK_en=1 - elements grouped by vec3 will be
redistributed such that Sub-elements 0 are packed together,
Sub-elements 1 are packed together, as are Sub-elements 2:

    srcstep=0   srcstep=1
    0  1  2     3  4  5

    dststep=0   dststep=1   dststep=2
    0  3        1  4        2  5
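
As a quick sanity check, `index_p` can be run with VL and SUBVL bound
as constants (a throwaway test, not part of the specification):

    VL, SUBVL = 2, 3

    def index_p(outer):
        if outer:
            for j in range(SUBVL):
                for i in range(VL):
                    yield i*SUBVL+j
        else:
            for i in range(VL):
                for j in range(SUBVL):
                    yield i*SUBVL+j

    # PACK_en=1, UNPACK_en=0: (src_idx, dst_idx) pairs
    print(list(zip(index_p(True), index_p(False))))
    # [(0, 0), (3, 1), (1, 2), (4, 3), (2, 4), (5, 5)]

Source elements 0, 3, 1, 4, 2, 5 land in destination offsets 0 to 5,
exactly the redistribution shown above.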
361
362 Setting of both `PACK_en` and `UNPACK_en` is neither prohibited nor
363 `UNDEFINED` because the reordering is fully deterministic, and
364 additional REMAP reordering may be applied. Combined with
365 Matrix REMAP this would
366 give potentially up to 4 Dimensions of reordering.
367
368 Pack/Unpack applies primarily to mv operations, mv.swizzle,
369 and some other single-source
370 single-destination operations such as Indexed LD/ST and extsw.
371 [[sv/mv.swizzle]] has a slightly different pseudocode algorithm
372 for Vertical-First Mode.
373
374 # Reduce modes
375
376 Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
377 Vector ISA would have explicit Reduce opcodes with defined characteristics
378 per operation: in SX Aurora there is even an additional scalar argument
379 containing the initial reduction value, and the default is either 0
380 or 1 depending on the specifics of the explicit opcode.
381 SVP64 fundamentally has to
382 utilise *existing* Scalar Power ISA v3.0B operations, which presents some
383 unique challenges.
384
385 The solution turns out to be to simply define reduction as permitting
386 deterministic element-based schedules to be issued using the base Scalar
387 operations, and to rely on the underlying microarchitecture to resolve
388 Register Hazards at the element level. This goes back to
389 the fundamental principle that SV is nothing more than a Sub-Program-Counter
390 sitting between Decode and Issue phases.
391
For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction, but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is
possible include an `OR` operation or a MIN/MAX operation, where
elements may be combined in any order without changing the result. For
Floating Point however parallelisation is not permitted, because
different results would be obtained if the reduction is not executed
in strict Program-Sequential Order.
400
401 In essence it becomes the programmer's responsibility to leverage the
402 pre-determined schedules to desired effect.
403
404 ## Scalar result reduction and iteration
405
Scalar Reduction per se does not exist: instead it is implemented in
SVP64 as a simple and natural relaxation of the usual restriction on
Vector Looping, which would otherwise terminate if the destination was
marked as a Scalar. Scalar Reduction by contrast *keeps issuing Vector
Element Operations* even though the destination register is marked as
scalar. Thus it is up to the programmer to be aware of this, observe
some conventions, and thus end up achieving the desired outcome of
scalar reduction.
414
415 It is also important to appreciate that there is no
416 actual imposition or restriction on how this mode is utilised: there
417 will therefore be several valuable uses (including Vector Iteration
418 and "Reverse-Gear")
419 and it is up to the programmer to make best use of the
420 (strictly deterministic) capability
421 provided.
422
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be
the "accumulator". Scalar reduction is thus categorised by:
426
427 * One of the sources is a Vector
428 * the destination is a scalar
429 * optionally but most usefully when one source scalar register is
430 also the scalar destination (which may be informally termed
431 the "accumulator")
432 * That the source register type is the same as the destination register
433 type identified as the "accumulator". Scalar reduction on `cmp`,
434 `setb` or `isel` makes no sense for example because of the mixture
435 between CRs and GPRs.
436
*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibiting seemingly non-sensical
operations is too great. Therefore such instructions are permitted
and required to be executed successfully.
443 Implementors **MAY** choose to optimise such instructions in instances
444 where their use results in "extraneous execution", i.e. where it is clear
445 that the sequence of operations, comprising multiple overwrites to
446 a scalar destination **without** cumulative, iterative, or reductive
447 behaviour (no "accumulator"), may discard all but the last element
448 operation. Identification
449 of such is trivial to do for `setb` and `cmp`: the source register type is
450 a completely different register file from the destination.
451 Likewise Scalar reduction when the destination is a Vector
452 is as if the Reduction Mode was not requested. However it would clearly
453 be unacceptable to perform such optimisations on cache-inhibited LD/ST,
454 so some considerable care needs to be taken.*
455
456 Typical applications include simple operations such as `ADD r3, r10.v,
457 r3` where, clearly, r3 is being used to accumulate the addition of all
458 elements of the vector starting at r10.
459
    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA
463
464 However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
465 SV ordinarily
466 **terminates** at the first scalar operation. Only by marking the
467 operation as "mapreduce" will it continue to issue multiple sub-looped
468 (element) instructions in `Program Order`.
469
To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way
to apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite end
of the Vector and push a cumulative series of overlapping add operations
into the Execution units of the underlying hardware.
478
479 Other examples include shift-mask operations where a Vector of inserts
480 into a single destination register is required (see [[sv/bitmanip]], bmset),
481 as a way to construct
482 a value quickly from multiple arbitrary bit-ranges and bit-offsets.
483 Using the same register as both the source and destination, with Vectors
484 of different offsets masks and values to be inserted has multiple
485 applications including Video, cryptography and JIT compilation.
486
    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v
493
494 Due to the Deterministic Scheduling,
495 Subtract and Divide are still permitted to be executed in this mode,
496 although from an algorithmic perspective it is strongly discouraged.
497 It would be better to use addition followed by one final subtract,
498 or in the case of divide, to get better accuracy, to perform a multiply
499 cascade followed by a final divide.
500
501 Note that single-operand or three-operand scalar-dest reduce is perfectly
502 well permitted: the programmer may still declare one register, used as
503 both a Vector source and Scalar destination, to be utilised as
504 the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
505 this naturally fits well with the normal expected usage of these
506 operations.
507
If an interrupt or exception occurs in the middle of the scalar
mapreduce, the scalar destination register **MUST** be updated with the
current (intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.
515
516 Note that hardware is perfectly permitted to perform multi-issue
517 parallel optimisation of the scalar reduce operation: it's just that
518 as far as the user is concerned, all exceptions and interrupts **MUST**
519 be precise.
520
521 ## Vector result reduce mode
522
Vector Reduce Mode issues a deterministic tree-reduction schedule to
the underlying micro-architecture. Like Scalar reduction, the "Scalar
Base" (Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.
526
527 In Horizontal-First Mode, Vector-result reduction **requires**
528 the destination to be a Vector, which will be used to store
529 intermediary results.
530
531 Given that the tree-reduction schedule is deterministic,
532 Interrupts and exceptions
533 can therefore also be precise. The final result will be in the first
534 non-predicate-masked-out destination element, but due again to
535 the deterministic schedule programmers may find uses for the intermediate
536 results.
537
538 When Rc=1 a corresponding Vector of co-resultant CRs is also
539 created. No special action is taken: the result and its CR Field
540 are stored "as usual" exactly as all other SVP64 Rc=1 operations.
541
Note that the Schedule only makes sense on top of certain instructions:
X-Form with a Register Profile of `RT,RA,RB` is fine. Like Scalar
Reduction, nothing is prohibited: the results of execution on an
unsuitable instruction may simply not make sense. Unlike in Scalar
Reduction, many 3-input instructions (madd, fmadd) do not make sense
here, but `ternlogi`, if used with care, would.
549
550 **Parallel-Reduction with Predication**
551
552 To avoid breaking the strict RISC-paradigm, keeping the Issue-Schedule
553 completely separate from the actual element-level (scalar) operations,
554 Move operations are **not** included in the Schedule. This means that
555 the Schedule leaves the final (scalar) result in the first-non-masked
556 element of the Vector used. With the predicate mask being dynamic
557 (but deterministic) this result could be anywhere.
558
559 If that result is needed to be moved to a (single) scalar register
560 then a follow-up `sv.mv/sm=predicate rt, ra.v` instruction will be
561 needed to get it, where the predicate is the exact same predicate used
562 in the prior Parallel-Reduction instruction. For *some* implementations
563 this may be a slow operation. It may be better to perform a pre-copy
564 of the values, compressing them (VREDUCE-style) into a contiguous block,
565 which will guarantee that the result goes into the very first element
566 of the destination vector.
567
568 **Usage conditions**
569
570 The simplest usage is to perform an overwrite, specifying all three
571 register operands the same.
572
573 setvl VL=6
574 sv.add/vr 8.v, 8.v, 8.v
575
576 The Reduction Schedule will issue the Parallel Tree Reduction spanning
577 registers 8 through 13, by adjusting the offsets to RT, RA and RB as
578 necessary (see "Parallel Reduction algorithm" in a later section).
579
580 A non-overwrite is possible as well but just as with the overwrite
581 version, only those destination elements necessary for storing
582 intermediary computations will be written to: the remaining elements
583 will **not** be overwritten and will **not** be zero'd.
584
585 setvl VL=4
586 sv.add/vr 0.v, 8.v, 8.v
587
588 ## Sub-Vector Horizontal Reduction
589
Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16.v`:
593
    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)
598
599 Thus logically there is nothing special or unanticipated about
600 `SVM=0`: it is expected behaviour according to standard SVP64
601 Sub-Vector rules.
602
603 By contrast, when SVM is set and SUBVL!=1, a Horizontal
604 Subvector mode is enabled, which behaves very much more
605 like a traditional Vector Processor Reduction instruction.
606
Example for a vec2:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)

Example for a vec3:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].z)

Example for a vec4:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].z)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].w)
624
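A runnable sketch of this SVM=1 schedule for a vec3 (with sub-vector
registers modelled, purely for illustration, as Python lists `[x, y, z]`):

    # horizontal (SVM=1) sub-vector reduce for SUBVL=3, following the
    # vec3 schedule above
    def op(a, b):
        return a + b                  # stand-in for the scalar operation

    VL = 2
    RA = [[1, 0, 0], [10, 0, 0]]      # only .x of RA participates
    RB = [[0, 2, 3], [0, 20, 30]]     # .y and .z of RB are accumulated
    RT = [0] * VL

    for i in range(VL):
        RT[i] = op(RA[i][0], RB[i][1])   # op(RA.x, RB.y)
        RT[i] = op(RT[i], RB[i][2])      # op(RT,   RB.z)

    assert RT == [6, 60]              # 1+2+3 and 10+20+30
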
625 In this mode, when Rc=1 the Vector of CRs is as normal: each result
626 element creates a corresponding CR element (for the final, reduced, result).
627
628 Note:
629
630 1. that the destination (RT) is inherently used as an "Accumulator"
631 register, and consequently the Sub-Vector Loop is interruptible.
632 If RT is a Scalar then as usual the main VL Loop terminates at the
633 first predicated element (or the first element if unpredicated).
634 2. that the Sub-Vector designation applies to RA and RB *but not RT*.
635 3. that the number of operations executed is one less than the Sub-vector
636 length
637
638 # Fail-on-first <a name="fail-first"> </a>
639
Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) [[sv/normal]] and CR operations [[sv/cr_ops]].
Note in each case the assumption is that vector elements are required
to appear to be executed in sequential Program Order, element 0 being
the first.
647
648 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
649 ordinary one. Exceptions occur "as normal". However for elements 1
650 and above, if an exception would occur, then VL is **truncated** to the
651 previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either the *previous* element or the current
  one, depending on whether VLi (VL "inclusive") is set.
660
661 Thus the new VL comprises a contiguous vector of results,
662 all of which pass the testing criteria (equal to zero, less than zero).
663
664 The CR-based data-driven fail-on-first is new and not found in ARM
665 SVE or RVV. At the same time it is also "old" because it is a generalisation
666 of the Z80
667 [Block compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
668 instructions, especially
669 [CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir)
670 which is based on CP (compare) as the ultimate "element" (suffix)
671 operation to which the repeat (prefix) is applied.
It is extremely useful for reducing instruction count, however it
requires speculative execution involving modifications of VL
to achieve high performance implementations. An additional mode (RC1=1)
675 effectively turns what would otherwise be an arithmetic operation
676 into a type of `cmp`. The CR is stored (and the CR.eq bit tested
677 against the `inv` field).
678 If the CR.eq bit is equal to `inv` then the Vector is truncated and
679 the loop ends.
680 Note that when RC1=1 the result elements are never stored, only the CRs.
681
682 VLi is only available as an option when `Rc=0` (or for instructions
683 which do not have Rc). When set, the current element is always
684 also included in the count (the new length that VL will be set to).
685 This may be useful in combination with "inv" to truncate the Vector
686 to *exclude* elements that fail a test, or, in the case of implementations
687 of strncpy, to include the terminating zero.
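
As an illustration, the truncation rule alone can be sketched in Python
(all names invented; the CR test is simplified here to an equal-to-zero
check on each element):

    # data-dependent ffirst: truncate VL when CR.eq equals `inv`
    # (the strncpy use-case: copy up to, and with VLi including, the NUL)
    def ffirst_new_VL(elements, VL, inv, VLi):
        for i in range(VL):
            cr_eq = (elements[i] == 0)
            if cr_eq == inv:              # test failed: truncate
                return i + 1 if VLi else i
        return VL                         # no truncation occurred

    data = [104, 105, 0, 120, 121]        # "hi\0xy"
    assert ffirst_new_VL(data, 5, inv=1, VLi=0) == 2  # excludes the NUL
    assert ffirst_new_VL(data, 5, inv=1, VLi=1) == 3  # includes the NUL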
688
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.
694
695 One extremely important aspect of ffirst is:
696
* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be
  set to zero (with the exception of via the SVSTATE SPR). When VL is
  set to zero due to the first element failing the CR bit-test, all
  subsequent vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.
705
706 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
707 to a nonzero value for any implementation-specific reason. For example:
708 it is perfectly reasonable for implementations to alter VL when ffirst
709 LD or ST operations are initiated on a nonaligned boundary, such that
710 within a loop the subsequent iteration of that loop begins subsequent
711 ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
712 workloads or balance resources.
713
CR-based data-dependent fail-first on the other hand MUST NOT truncate
VL arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails. This is because it
is a precise test on which algorithms will rely.
719
720 *Note: there is no reverse-direction for Data-dependent Fail-First.
721 REMAP will need to be activated to invert the ordering of element
722 traversal.*
723
724 ## Data-dependent fail-first on CR operations (crand etc)
725
Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].
733
There are two primary types of CR operations:
735
736 * Those which have a 3-bit operand field (referring to a CR Field)
737 * Those which have a 5-bit operand (referring to a bit within the
738 whole 32-bit CR)
739
740 More details can be found in [[sv/cr_ops]].
741
742 # pred-result mode
743
Pred-result mode may not be applied to CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based
instructions.
756
Arithmetic and Logical Pred-result, for operations which do have Rc=1
or for which RC1 Mode makes sense, is covered in [[sv/normal]].
759
760 # CR Operations
761
762 CRs are slightly more involved than INT or FP registers due to the
763 possibility for indexing individual bits (crops BA/BB/BT). Again however
764 the access pattern needs to be understandable in relation to v3.0B / v3.1B
765 numbering, with a clear linear relationship and mapping existing when
766 SV is applied.
767
768 ## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>
769
770 Numbering relationships for CR fields are already complex due to being
771 in BE format (*the relationship is not clearly explained in the v3.0B
772 or v3.1 specification*). However with some care and consideration
773 the exact same mapping used for INT and FP regfiles may be applied,
774 just to the upper bits, as explained below. The notation
775 `CR{field number}` is used to indicate access to a particular
776 Condition Register Field (as opposed to the notation `CR[bit]`
777 which accesses one bit of the 32 bit Power ISA v3.0B
778 Condition Register)
779
780 `CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is defined, in v3.0B pseudocode, as:
781
782 CR{7-n} = CR[32+n*4:35+n*4]
783
784 For SVP64 the relationship for the sequential
785 numbering of elements is to the CR **fields** within
786 the CR Register, not to individual bits within the CR register.
787
In OpenPOWER v3.0/1, BT, BA and BB are all 5 bits (BF and BFA are 3-bit
CR Field selectors). The top 3 bits (0:2) select one of the 8 CRs; the
bottom 2 bits (3:4) select one of 4 bits *in* that CR (EQ/LT/GT/SO).
The numbering was determined (after 4 months of analysis and research)
to be as follows:
792
    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
798
When it comes to applying SV, it is the CR\_reg number (`CR_index`) to
which SV EXTRA2/3 applies, **not** the CR\_bit portion (bits 3-4):
801
    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA
814
815 Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
816 algorithm to determine CR\_reg is modified to as follows:
817
    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
830
831 Note here that the decoding pattern to determine CR\_bit does not change.
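
For cross-checking, here is a runnable Python transcription of the
above (with `spec` passed, by a convention invented purely for this
example, as a 3-bit integer whose most-significant bit is the vector
flag):

    def sv_cr_index(BA, spec):
        """return (CR_index, bit_index) for 5-bit BA plus 3-bit spec"""
        vector = (spec >> 2) & 1       # spec[0] in MSB0 notation
        spec12 = spec & 0b11           # spec[1:2] in MSB0 notation
        CR_index = 7 - (BA >> 2)       # top 3 bits but BE
        if vector:
            # vector mode, 0-124 increments of 4
            CR_index = (CR_index << 4) | (spec12 << 2)
        else:
            # scalar mode, 0-32 increments of 1
            CR_index = (spec12 << 3) | CR_index
        bit_index = 3 - (BA & 0b11)    # low 2 bits but BE
        return CR_index, bit_index

    print(sv_cr_index(0, 0b000))       # (7, 3): scalar, unchanged
    print(sv_cr_index(0, 0b100))       # (112, 3): a vector of CRs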
832
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
837
838 ## CR fields as inputs/outputs of vector operations
839
CRs (or, the arithmetic operations associated with them) may be marked
as Vectorised or Scalar. When Rc=1 in arithmetic operations that have
no explicit EXTRA to cover the CR, the CR is Vectorised if the
destination is Vectorised. Likewise if the destination is scalar then
so is the CR.
842
When Vectorised, the CR inputs/outputs are sequentially read/written
844 to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
845 writing to CR8 (TBD evaluate) and increase sequentially from there.
846 This is so that:
847
848 * implementations may rely on the Vector CRs being aligned to 8. This
849 means that CRs may be read or written in aligned batches of 32 bits
850 (8 CRs per batch), for high performance implementations.
851 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
852 overwritten by vector Rc=1 operations except for very large VL
853 * CR-based predication, from CR32, is also not interfered with
854 (except by large VL).
855
856 However when the SV result (destination) is marked as a scalar by the
857 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
858 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
859 for FP operations.
860
861 Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
862 has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
863 v3.0B scalar operations produce a **tuple** of element results: the
864 result of the operation as one part of that element *and a corresponding
865 CR element*. Greatly simplified pseudocode:
866
    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc
874
875 If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
876 then a followup instruction must be performed, setting "reduce" mode on
877 the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
878 more flexibility in analysing vectors than standard Vector ISAs. Normal
879 Vector ISAs are typically restricted to "were all results nonzero" and
880 "were some results nonzero". The application of mapreduce to Vectorised
881 cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].
883
Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.
888
889 Additionally,
890 SVP64 [[sv/branches]] may be used, even when the branch itself is to
891 the following instruction. The combined side-effects of CTR reduction
892 and VL truncation provide several benefits.
893
(See [[discussion]]; some alternative schemes are described there.)
895
896 ## Rc=1 when SUBVL!=1
897
Sub-vectors are effectively a form of Packed SIMD (length 2 to 4).
Only 1 bit of predicate is allocated per subvector; likewise only one
CR is allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests, as sketched
below. Given that OE is ignored in SVP64, this field may (when
available) be used to select OR or AND behaviour.
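
A minimal sketch of the idea (assuming, purely for illustration, an EQ
test per sub-element):

    # combine per-sub-element EQ tests into one CR bit per subvector
    def subvector_cr_eq(results, use_and):
        """results: one value per sub-element of a single subvector"""
        tests = [r == 0 for r in results]
        return all(tests) if use_and else any(tests)

    print(subvector_cr_eq([0, 5], use_and=False))  # True: OR of tests
    print(subvector_cr_eq([0, 5], use_and=True))   # False: AND of tests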
907
908 ### Table of CR fields
909
CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).
912
913 CRs are not stored in SPRs: they are registers in their own right.
914 Therefore context-switching the full set of CRs involves a Vectorised
915 mfcr or mtcr, using VL=8 to do so. This is exactly as how
916 scalar OpenPOWER context-switches CRs: it is just that there are now
917 more of them.
918
919 The 64 SV CRs are arranged similarly to the way the 128 integer registers
920 are arranged. TODO a python program that auto-generates a CSV file
921 which can be included in a table, which is in a new page (so as not to
922 overwhelm this one). [[svp64/cr_names]]
923
924 # Register Profiles
925
926 Instructions are broken down by Register Profiles as listed in the
927 following auto-generated page: [[opcode_regs_deduped]]. These tables,
928 despite being auto-generated, are part of the Specification.
929
# SV pseudocode illustration
931
932 ## Single-predicated Instruction
933
Illustration of a normal mode add operation: zeroing is not included
and elwidth overrides are not included. If there is no predicate, it
is set to all 1s.
936
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }
954
955 This has several modes:
956
957 * RT.v = RA.v RB.v
958 * RT.v = RA.v RB.s (and RA.s RB.v)
959 * RT.v = RA.s RB.s
960 * RT.s = RA.v RB.v
961 * RT.s = RA.v RB.s (and RA.s RB.v)
962 * RT.s = RA.s RB.s
963
All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, each result placed into the Vector (or, if the destination is
a scalar, only the first nonpredicated result).
969
970 The one that is not obvious is RT=vector but both RA/RB=scalar.
971 Here this acts as a "splat scalar result", copying the same result into
972 all nonpredicated result elements. If a fixed destination scalar was
973 intended, then an all-Scalar operation should be used.
974
975 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
976
977 # Assembly Annotation
978
979 Assembly code annotation is required for SV to be able to successfully
980 mark instructions as "prefixed".
981
982 A reasonable (prototype) starting point:
983
984 svp64 [field=value]*
985
986 Fields:
987
988 * ew=8/16/32 - element width
989 * sew=8/16/32 - source element width
990 * vec=2/3/4 - SUBVL
991 * mode=mr/satu/sats/crpred
992 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
993
This is similar to the x86 "REX" prefix.
995
996 For actual assembler:
997
998 sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
999
1000 Qualifiers:
1001
1002 * m={pred}: predicate mask mode
1003 * sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
1004 * vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
1005 * ew={N}: ew=8/16/32 - sets elwidth override
1006 * sw={N}: sw=8/16/32 - sets source elwidth override
1007 * ff={xx}: see fail-first mode
1008 * pr={xx}: see predicate-result mode
1009 * sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
1015 * sz: predication with source-zeroing
1016 * dz: predication with dest-zeroing
1017
1018 For modes:
1019
1020 * pred-result:
1021 - pm=lt/gt/le/ge/eq/ne/so/ns
1022 - RC1 mode
1023 * fail-first
1024 - ff=lt/gt/le/ge/eq/ne/so/ns
1025 - RC1 mode
1026 * saturation:
1027 - sats
1028 - satu
1029 * map-reduce:
1030 - mr OR crm: "normal" map-reduce mode or CR-mode.
1031 - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled
1032
1033 # Parallel-reduction algorithm
1034
The principle of SVP64 is that it is a fully-independent Abstraction
of hardware-looping in between the issue and execute phases, one that
has no relation to the operation it issues. Additional state cannot be
saved on context-switching beyond that of SVSTATE, making things
slightly tricky.
1040
1041 Executable demo pseudocode, full version
1042 [here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)
1043
1044 ```
1045 [[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
1046 ```
1047
1048 This algorithm works by noting when data remains in-place rather than
1049 being reduced, and referring to that alternative position on subsequent
1050 layers of reduction. It is re-entrant. If however interrupted and
1051 restored, some implementations may take longer to re-establish the
1052 context.
1053
1054 Its application by default is that:
1055
1056 * RA, FRA or BFA is the first register as the first operand
1057 (ci index offset in the above pseudocode)
1058 * RB, FRB or BFB is the second (co index offset)
1059 * RT (result) also uses ci **if RA==RT**
1060
For more complex applications a REMAP Schedule must be used.
1062
*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*
1066
*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then
limit the Reduction Sequence for all implementors based solely and
exclusively on what one specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract
concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**
1081
# Element-width overrides <a name="elwidth"> </a>
1083
Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and always assumes a little-endian layout:
1087
    typedef union {
        uint8_t b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1110
    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
1123
1124 In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
1125 to fp127) are reinterpreted to be "starting points" in a byte-addressable
1126 memory. Vectors - which become just a virtual naming construct - effectively
1127 overlap.
1128
1129 It is extremely important for implementors to note that the only circumstance
1130 where upper portions of an underlying 64-bit register are zero'd out is
1131 when the destination is a scalar. The ideal register file has byte-level
1132 write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.
1133
1134 An example ADD operation with predication and element width overrides:
1135
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
1146
1147 Thus it can be clearly seen that elements are packed by their
1148 element width, and the packing starts from the source (or destination)
1149 specified by the instruction.
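
A byte-addressable model makes the overlap concrete. The following
Python sketch is illustrative only: one flat bytearray stands in for
the whole GPR file, and the scalar-destination rule (overwriting the
high bits) is deliberately not modelled:

    # model the 128 GPRs as one flat byte-addressable SRAM (little-endian)
    regfile = bytearray(128 * 8)

    def get_polymorphed(reg, bitwidth, offset):
        addr = reg * 8 + offset * (bitwidth // 8)
        return int.from_bytes(regfile[addr:addr + bitwidth // 8], "little")

    def set_polymorphed(reg, bitwidth, offset, val):
        addr = reg * 8 + offset * (bitwidth // 8)
        regfile[addr:addr + bitwidth // 8] = val.to_bytes(bitwidth // 8,
                                                          "little")

    # four 16-bit elements "starting at" r4 fill exactly one register...
    for i in range(4):
        set_polymorphed(4, 16, i, 0x1000 + i)
    # ...and a fifth spills over into r5: vectors overlap the regfile
    set_polymorphed(4, 16, 4, 0x1004)
    assert get_polymorphed(5, 16, 0) == 0x1004
    assert get_polymorphed(4, 64, 0) == 0x1003_1002_1001_1000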
1150
1151 # Twin (implicit) result operations
1152
1153 Some operations in the Power ISA already target two 64-bit scalar
1154 registers: `lq` for example, and LD with update.
1155 Some mathematical algorithms are more
1156 efficient when there are two outputs rather than one, providing
1157 feedback loops between elements (the most well-known being add with
1158 carry). 64-bit multiply
1159 for example actually internally produces a 128 bit result, which clearly
1160 cannot be stored in a single 64 bit register. Some ISAs recommend
1161 "macro op fusion": the practice of setting a convention whereby if
1162 two commonly used instructions (mullo, mulhi) use the same ALU but
1163 one selects the low part of an identical operation and the other
1164 selects the high part, then optimised micro-architectures may
1165 "fuse" those two instructions together, using Micro-coding techniques,
1166 internally.
1167
1168 The practice and convention of macro-op fusion however is not compatible
1169 with SVP64 Horizontal-First, because Horizontal Mode may only
1170 be applied to a single instruction at a time, and SVP64 is based on
1171 the principle of strict Program Order even at the element
1172 level. Thus it becomes
1173 necessary to add explicit more complex single instructions with
1174 more operands than would normally be seen in the average RISC ISA
1175 (3-in, 2-out, in some cases). If it
1176 was not for Power ISA already having LD/ST with update as well as
1177 Condition Codes and `lq` this would be hard to justify.
1178
1179 With limited space in the `EXTRA` Field, and Power ISA opcodes
1180 being only 32 bit, 5 operands is quite an ask. `lq` however sets
1181 a precedent: `RTp` stands for "RT pair". In other words the result
1182 is stored in RT and RT+1. For Scalar operations, following this
1183 precedent is perfectly reasonable. In Scalar mode,
1184 `madded` therefore stores the two halves of the 128-bit multiply
1185 into RT and RT+1.
1186
1187 What, then, of `sv.madded`? If the destination is hard-coded to
1188 RT and RT+1 the instruction is not useful when Vectorised because
1189 the output will be overwritten on the next element. To solve this
1190 is easy: define the destination registers as RT and RT+MAXVL
1191 respectively. This makes it easy for compilers to statically allocate
1192 registers even when VL changes dynamically.
1193
1194 Bear in mind that both RT and RT+MAXVL are starting points for Vectors,
1195 and bear in mind that element-width overrides still have to be taken
1196 into consideration, the starting point for the implicit destination
1197 is best illustrated in pseudocode:
1198
    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both starting from RT
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }
1215
1216 The significant part here is that the second half is stored
1217 starting not from RT+MAXVL at all: it is the *element* index
1218 that is offset by MAXVL, both halves actually starting from RT.
1219 If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
1220 RT0 to RT2 are stored:
1221
             0..31      32..63
    r0       unchanged  unchanged
    r1       RT0.lo     RT1.lo
    r2       RT2.lo     unchanged
    r3       unchanged  RT0.hi
    r4       RT1.hi     RT2.hi
    r5       unchanged  unchanged
1229
Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32-bit quantities) counting from r1.
1234
1235 *Programmer's note: accessing registers that have been placed
1236 starting on a non-contiguous boundary (half-way along a scalar
1237 register) can be inconvenient: REMAP can provide an offset but
1238 it requires extra instructions to set up. A simple solution
1239 is to ensure that MAXVL is rounded up such that the Vector
1240 ends cleanly on a contiguous register boundary. MAXVL=6 in
1241 the above example would achieve that*
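
The arithmetic is easy to check with a throwaway helper (Python,
reusing the flat byte-addressable view of the register file from the
elwidth section; the function name is invented):

    # where does element `idx` of a vector starting at `reg` land,
    # for a given element width in bits?
    def elem_location(reg, elwidth, idx):
        byte = reg * 8 + idx * (elwidth // 8)
        return byte // 8, byte % 8   # (register, byte offset within it)

    # madded with RT=1, MAXVL=5, destwid=32: the HI half of element 0
    # is at element index MAXVL=5, landing half-way into r3
    print(elem_location(1, 32, 0))   # (1, 0): RT0.lo
    print(elem_location(1, 32, 5))   # (3, 4): RT0.hi
    print(elem_location(1, 32, 6))   # (4, 0): RT1.hi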
1242
1243 Additional DRAFT Scalar instructions in 3-in 2-out form
1244 with an implicit 2nd destination:
1245
1246 * [[isa/svfixedarith]]
1247 * [[isa/svfparith]]
1248