1 [[!tag standards]]
2
3 # Appendix
4
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
6 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
8 * <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
9 * <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
10
11 This is the appendix to [[sv/svp64]], providing explanations of modes
12 etc. leaving the main svp64 page's primary purpose as outlining the
13 instruction format.
14
15 Table of contents:
16
17 [[!toc]]
18
19 # Partial Implementations
20
21 It is perfectly legal to implement subsets of SVP64 as long as illegal
22 instruction traps are always raised on unimplemented features,
23 so that soft-emulation is possible,
24 even for future revisions of SVP64. With SVP64 being partly controlled
25 through contextual SPRs, a little care has to be taken.
26
**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap if read or written. This allows software the
opportunity to emulate the context created by the given SPR.
31
32 See [[sv/compliancy_levels]] for full details.
33
34 # XER, SO and other global flags
35
36 Vector systems are expected to be high performance. This is achieved
37 through parallelism, which requires that elements in the vector be
38 independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
39 Read-Write Hazards on single-bit global resources, having a significant
40 detrimental effect.
41
42 Consequently in SV, XER.SO behaviour is disregarded (including
43 in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
44 breaking the Read-Modify-Write Hazard Chain that complicates
45 microarchitectural implementations.
46 This includes when `scalar identity behaviour` occurs. If precise
47 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
48 instructions should be used without an SV Prefix.
49
50 TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf
51
52 Of note here is that XER.SO and OV may already be disregarded in the
53 Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
54 SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
55 but only for SVP64 Prefixed Operations.
56
57 XER.CA/CA32 on the other hand is expected and required to be implemented
58 according to standard Power ISA Scalar behaviour. Interestingly, due
59 to SVP64 being in effect a hardware for-loop around Scalar instructions
60 executing in precise Program Order, a little thought shows that a Vectorised
61 Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
62 and producing, at the end, a single bit Carry out. High performance
63 implementations may exploit this observation to deploy efficient
64 Parallel Carry Lookahead.
65
    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise
74
75 It can clearly be seen that the carry chains from one
76 64 bit add to the next, the end result being that a
77 256-bit "Big Integer Add with Carry" has been performed, and that
78 CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
79 may be performed by setting VL=8, and a one-instruction
80 1024-bit Add-with-Carry by setting VL=16, and so on. More on
81 this in [[openpower/sv/biginteger]]
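The chained-carry behaviour can be modelled in a few lines of Python (an
illustrative sketch, not the hardware algorithm): each element's `adde`
consumes the carry produced by the previous element, so VL 64-bit limbs
behave exactly like one (64*VL)-bit add.

```python
MASK64 = (1 << 64) - 1

def sv_adde(ra, rb, vl, ca=0):
    """Model of sv.adde with VL elements: a chain of 64-bit
    add-with-carry operations, equivalent to a single (64*VL)-bit
    add. Returns the result limbs plus the final carry (the
    "257th bit" when VL=4)."""
    rt = []
    for i in range(vl):
        s = ra[i] + rb[i] + ca
        rt.append(s & MASK64)   # low 64 bits go to the result element
        ca = s >> 64            # carry-out feeds the next element
    return rt, ca
```

For example, adding 1 to the all-ones 256-bit value (four limbs of all
ones) produces four zero limbs and a final carry of 1.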
82
83 # v3.0B/v3.1 relevant instructions
84
85 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
86 CPU ISA.
87
Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).
95
Likewise, Load/Store Multiple instructions make no sense to retain:
not only is their functionality provided by SV, but the SV alternatives
may be predicated as well, making them far better suited to use in
function calls and context-switching.
100
101 Additionally, some v3.0/1 instructions simply make no sense at all in a
102 Vector context: `rfid` falls into this category,
103 as well as `sc` and `scv`. Here there is simply no point
104 trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
105 should be called instead.
106
107 Fortuitously this leaves several Major Opcodes free for use by SV
108 to fit alternative future instructions. In a 3D context this means
109 Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
110 operations, and others critical to an efficient, effective 3D GPU and
111 VPU ISA. With such instructions being included as standard in other
112 commercially-successful GPU ISAs it is likewise critical that a 3D
113 GPU/VPU based on svp64 also have such instructions.
114
115 Note however that svp64 is stand-alone and is in no way
116 critically dependent on the existence or provision of 3D GPU or VPU
117 instructions. These should be considered extensions, and their discussion
118 and specification is out of scope for this document.
119
120 Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
121 v3.1B is *not* altered by svp64 in any way.
122
123 ## Major opcode map (v3.0B)
124
125 This table is taken from v3.0B.
126 Table 9: Primary Opcode Map (opcode bits 0:5)
127
128 ```
129 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
130 000 | | | tdi | twi | EXT04 | | | mulli | 000
131 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
132 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm | | rlwnm | 010
133 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
134 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
135 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw | 101
136 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
137 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 | 111
138 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
139 ```
140
It is important to note that giving a v3.0B Scalar opcode a different
meaning under SVP64 prefixing is highly undesirable: the complexity
in the decoder is greatly increased, through breaking of the RISC paradigm.
144
145 # EXTRA Field Mapping
146
147 The purpose of the 9-bit EXTRA field mapping is to mark individual
148 registers (RT, RA, BFA) as either scalar or vector, and to extend
149 their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
150 Three of the 9 bits may also be used up for a 2nd Predicate (Twin
151 Predication) leaving a mere 6 bits for qualifying registers. As can
152 be seen there is significant pressure on these (and in fact all) SVP64 bits.
153
154 In Power ISA v3.1 prefixing there are bits which describe and classify
155 the prefix in a fashion that is independent of the suffix. MLSS for
156 example. For SVP64 there is insufficient space to make the SVP64 Prefix
157 "self-describing", and consequently every single Scalar instruction
158 had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
159 This process was semi-automated and is described in this section.
160 The final results, which are part of the SVP64 Specification, are here:
161 [[openpower/opcode_regs_deduped]]
162
163 * Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
164 from reading the markdown formatted version of the Scalar pseudocode
165 which is machine-readable and found in [[openpower/isatables]]. The
166 analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
167 for example is given a designation `RM-2R-1W` because it requires
168 two GPR reads and one GPR write.
169 * Secondly, the total number of registers was added up (2R-1W is 3 registers)
170 and if less than or equal to three then that instruction could be given an
171 EXTRA3 designation. Four or more is given an EXTRA2 designation because
172 there are only 9 bits available.
173 * Thirdly, the instruction was analysed to see if Twin or Single
174 Predication was suitable. As a general rule this was if there
175 was only a single operand and a single result (`extw` and LD/ST)
176 however it was found that some 2 or 3 operand instructions also
177 qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
178 in Twin Predication, some compromises were made, here. LDST is
179 Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
180 * Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
181 could have been decided
182 that RA would be indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
183 and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
184 RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
185 (because it is possible to do, and perceived to be useful). Rc=1
186 co-results (CR0, CR1) are always given the same EXTRA index as their
187 main result (RT, FRT).
188 * Fifthly, in an automated process the results of the analysis
189 were outputted in CSV Format for use in machine-readable form
190 by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
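The EXTRA2/EXTRA3 classification step (the "Secondly" bullet) can be
sketched as a trivial rule. This is an illustrative sketch only, and the
profile-string format parsed here is an assumption based on the
`RM-2R-1W` example above; `sv_analysis.py` is the canonical implementation.

```python
def extra_designation(profile):
    """Given a Register Profile string such as "RM-2R-1W" (two GPR
    reads, one GPR write), choose EXTRA3 when the operand total fits
    in three 3-bit EXTRA fields; four or more operands must share the
    9 available bits, forcing EXTRA2."""
    parts = profile.split("-")          # ["RM", "2R", "1W"]
    reads = int(parts[1].rstrip("R"))
    writes = int(parts[2].rstrip("W"))
    return "EXTRA3" if reads + writes <= 3 else "EXTRA2"
```

Thus `add RT, RA, RB` (`RM-2R-1W`, three registers) qualifies for EXTRA3,
while any four-operand instruction drops to EXTRA2.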
191
192 This process was laborious but logical, and, crucially, once a
193 decision is made (and ratified) cannot be reversed.
194 Qualifying future Power ISA Scalar instructions for SVP64
195 is **strongly** advised to utilise this same process and the same
196 sv_analysis.py program as a canonical method of maintaining the
197 relationships. Alterations to that same program which
198 change the Designation is **prohibited** once finalised (ratified
199 through the Power ISA WG Process). It would
200 be similar to deciding that `add` should be changed from X-Form
201 to D-Form.
202
203 # Single Predication <a name="1p"> </a>
204
This is a standard mode normally found in Vector ISAs: every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.
206
207 In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep, but depending on whether sz and/or dz are set, srcstep and
208 dststep can still potentially become different indices. Only when sz=dz
209 is srcstep guaranteed to equal dststep at all times.
210
Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.
213
214 Example 1:
215
216 * VL=4
217 * mask=0b1101
* sz=1, dz=0
219
220 The following schedule for srcstep and dststep will occur:
221
222 | srcstep | dststep | comment |
223 | ---- | ----- | -------- |
224 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
226 | 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
227 | end | end | loop has ended because dst reached VL-1 |
228
229 Example 2:
230
231 * VL=4
232 * mask=0b1101
* sz=0, dz=1
234
235 The following schedule for srcstep and dststep will occur:
236
237 | srcstep | dststep | comment |
238 | ---- | ----- | -------- |
239 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
240 | 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
241 | 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
242 | end | end | loop has ended because src reached VL-1 |
243
244 In both these examples it is crucial to note that despite there being
245 a single predicate mask, with sz and dz being different, srcstep and
246 dststep are being requested to react differently.
247
248 Example 3:
249
250 * VL=4
251 * mask=0b1101
252 * sz=0, dz=0
253
254 The following schedule for srcstep and dststep will occur:
255
256 | srcstep | dststep | comment |
257 | ---- | ----- | -------- |
258 | 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
259 | 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
260 | 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
261 | end | end | loop has ended because src and dst reached VL-1 |
262
Here, both srcstep and dststep remain in lockstep because sz=dz (both zero).
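The three example schedules above can be generated by a short model (an
illustrative sketch), under the convention that a zeroing bit set to 1
means the masked-out element is *not* skipped (a zero is substituted for
it instead), whereas zeroing clear means the step skips it entirely:

```python
def pred_schedule(VL, mask, sz, dz):
    """Return the (srcstep, dststep) pairs issued under
    Single-Predication. A step skips masked-out elements only when
    its zeroing bit is clear; with zeroing set the element is still
    stepped through."""
    srcstep = dststep = 0
    pairs = []
    while srcstep < VL and dststep < VL:
        if not sz:  # skip masked-out source elements
            while srcstep < VL and not (mask >> srcstep) & 1:
                srcstep += 1
        if not dz:  # skip masked-out destination elements
            while dststep < VL and not (mask >> dststep) & 1:
                dststep += 1
        if srcstep >= VL or dststep >= VL:
            break
        pairs.append((srcstep, dststep))
        srcstep += 1
        dststep += 1
    return pairs
```

With VL=4 and mask=0b1101 this reproduces the three tables above, one
per (sz, dz) combination.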
264
265 # EXTRA Pack/Unpack Modes
266
267 The pack/unpack concept of VSX `vpack` is abstracted out as a Sub-Vector
268 reordering Schedule, named `RM-2P-1S1D-PU`.
269 The usual RM-2P-1S1D is reduced from EXTRA3 to EXTRA2, making
270 room for 2 extra bits that enable either "packing" or "unpacking"
271 on the subvectors vec2/3/4.
272
Illustrating a
"normal" SVP64 operation with `SUBVL!=1` (assuming no elwidth overrides):
275
    def index():
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL+j

    for idx in index():
        operation_on(RA+idx)
283
284 For pack/unpack (again, no elwidth overrides):
285
    # yield an outer-SUBVL or inner-VL loop with SUBVL
    def index_p(outer):
        if outer:
            for j in range(SUBVL):      # SUBVL is the outer loop
                for i in range(VL):     # VL is the inner loop
                    yield i*SUBVL+j
        else:
            for i in range(VL):         # VL is the outer loop
                for j in range(SUBVL):  # SUBVL is the inner loop
                    yield i*SUBVL+j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)
300
301 "yield" from python is used here for simplicity and clarity.
302 The two Finite State Machines for the generation of the source
303 and destination element offsets progress incrementally in
304 lock-step.
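A runnable sketch for VL=3, SUBVL=2 (vec2) makes the reordering concrete.
`pack_move` is an illustrative helper (not a real instruction), and the
SUBVL-outer index calculation `i*SUBVL+j` is this sketch's assumed
reading of the intended reordering:

```python
VL, SUBVL = 3, 2   # three vec2 elements

def index_p(outer):
    """Element offsets in SUBVL-outer order when 'outer' is set,
    otherwise in the usual VL-outer (array-of-vec2) order."""
    if outer:
        for j in range(SUBVL):
            for i in range(VL):
                yield i * SUBVL + j
    else:
        for i in range(VL):
            for j in range(SUBVL):
                yield i * SUBVL + j

def pack_move(src, PACK, UNPACK):
    """Walk source and destination offsets in lock-step, as in the
    pseudocode above, and perform the element moves."""
    dst = [None] * (VL * SUBVL)
    for s, d in zip(index_p(PACK), index_p(UNPACK)):
        dst[d] = src[s]
    return dst

regs = ["x0", "y0", "x1", "y1", "x2", "y2"]   # vec2 elements, AoS layout
```

Packing (`PACK=1, UNPACK=0`) gathers all `x` sub-elements ahead of all
`y` sub-elements; unpacking (`PACK=0, UNPACK=1`) is its exact inverse.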
305
306 Setting of both `PACK_en` and `UNPACK_en` is neither prohibited nor
307 `UNDEFINED` because the reordering is fully deterministic, and
308 additional REMAP reordering may be applied. For Matrix this would
309 give potentially up to 4 Dimensions of reordering.
310
311 Pack/Unpack applies to mv operations and some other single-source
312 single-destination operations such as Indexed LD/ST and extsw.
[[sv/mv.swizzle]] has a slightly different pseudocode algorithm
314 for Vertical-First Mode.
315
316 # Twin Predication <a name="2p"> </a>
317
318 This is a novel concept that allows predication to be applied to a single
319 source and a single dest register. The following types of traditional
320 Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
322
323 * VSPLAT (a single scalar distributed across a vector)
324 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
325 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
326 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
327 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
328
329 Those patterns (and more) may be applied to:
330
331 * mv (the usual way that V\* ISA operations are created)
332 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
334 those that take RA as both a src and dest. These are not
335 1-src 1-dest, they are 2-src, 1-dest)
336 * LD and ST (treating AGEN as one source)
337 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
338 * Condition Register ops mfcr, mtcr and other similar
339
340 This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
342
343 Additional unusual capabilities of Twin Predication include a back-to-back
344 version of VCOMPRESS-VEXPAND which is effectively the ability to do
345 sequentially ordered multiple VINSERTs. The source predicate selects a
346 sequentially ordered subset of elements to be inserted; the destination
347 predicate specifies the sequentially ordered recipient locations.
348 This is equivalent to
349 `llvm.masked.compressstore.*`
350 followed by
351 `llvm.masked.expandload.*`
352 with a single instruction.
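A minimal model of a twin-predicated element move (an illustrative
sketch, ignoring zeroing) shows how both VCOMPRESS and VEXPAND fall out
of the same operation:

```python
def twin_pred_mv(src, dst, srcmask, dstmask, VL):
    """Twin-predicated move: srcstep skips masked-out source
    elements, dststep skips masked-out destination slots, and both
    advance together after each copy."""
    s = d = 0
    while s < VL and d < VL:
        while s < VL and not (srcmask >> s) & 1:
            s += 1
        while d < VL and not (dstmask >> d) & 1:
            d += 1
        if s < VL and d < VL:
            dst[d] = src[s]
            s += 1
            d += 1
    return dst

# VCOMPRESS: sparse source predicate, all-ones destination predicate
compressed = twin_pred_mv([10, 20, 30, 40], [0] * 4, 0b0110, 0b1111, 4)
# VEXPAND: all-ones source predicate, sparse destination predicate
expanded = twin_pred_mv([10, 20, 30, 40], [0] * 4, 0b1111, 0b1010, 4)
```

The compress case gathers the selected elements into contiguous
low-numbered slots; the expand case scatters contiguous elements out to
the selected slots.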
353
354 This extreme power and flexibility comes down to the fact that SVP64
355 is not actually a Vector ISA: it is a loop-abstraction-concept that
356 is applied *in general* to Scalar operations, just like the x86
357 `REP` instruction (if put on steroids).
358
359 # Reduce modes
360
361 Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
362 Vector ISA would have explicit Reduce opcodes with defined characteristics
363 per operation: in SX Aurora there is even an additional scalar argument
364 containing the initial reduction value, and the default is either 0
365 or 1 depending on the specifics of the explicit opcode.
366 SVP64 fundamentally has to
367 utilise *existing* Scalar Power ISA v3.0B operations, which presents some
368 unique challenges.
369
370 The solution turns out to be to simply define reduction as permitting
371 deterministic element-based schedules to be issued using the base Scalar
372 operations, and to rely on the underlying microarchitecture to resolve
373 Register Hazards at the element level. This goes back to
374 the fundamental principle that SV is nothing more than a Sub-Program-Counter
375 sitting between Decode and Issue phases.
376
377 For Scalar Reduction,
378 Microarchitectures *may* take opportunities to parallelise the reduction
379 but only if in doing so they preserve strict Program Order at the Element Level.
380 Opportunities where this is possible include an `OR` operation
381 or a MIN/MAX operation: it may be possible to parallelise the reduction,
382 but for Floating Point it is not permitted due to different results
383 being obtained if the reduction is not executed in strict Program-Sequential
384 Order.
385
386 In essence it becomes the programmer's responsibility to leverage the
387 pre-determined schedules to desired effect.
388
389 ## Scalar result reduction and iteration
390
391 Scalar Reduction per se does not exist, instead is implemented in SVP64
392 as a simple and natural relaxation of the usual restriction on the Vector
393 Looping which would terminate if the destination was marked as a Scalar.
394 Scalar Reduction by contrast *keeps issuing Vector Element Operations*
395 even though the destination register is marked as scalar.
396 Thus it is up to the programmer to be aware of this, observe some
397 conventions, and thus end up achieving the desired outcome of scalar
398 reduction.
399
400 It is also important to appreciate that there is no
401 actual imposition or restriction on how this mode is utilised: there
402 will therefore be several valuable uses (including Vector Iteration
403 and "Reverse-Gear")
404 and it is up to the programmer to make best use of the
405 (strictly deterministic) capability
406 provided.
407
408 In this mode, which is suited to operations involving carry or overflow,
409 one register must be assigned, by convention by the programmer to be the
410 "accumulator". Scalar reduction is thus categorised by:
411
412 * One of the sources is a Vector
413 * the destination is a scalar
414 * optionally but most usefully when one source scalar register is
415 also the scalar destination (which may be informally termed
416 the "accumulator")
417 * That the source register type is the same as the destination register
418 type identified as the "accumulator". Scalar reduction on `cmp`,
419 `setb` or `isel` makes no sense for example because of the mixture
420 between CRs and GPRs.
421
*Note that instructions issued in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
425 Scalar reduce is strictly defined behaviour, and the cost in
426 hardware terms of prohibition of seemingly non-sensical operations is too great.
427 Therefore it is permitted and required to be executed successfully.
428 Implementors **MAY** choose to optimise such instructions in instances
429 where their use results in "extraneous execution", i.e. where it is clear
430 that the sequence of operations, comprising multiple overwrites to
431 a scalar destination **without** cumulative, iterative, or reductive
432 behaviour (no "accumulator"), may discard all but the last element
433 operation. Identification
434 of such is trivial to do for `setb` and `cmp`: the source register type is
435 a completely different register file from the destination.
436 Likewise Scalar reduction when the destination is a Vector
437 is as if the Reduction Mode was not requested. However it would clearly
438 be unacceptable to perform such optimisations on cache-inhibited LD/ST,
439 so some considerable care needs to be taken.*
440
441 Typical applications include simple operations such as `ADD r3, r10.v,
442 r3` where, clearly, r3 is being used to accumulate the addition of all
443 elements of the vector starting at r10.
444
    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA
448
449 However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
450 SV ordinarily
451 **terminates** at the first scalar operation. Only by marking the
452 operation as "mapreduce" will it continue to issue multiple sub-looped
453 (element) instructions in `Program Order`.
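The termination difference can be modelled directly (an illustrative
sketch using a flat integer register file):

```python
def sv_add_scalar_dest(regs, RT, RA, RB, VL, mapreduce):
    """Scalar-destination sv.add model. Without /mr the loop
    terminates after the first element operation; with /mr all VL
    element operations are issued in Program Order, accumulating
    when RT == RA."""
    for i in range(VL):
        regs[RT] = regs[RA] + regs[RB + i]
        if not mapreduce:
            break  # scalar destination ordinarily ends the loop here
    return regs
```

With r3 as both source and destination (`sv.add/mr r3, r3, r10.v`,
VL=4), the accumulator receives the sum of r10 through r13; without
`/mr` only the first element is added.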
454
455 To perform the loop in reverse order, the ```RG``` (reverse gear) bit must be set. This may be useful in situations where the results may be different
456 (floating-point) if executed in a different order. Given that there is
457 no actual prohibition on Reduce Mode being applied when the destination
458 is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
459 or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
460 for example will start at the opposite end of the Vector and push
461 a cumulative series of overlapping add operations into the Execution units of
462 the underlying hardware.
463
464 Other examples include shift-mask operations where a Vector of inserts
465 into a single destination register is required (see [[sv/bitmanip]], bmset),
466 as a way to construct
467 a value quickly from multiple arbitrary bit-ranges and bit-offsets.
468 Using the same register as both the source and destination, with Vectors
469 of different offsets masks and values to be inserted has multiple
470 applications including Video, cryptography and JIT compilation.
471
    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v
478
479 Due to the Deterministic Scheduling,
480 Subtract and Divide are still permitted to be executed in this mode,
481 although from an algorithmic perspective it is strongly discouraged.
482 It would be better to use addition followed by one final subtract,
483 or in the case of divide, to get better accuracy, to perform a multiply
484 cascade followed by a final divide.
485
486 Note that single-operand or three-operand scalar-dest reduce is perfectly
487 well permitted: the programmer may still declare one register, used as
488 both a Vector source and Scalar destination, to be utilised as
489 the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
490 this naturally fits well with the normal expected usage of these
491 operations.
492
493 If an interrupt or exception occurs in the middle of the scalar mapreduce,
494 the scalar destination register **MUST** be updated with the current
495 (intermediate) result, because this is how ```Program Order``` is
496 preserved (Vector Loops are to be considered to be just another way of issuing instructions
497 in Program Order). In this way, after return from interrupt,
498 the scalar mapreduce may continue where it left off. This provides
499 "precise" exception behaviour.
500
501 Note that hardware is perfectly permitted to perform multi-issue
502 parallel optimisation of the scalar reduce operation: it's just that
503 as far as the user is concerned, all exceptions and interrupts **MUST**
504 be precise.
505
506 ## Vector result reduce mode
507
508 Vector Reduce Mode issues a deterministic tree-reduction schedule to the underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
509 (Power ISA v3.0B) operation is leveraged, unmodified, to give the
510 *appearance* and *effect* of Reduction.
511
512 Vector-result reduction **requires**
513 the destination to be a Vector, which will be used to store
514 intermediary results.
515
516 Given that the tree-reduction schedule is deterministic,
517 Interrupts and exceptions
518 can therefore also be precise. The final result will be in the first
519 non-predicate-masked-out destination element, but due again to
520 the deterministic schedule programmers may find uses for the intermediate
521 results.
522
523 When Rc=1 a corresponding Vector of co-resultant CRs is also
524 created. No special action is taken: the result and its CR Field
525 are stored "as usual" exactly as all other SVP64 Rc=1 operations.
526
527 Note that the Schedule only makes sense on top of certain instructions:
528 X-Form with a Register Profile of `RT,RA,RB` is fine. Like Scalar
529 Reduction, nothing is prohibited:
530 the results of execution on an unsuitable instruction may simply
531 not make sense. Many 3-input instructions (madd, fmadd) unlike Scalar
532 Reduction in particular do not make sense, but `ternlogi`, if used
533 with care, would.
534
535 **Parallel-Reduction with Predication**
536
537 To avoid breaking the strict RISC-paradigm, keeping the Issue-Schedule
538 completely separate from the actual element-level (scalar) operations,
539 Move operations are **not** included in the Schedule. This means that
540 the Schedule leaves the final (scalar) result in the first-non-masked
541 element of the Vector used. With the predicate mask being dynamic
542 (but deterministic) this result could be anywhere.
543
544 If that result is needed to be moved to a (single) scalar register
545 then a follow-up `sv.mv/sm=predicate rt, ra.v` instruction will be
546 needed to get it, where the predicate is the exact same predicate used
547 in the prior Parallel-Reduction instruction. For *some* implementations
548 this may be a slow operation. It may be better to perform a pre-copy
549 of the values, compressing them (VREDUCE-style) into a contiguous block,
550 which will guarantee that the result goes into the very first element
551 of the destination vector.
552
553 **Usage conditions**
554
555 The simplest usage is to perform an overwrite, specifying all three
556 register operands the same.
557
    setvl VL=6
    sv.add/vr 8.v, 8.v, 8.v
560
561 The Reduction Schedule will issue the Parallel Tree Reduction spanning
562 registers 8 through 13, by adjusting the offsets to RT, RA and RB as
563 necessary (see "Parallel Reduction algorithm" in a later section).
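One possible schedule is a simple halving tree. This is an illustrative
sketch only: the specification's actual "Parallel Reduction algorithm"
may issue a different (but equally deterministic) sequence of offsets.

```python
def tree_reduce_schedule(base, vl):
    """Issue a halving-tree of (RT, RA, RB) register numbers.
    Each step stores the partial result in the lower element of the
    pair, so the final result lands in the first element ('base')."""
    steps = []
    stride = 1
    while stride < vl:
        for i in range(0, vl, stride * 2):
            if i + stride < vl:
                steps.append((base + i, base + i, base + i + stride))
        stride *= 2
    return steps

# schedule for sv.add/vr 8.v, 8.v, 8.v with VL=6
schedule = tree_reduce_schedule(8, 6)
```

Note that VL-1 element operations are issued in total, and intermediary
results are visible in the lower-numbered elements, matching the
overwrite behaviour described above.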
564
565 A non-overwrite is possible as well but just as with the overwrite
566 version, only those destination elements necessary for storing
567 intermediary computations will be written to: the remaining elements
568 will **not** be overwritten and will **not** be zero'd.
569
    setvl VL=4
    sv.add/vr 0.v, 8.v, 8.v
572
573 ## Sub-Vector Horizontal Reduction
574
Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16.v`:
578
    for i in range(0, VL):
        # RA==RT in the instruction (does not have to be)
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)
583
584 Thus logically there is nothing special or unanticipated about
585 `SVM=0`: it is expected behaviour according to standard SVP64
586 Sub-Vector rules.
587
588 By contrast, when SVM is set and SUBVL!=1, a Horizontal
589 Subvector mode is enabled, which behaves very much more
590 like a traditional Vector Processor Reduction instruction.
591
Example for a vec2:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)

Example for a vec3:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].z)

Example for a vec4:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].z)
        iregs[RT+i] = op(iregs[RT+i] , iregs[RB+i].w)
609
610 In this mode, when Rc=1 the Vector of CRs is as normal: each result
611 element creates a corresponding CR element (for the final, reduced, result).
612
613 Note:
614
615 1. that the destination (RT) is inherently used as an "Accumulator"
616 register, and consequently the Sub-Vector Loop is interruptible.
617 If RT is a Scalar then as usual the main VL Loop terminates at the
618 first predicated element (or the first element if unpredicated).
619 2. that the Sub-Vector designation applies to RA and RB *but not RT*.
620 3. that the number of operations executed is one less than the Sub-vector
621 length
622
623 # Fail-on-first
624
Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
632
633 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
634 ordinary one. Exceptions occur "as normal". However for elements 1
635 and above, if an exception would occur, then VL is **truncated** to the
636 previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
CR-creating operation produces a result (including cmp). Similar to
639 branch, an analysis of the CR is performed and if the test fails, the
640 vector operation terminates and discards all element operations
641 above the current one (and the current one if VLi is not set),
642 and VL is truncated to either
643 the *previous* element or the current one, depending on whether
644 VLi (VL "inclusive") is set.
645
646 Thus the new VL comprises a contiguous vector of results,
647 all of which pass the testing criteria (equal to zero, less than zero).
648
649 The CR-based data-driven fail-on-first is new and not found in ARM
650 SVE or RVV. It is extremely useful for reducing instruction count,
651 however requires speculative execution involving modifications of VL
652 to get high performance implementations. An additional mode (RC1=1)
653 effectively turns what would otherwise be an arithmetic operation
654 into a type of `cmp`. The CR is stored (and the CR.eq bit tested
655 against the `inv` field).
656 If the CR.eq bit is equal to `inv` then the Vector is truncated and
657 the loop ends.
658 Note that when RC1=1 the result elements are never stored, only the CRs.
659
660 VLi is only available as an option when `Rc=0` (or for instructions
661 which do not have Rc). When set, the current element is always
662 also included in the count (the new length that VL will be set to).
663 This may be useful in combination with "inv" to truncate the Vector
664 to *exclude* elements that fail a test, or, in the case of implementations
665 of strncpy, to include the terminating zero.
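The truncation rule can be modelled as follows (an illustrative sketch:
`test` stands in for the selected CR bit compared against `inv`):

```python
def ffirst_truncate(values, test, vli=False):
    """Return the new VL after CR-based data-dependent fail-on-first.
    Elements are tested in sequential Program Order; on the first
    failure VL is truncated to exclude the failing element, or to
    include it when VLi ("VL inclusive") is set. May return zero."""
    vl = 0
    for v in values:
        if not test(v):
            return vl + 1 if vli else vl
        vl += 1
    return vl
```

For a strncpy-style loop, `vli=True` keeps the terminating zero inside
the new VL; with `vli=False` the Vector comprises only elements that
passed the test, and VL may legitimately become zero.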

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be
  set to zero (with the exception of via the SV.STATE SPR). When VL is
  set to zero due to the first element failing the CR bit-test, all
  subsequent vectorised operations are effectively `nops`, which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins the following
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent fail-first on the other hand MUST NOT truncate
VL arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied to CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B /
v3.1B numbering, with a clear linear relationship and mapping existing
when SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`,
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential numbering of elements
is to the CR **fields** within the CR Register, not to individual
bits within the CR register.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months
of analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
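
A directly-executable Python version of the above (a sketch: the CR is modelled as a plain list of eight 4-bit fields indexed by the `CR{}` convention):

```python
def decode_cr_bit(CR, BA):
    """Decode a 5-bit BA operand: top 3 bits select the CR field,
    low 2 bits select the bit within it, both in BE numbering."""
    CR_index = 7 - (BA >> 2)        # top 3 bits but BE
    bit_index = 3 - (BA & 0b11)     # low 2 bits but BE
    CR_reg = CR[CR_index]           # get the (4-bit) CR field
    # finally get the bit from the CR
    return (CR_reg & (1 << bit_index)) != 0

crs = [0] * 8
crs[7] = 0b1000                     # set one bit in CR{7}
bit = decode_cr_bit(crs, 0b00000)   # BA=0 selects exactly that bit
```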

When it comes to applying SV, it is the CR\_reg number to which SV
EXTRA2/3 applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.
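
For clarity, a runnable Python version of the modified index computation (a sketch; `spec` is passed as two separate values, the vector/scalar bit `s0` and the 2-bit extension `s12`, rather than the bit-sliced notation above):

```python
def sv_cr_index(BA, s0, s12):
    """Compute the SV CR field index: EXTRA widens the CR *field*
    number (vector mode: increments of 4; scalar mode: increments
    of 1) while the bit-within-field decode is unchanged."""
    CR_index = 7 - (BA >> 2)                    # top 3 bits but BE
    if s0:
        CR_index = (CR_index << 4) | (s12 << 2) # vector mode
    else:
        CR_index = (s12 << 3) | CR_index        # scalar mode
    bit_index = 3 - (BA & 0b11)                 # unchanged from v3.0B
    return CR_index, bit_index

scalar = sv_cr_index(0b00000, 0, 0b00)  # plain v3.0B-compatible decode
vector = sv_cr_index(0b00000, 1, 0b01)  # same BA, vector-marked
```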

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them) may be marked
as Vectorised or Scalar. When Rc=1 in arithmetic operations that have
no explicit EXTRA to cover the CR, the CR is Vectorised if the
destination is Vectorised. Likewise if the destination is scalar then
so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD
VSX, which has a single CR (CR6) for a given SIMD result, SV Vectorised
OpenPOWER v3.0B scalar operations produce a **tuple** of element
results: the result of the operation as one part of that element *and
a corresponding CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc
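
The tuple-of-results pseudocode can be run directly in Python (a sketch: the register file and CRs are plain Python containers, and only the eq/gt/lt bits are modelled):

```python
def sv_add_rc1(iregs, RT, RA, RB, VL):
    """Vectorised add with Rc=1: each element yields a result *and*
    a corresponding CR field, the CR Vector starting at CR8."""
    crs = {}
    for i in range(VL):
        iregs[RT + i] = iregs[RA + i] + iregs[RB + i]
        res = iregs[RT + i]
        crs[8 + i] = {"eq": res == 0, "gt": res > 0, "lt": res < 0}
    return crs

regs = [0] * 32
regs[8:11] = [1, -2, 0]     # RA vector at r8
regs[16:19] = [1, 1, 0]     # RB vector at r16
add_crs = sv_add_rc1(regs, 0, 8, 16, 3)
```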

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.
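
A sketch of the subvector CR combining (illustrative only; as noted above, using OE to select the combining operation is conditional on that field being available):

```python
def subvector_cr(sub_elem_tests, use_and):
    """Combine the per-sub-element Rc=1 tests of one subvector into
    the single CR bit allocated to it, via bitwise AND or OR
    (use_and models the OE-selected behaviour)."""
    combined = sub_elem_tests[0]
    for t in sub_elem_tests[1:]:
        combined = (combined and t) if use_and else (combined or t)
    return combined

# vec3 subvector where only one sub-element passes the test
any_pass = subvector_cr([False, True, False], use_and=False)
all_pass = subvector_cr([False, True, False], use_and=True)
```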

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how scalar
OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO: generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for (i = 0; i < VL; i++)
            STATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!rd.isvec) break;
            if (rd.isvec)  { id += 1; }
            if (rs1.isvec) { irs1 += 1; }
            if (rs2.isvec) { irs2 += 1; }
            if (id == VL or irs1 == VL or irs2 == VL)
            {
                # end VL hardware loop
                STATE.srcoffs = 0; # reset
                return;
            }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, and each result placed into the Vector (or, if the destination
is a scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.
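
The six modes, including the splat case, can be modelled compactly (a sketch with predication omitted; a one-element list stands in for a scalar operand):

```python
def sv_add_modes(dest_vec, ra, rb, VL):
    """Scalar sources do not step; a scalar destination takes only
    the first result.  Vector destination with two scalar sources
    is the 'splat scalar result' case."""
    src1 = ra if len(ra) > 1 else ra * VL   # splat a scalar source
    src2 = rb if len(rb) > 1 else rb * VL
    results = [src1[i] + src2[i] for i in range(VL)]
    return results if dest_vec else results[:1]

splat = sv_add_modes(True, [3], [4], VL=4)         # RT.v = RA.s RB.s
mixed = sv_add_modes(True, [1, 2, 3], [10], VL=3)  # RT.v = RA.v RB.s
```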

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

Similar to the x86 "REX" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
abstraction of hardware-looping, in between the issue and execute phases,
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)

```
[[!inline raw="yes" pages="openpower/sv/preduce.py" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic for SVP64 to make is to then
limit the Reduction Sequence for all implementors based solely and
exclusively on what one specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract
concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

# Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the c programming language. The following should be taken
literally, and assume always a little-endian layout:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct -
effectively overlap.

It is extremely important for implementors to note that the only
circumstance where upper portions of an underlying 64-bit register are
zeroed out is when the destination is a scalar. The ideal register file
has byte-level write-enable lines, just like most SRAMs, in order to
avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.
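
The byte-addressable packing can be demonstrated with Python's `struct` module over a flat register file (a sketch assuming little-endian layout, mirroring the union above):

```python
import struct

def set_poly(regfile, reg, bitwidth, offset, val):
    """Write element 'offset' of width 'bitwidth' bits, packed
    contiguously starting at 64-bit register 'reg'."""
    fmt = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}[bitwidth]
    struct.pack_into(fmt, regfile, reg * 8 + offset * (bitwidth // 8), val)

regfile = bytearray(128 * 8)            # 128 64-bit registers
for i in range(4):                      # four 16-bit elements at r1
    set_poly(regfile, 1, 16, i, 0x1100 + i)
elements = struct.unpack_from("<4H", regfile, 1 * 8)
```

Note how the four 16-bit elements exactly fill the 8 bytes of r1, and a fifth element would spill naturally into r2: the registers are "starting points", not containers.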

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128 bit result, which clearly
cannot be stored in a single 64 bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add more complex explicit single instructions, with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
was not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for
Vectors, and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
not starting from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

          0..31     32..63
    r0  unchanged  unchanged
    r1   RT0.lo     RT1.lo
    r2   RT2.lo    unchanged
    r3  unchanged   RT0.hi
    r4   RT1.hi     RT2.hi
    r5  unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.
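
The placement of the HI halves can be checked with a small helper (a hypothetical illustration, not ISA pseudocode):

```python
def second_dest_location(RT, MAXVL, elwidth, i):
    """Element i of the implicit second destination lands at element
    index i+MAXVL counting from RT: return the 64-bit register it
    falls in and the bit offset within that register."""
    bits = (i + MAXVL) * elwidth
    return RT + bits // 64, bits % 64

# the worked example above: RT=1, MAXVL=5, dest elwidth=32
hi_locations = [second_dest_location(1, 5, 32, i) for i in range(3)]
```

This reproduces the table: RT0.hi at the upper half of r3, RT1.hi and RT2.hi in r4.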

*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]