[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]
# Partial Implementations

See [[sv/compliancy_levels]] for full details.

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given SPR.
**Embedded Scalar Scenario**

In this scenario an implementation does not wish to implement the Vectorisation
but simply wishes to take advantage of predication or other features
of SVP64, such as instructions that might only be available if prefixed.
Such an implementation would be entirely free to do so with the proviso
that:

* any attempts to call `setvl` shall either raise an illegal instruction
or be partially implemented to set SVSTATE correctly.
* if SVSTATE contains any value in any bit that is not supported
in hardware, an illegal instruction shall be raised when an SVP64
prefixed instruction is executed.
* if SVSTATE contains values requesting supported features at the time
that the prefixed instruction is executed then it is executed in
hardware as per specification, with no illegal exception trap raised.

Example, assuming that hardware implements scalar operations only,
and implements predication but not elwidth overrides:

    setvli r0, 4            # sets VL equal to 4
    sv.addi r5, r0, 1       # raises a 0x700 trap
    setvli r0, 1            # sets VL equal to 1
    sv.addi r5, r0, 1       # gets executed by hardware
    sv.addi/ew=8 r5, r0, 1  # raises a 0x700 trap
    sv.ori/sm=EQ r5, r0, 1  # executed by hardware

The first `sv.addi` raises an illegal instruction trap because
VL has been set to 4, and Vectorisation is not supported. Likewise
elwidth overrides, if requested, always raise illegal instruction
traps.

**Full implementation (current revision) scenario**

In this scenario, SVP64 is implemented as it stands in its entirety.
However a future revision or a competitor processor decides to also
implement portions of Quad-Precision VSX as SVP64-Vectorised.
Compatibility is **only** achieved if the earlier implementor raises
illegal instruction exceptions on **all** unimplemented opcodes within
the SVP64-Prefixed space, *even those marked by the Scalar Power ISA as
not needing to raise illegal instructions*.

Additionally a future version of the specification adds a new feature,
requiring an additional SPR. This SPR was, at the time of implementation,
marked as "Reserved". The early implementation raises an illegal
instruction trap when this SPR is read or written, and consequently has
an opportunity to trap-and-emulate the full capability of the revised
version of the SVP64 Specification.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add
may be performed by setting VL=8, and a one-instruction
1024-bit add by setting VL=16, and so on.

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).

Likewise, Load/Store Multiple makes no sense to keep: not only is the
equivalent provided by SV, the SV alternatives may be predicated as
well, making them far better suited to use in function calls and
context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B.
Table 9: Primary Opcode Map (opcode bits 0:5)

```
     | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
 000 |        |       | tdi   | twi   | EXT04 |        |       | mulli | 000
 001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
 011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
 100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
 101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
 110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
 111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
     | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.
EXT04 retains the scalar `madd*` operations but would have all PackedSIMD
(aka VSX) operations removed.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
     | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
 000 |        |       |       |       | EXT04 |        |       | mulli | 000
 001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
 010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
 011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
 100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
 101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
 110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
 111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
     | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

It is important to note that having an SVP64 meaning that differs from
the v3.0B Scalar meaning of the same opcode is highly undesirable: the
complexity in the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix, MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3 registers)
and if less than or equal to three then that instruction could be given an
EXTRA3 designation. Four or more is given an EXTRA2 designation because
there are only 9 bits available.

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extsw` and LD/ST);
however it was found that some 2 or 3 operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.

Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
might be chosen such that RA is indexed 0 (EXTRA bits 0-2), RB indexed 1
(EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST
with update) RA-as-a-source is given a **different** EXTRA index from
RA-as-a-result (because it is possible to do, and perceived to be
useful). Rc=1 co-results (CR0, CR1) are always given the same EXTRA
index as their main result (RT, FRT).

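The following is an illustrative sketch only (the ratified packings live
in the auto-generated tables linked above), showing how the three EXTRA3
indices for the `RM-2R-1W` profile described above might be packed into
the 9-bit field:

    # hypothetical EXTRA3 packing for `add RT, RA, RB` (RM-2R-1W):
    # RA at index 0 (bits 0-2), RB at index 1 (bits 3-5),
    # RT at index 2 (bits 6-8)
    def pack_extra3(ra_spec, rb_spec, rt_spec):
        return ((ra_spec & 0b111) |
                ((rb_spec & 0b111) << 3) |
                ((rt_spec & 0b111) << 6))
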
Fifthly, in an automated process the results of the analysis
were output in CSV Format for use in machine-readable form
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This
indicates that *both* sz *and* dz are set to the same value.

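The stepping rules may be sketched in Python. This is a minimal
illustration only (the zeroing semantics themselves, i.e. substituting
zero for masked-out elements, are omitted; only the srcstep/dststep
schedule is shown), assuming that a step skips masked-out bits when its
zeroing flag is clear:

    def schedule(VL, mask, sz, dz):
        # yields (srcstep, dststep) pairs for single-predication
        srcstep, dststep = 0, 0
        while srcstep < VL and dststep < VL:
            # when zeroing is clear, skip over masked-out elements
            while not sz and srcstep < VL and not (mask >> srcstep) & 1:
                srcstep += 1
            while not dz and dststep < VL and not (mask >> dststep) & 1:
                dststep += 1
            if srcstep >= VL or dststep >= VL:
                break
            yield srcstep, dststep
            srcstep += 1
            dststep += 1

The three example schedules below may be reproduced with
`schedule(4, 0b1101, sz, dz)`.
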
Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.

# Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
those that take RA as both a src and dest. These are not
1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, as sketched below.

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (on steroids).

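A minimal Python sketch of twin-predicated `sv.mv` stepping, assuming
non-zeroing (sz=0, dz=0) so that each mask independently skips its own
zero bits:

    def twin_mv(VL, regs, RT, RA, srcmask, dstmask):
        srcstep, dststep = 0, 0
        while srcstep < VL and dststep < VL:
            # each step independently skips its own masked-out elements
            while srcstep < VL and not (srcmask >> srcstep) & 1:
                srcstep += 1
            while dststep < VL and not (dstmask >> dststep) & 1:
                dststep += 1
            if srcstep < VL and dststep < VL:
                regs[RT + dststep] = regs[RA + srcstep]
            srcstep += 1
            dststep += 1

With srcmask selecting an arbitrary subset and dstmask all-ones this
behaves as VCOMPRESS; the reverse arrangement gives VEXPAND; a single
bit in srcmask or dstmask gives VEXTRACT or VINSERT respectively.
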
# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
also the scalar destination (which may be informally termed
the "accumulator")
* That the source register type is the same as the destination register
type identified as the "accumulator". Scalar reduction on `cmp`,
`setb` or `isel` makes no sense for example because of the mixture
between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite end
of the Vector and push a cumulative series of overlapping add operations
into the Execution units of the underlying hardware.

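For illustration, the Reverse-Gear variant of the earlier scalar-reduce
pseudocode simply issues the same element operations in the opposite
order (a sketch only; predication omitted):

    # sv.add/mr/rg RT, RA, RB with RT==RA: elements issued in reverse
    for i in reversed(range(VL)):
        iregs[RA] += iregs[RB+i]
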
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered just another way of issuing
instructions in Program Order). In this way, after return from interrupt,
the scalar mapreduce may continue where it left off. This provides
"precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction; it does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
ordinary one. Exceptions occur "as normal". However for elements 1
and above, if an exception would occur, then VL is **truncated** to the
previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
CR-creating operation produces a result (including cmp). Similar to
branch, an analysis of the CR is performed and if the test fails, the
vector operation terminates and discards all element operations
above the current one (and the current one if VLi is not set),
and VL is truncated to either
the *previous* element or the current one, depending on whether
VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

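A minimal sketch of the CR-driven truncation rule (non-RC1, i.e. results
are stored), assuming hypothetical helpers `element_op` (computes one
element result plus the selected CR bit) and `store_element`:

    def cr_ffirst(VL, inv, VLi):
        for i in range(VL):
            result, crbit = element_op(i)
            if crbit == inv:                  # test fails on this element
                if VLi:
                    store_element(i, result)  # VLi: current element kept
                SVSTATE.VL = i + 1 if VLi else i
                return
            store_element(i, result)          # test passed: result kept
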
The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
to zero. This is the only means in the entirety of SV that VL may be set
to zero (with the exception of via the SVSTATE SPR). When VL is set
zero due to the first element failing the CR bit-test, all subsequent
vectorised operations are effectively `nops` which is
*precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins the subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied to CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised
and predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

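A minimal sketch of the merged behaviour, assuming a hypothetical
`cr_test` helper that evaluates the selected CR bit of each element
result against the mode's test (zeroing omitted):

    for i in range(VL):
        if (predval >> i) & 1:                # normal predication first
            result = iregs[RA+i] + iregs[RB+i]
            if cr_test(result):               # CR Field test passes...
                iregs[RT+i] = result          # ...so the write proceeds
            # else: treated as if the predicate bit had been zero
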
# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

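Expressed in LSB0-numbered Python, a sketch consistent with the identity
above (treating the v3.0B CR as a 32-bit integer whose most significant
nibble is CR[32:35]):

    def CR_field(CR, n):
        # CR{n} occupies CR[32+(7-n)*4 : 35+(7-n)*4] in MSB0 terms,
        # i.e. the n-th nibble counting from the least significant end
        return (CR >> (4*n)) & 0b1111
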
For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.

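A worked example of the above, in vector mode: take `BA = 0b00110` with
an EXTRA3 spec of `0b110` (spec[0]=1, spec[1:2]=0b10):

    CR_index  = 7 - (0b00110 >> 2)       # = 6 (BE correction)
    # spec[0] = 1, so vector mode:
    CR_index  = (6 << 4) | (0b10 << 2)   # = 104, stepping in fours
    bit_index = 3 - (0b00110 & 0b11)     # = 1: same decoding as v3.0B
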
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
means that CRs may be read or written in aligned batches of 32 bits
(8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
(except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations; see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.
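
A sketch of one possible per-subvector Rc=1 computation for a vec2,
assuming (purely for illustration) that OE=1 selects AND-combining of
the sub-element tests and OE=0 selects OR; the CR offset `offs` is also
illustrative only:

    for i in range(VL):
        eq_x = iregs[RT+i].x == 0          # per-sub-element test
        eq_y = iregs[RT+i].y == 0
        if OE:                             # AND of the subvector tests
            CRs{offs+i}.eq = eq_x and eq_y
        else:                              # OR of the subvector tests
            CRs{offs+i}.eq = eq_x or eq_y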

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to x86 "rex" prefix.

For actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Proposed Parallel-reduction algorithm

**This algorithm contains a MV operation and may NOT be used. Removal
of the MV operation may be achieved by using index-redirection as was
achieved in DCT and FFT REMAP**

```
/// reference implementation of proposed SimpleV reduction semantics.
///
// reduction operation -- we still use this algorithm even
// if the reduction operation isn't associative or
// commutative.
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
/// XXX `pred` is a user-visible Vector Condition register XXXX
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
///
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX VIOLATION OF SVP64 DESIGN XXX
                XXX vec[i] = vec[other]; XXX
                XXX VIOLATION OF SVP64 DESIGN XXX
            pred[i] |= other_pred;
```

The first principle in SVP64 being violated is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues. The above pseudocode
conditionally changes not only the type of element operation issued
(a MV in some cases) but also the number of arguments (2 for a MV).
At the very least, for Vertical-First Mode this will result in
unanticipated and unexpected behaviour (maximising "surprises" for
programmers) in the middle of loops, that will be far too hard to explain.

The second principle being violated by the above algorithm is the expectation
that temporary storage is available for a modified predicate: there is no
such space, and predicates are read-only to reduce complexity at the
micro-architectural level.
SVP64 is founded on the principle that all operations are
"re-entrant" with respect to interrupts and exceptions: SVSTATE must
be saved and restored alongside PC and MSR, but nothing more. It is perfectly
fine to have context-switching back to the operation be somewhat slower,
through "reconstruction" of temporary internal state based on what SVSTATE
contains, but nothing more.

An alternative algorithm is therefore required that does not perform MVs,
and does not require additional state to be saved on context-switching.

```
def reduce( vl, vec, pred ):
    pred = copy(pred) # must not damage predicate
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi[j] = i
            j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
                vi[ir] = vi[other]     # index redirection, no MV
            pred[ir] |= other_pred     # reconstructed on context-switch
        step *= 2
```

In this version the need for an explicit MV is made unnecessary by instead
leaving elements *in situ*. The internal modifications to the predicate may,
due to the reduction being entirely deterministic, be "reconstructed"
on a context-switch. This may make some implementations slower.

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic for SVP64 to make is to then
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract concepts.
It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert MV operations
if necessary or desired, to give the level of efficiency or performance
required.*

# Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
      if (RT.isvec) { ird  += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128 bit result, which clearly
cannot be stored in a single 64 bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
was not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
      if (RT.isvec) { ird  += 1; }
      if (RA.isvec) { irs1 += 1; }
      if (RB.isvec) { irs2 += 1; }
      if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

          0..31      32..63
    r0  unchanged  unchanged
    r1    RT0.lo     RT1.lo
    r2    RT2.lo   unchanged
    r3  unchanged    RT0.hi
    r4    RT1.hi     RT2.hi
    r5  unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.

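The placement may be sketched numerically. A small helper, with the
assumptions of the example above (elwidth=32, MAXVL=5), computing where
the HI half of element `i` lands:

    def hi_half_location(RT, i, MAXVL=5, elwidth=32):
        # the hi half is the element at index i+MAXVL counting from RT
        bits = elwidth * (i + MAXVL)
        return (RT + bits // 64,   # register number
                bits % 64)         # bit offset within that register

    # hi_half_location(1, 0) == (3, 32): RT0.hi at r3, bits 32..63
    # hi_half_location(1, 1) == (4, 0):  RT1.hi at r4, bits 0..31
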
*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]
