# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574>
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
* <https://bugs.libre-soc.org/show_bug.cgi?id=697>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO and other global "accumulation" flags (CR.OV) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO and CR.OV behaviour is disregarded (including
in `cmp` instructions). XER is simply neither read nor written.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

An interesting side-effect of this decision is that the OE flag is now
free for other uses when SV Prefixing is used.

Regarding XER.CA: this does not fit either, as it was designed for a scalar
ISA. Instead, both carry-in and carry-out go into the CR.so bit of a given
Vector element. This provides a means to perform large parallel batches
of Vectorised carry-capable additions. crweird instructions can be used
to transfer the CRs in and out of an integer, where bitmanipulation
may be performed to analyse the carry bits (including carry lookahead
propagation) before continuing with further parallel additions.

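By way of illustration only (not a normative definition), a minimal sketch
of the per-element carry idea, in the same pseudocode style as the rest of
this appendix, assuming 64-bit elements and Vectorised CR Fields starting
at CR8:

    # hedged sketch: carry-in and carry-out held per element in CR.so
    for i in range(VL):
        carry_in = CRs{8+i}.so
        result = iregs[RA+i] + iregs[RB+i] + carry_in
        iregs[RT+i] = result & 0xFFFF_FFFF_FFFF_FFFF   # 64-bit element result
        CRs{8+i}.so = (result >> 64) & 1               # carry-out back into CR.so

A crweird-style transfer of those CR.so bits into a GPR would then allow
carry-lookahead analysis to be performed with ordinary bitmanipulation
before the next batch of additions.
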
# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

As mentioned above, OE=1 is not applicable in SV, freeing this bit for
alternative uses. Additionally, Vectorisation of the VSX SIMD system
likewise makes no sense whatsoever. SV *replaces* VSX and provides,
at the very minimum, predication (which VSX was designed without).
Thus all VSX Major Opcodes - all of them - are "unused" and must raise
illegal instruction exceptions in SV Prefix Mode.

Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
have: not only are they provided by SV, the SV alternatives may
be predicated as well, making them far better suited to use in function
calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B,
Table 9: Primary Opcode Map (opcode bits 0:5).

|     | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111   |
|-----|--------|-------|-------|-------|-------|--------|-------|-------|
| 000 |        |       | tdi   | twi   | EXT04 |        |       | mulli |
| 001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis |
| 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm |
| 011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 |
| 100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  |
| 101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  |
| 110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu |
| 111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 |

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

|     | 000    | 001  | 010   | 011   | 100   | 101    | 110   | 111   |
|-----|--------|------|-------|-------|-------|--------|-------|-------|
| 000 |        |      |       |       |       |        |       | mulli |
| 001 | subfic |      | cmpli | cmpi  | addic | addic. | addi  | addis |
| 010 |        |      |       | EXT19 | rlwimi| rlwinm |       | rlwnm |
| 011 | ori    | oris | xori  | xoris | andi. | andis. | EXT30 | EXT31 |
| 100 | lwz    | lwzu | lbz   | lbzu  | stw   | stwu   | stb   | stbu  |
| 101 | lhz    | lhzu | lha   | lhau  | sth   | sthu   |       |       |
| 110 | lfs    | lfsu | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu |
| 111 |        |      | EXT58 | EXT59 |       | EXT61  |       | EXT63 |

It is important to note that having a v3.0B Scalar opcode whose meaning
differs from its SVP64 counterpart is highly undesirable: the complexity
in the decoder is greatly increased.

# Single Predication

This is a standard mode normally found in Vector ISAs. Every element in every
source Vector and in the destination uses the same bit of one single
predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both srcstep
and dststep: unlike Twin-Predication the two must be equal at all times.

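A minimal (non-normative) sketch, in the same pseudocode style, showing that
under Single Predication a single loop index serves both source and
destination:

    # sketch: srcstep and dststep increment together and remain equal
    for i in range(VL):
        srcstep = dststep = i          # never diverge under Single Predication
        if predval & (1 << i):         # one predicate bit covers src and dest
            iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
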
# Twin Predication

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`.

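A much-simplified, non-normative sketch of the Twin Predication schedule for
a 1-src, 1-dest operation such as `sv.mv` (zeroing disregarded; `srcmask`
and `dstmask` are the source and destination predicates):

    # sketch: srcstep and dststep advance independently, each skipping
    # its own masked-out elements
    srcstep = 0
    dststep = 0
    while srcstep < VL and dststep < VL:
        while srcstep < VL and not (srcmask & (1 << srcstep)):
            srcstep += 1                 # skip masked-out source elements
        while dststep < VL and not (dstmask & (1 << dststep)):
            dststep += 1                 # skip masked-out destination elements
        if srcstep < VL and dststep < VL:
            iregs[RT+dststep] = iregs[RA+srcstep]
        srcstep += 1
        dststep += 1

Sparse predicates on the source side give compress-like behaviour; sparse
predicates on the destination side give expand-like behaviour.
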
# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction,
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation. For Floating Point however parallelisation is not
permitted, due to different results being obtained if the reduction is not
executed in strict sequential order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this and observe some
conventions.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements in the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must be
set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.

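Purely as an illustration (the assembler spelling here is indicative, not
normative), combining mapreduce with reverse gear on a scalar accumulator
amounts to walking the Vector backwards:

    # sv.add/mr/rg r3, r10.v, r3  -- sketch only
    for i in reversed(range(VL)):
        iregs[3] = iregs[3] + iregs[10+i]  # accumulate from the highest element down
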
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required, as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations at and
  above the current one, and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

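A much-simplified, non-normative sketch of the CR-driven variant (RC1,
predication and zeroing all omitted), where `inv` and the CR bit to be
tested come from the instruction:

    # sketch of CR-driven data-dependent fail-on-first
    for i in range(VL):
        result = op(iregs[RA+i], iregs[RB+i])
        crfield = compute_cr(result)       # eq/gt/lt/so, as for Rc=1
        if test_bit(crfield) == inv:       # the selected bit fails the test
            VL = i                         # truncate to the previous element
            break                          # (VLi, when available, gives VL = i+1)
        iregs[RT+i] = result               # passing elements are stored as usual
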
The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].

## pred-result mode on CR ops

CR operations (mtcr, crand, cror) may be Vectorised and
predicated, and may also have pred-result mode applied to them.
Vectorisation applies to 4-bit CR Fields which are treated as
elements, not the individual bits of the 32-bit CR.
CR ops and how to identify them are described in [[sv/cr_ops]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1B specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3:4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations
that have no explicit EXTRA to cover the CR, the CR is Vectorised if the
destination is Vectorised. Likewise if the destination is scalar then so
is the CR.

When vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations; see [[sv/cr_int_predication]].

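As a non-normative sketch, an AND-style accumulation of the `eq` bits of
the CR Vector (the equivalent of chained `crand` in mapreduce mode) answers
the question "were all results zero?", playing the role that CR6 plays for
VSX:

    # sketch: cumulative analysis of a Vector of CRs starting at CR8
    all_zero = CRs{8}.eq
    for i in range(1, VL):
        all_zero = all_zero and CRs{8+i}.eq   # chained crand, in effect
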
Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]. some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit
of predicate is allocated per subvector; likewise only one CR is allocated
per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR
or AND behaviour.

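For illustration only, a sketch of Rc=1 with SUBVL=2, under the assumption
that the otherwise-ignored OE field selects whether the two sub-element
tests are OR-combined or AND-combined into the single CR Field:

    # sketch: one CR Field per subvector (vec2), sub-element tests combined
    for i in range(VL):
        rx = op(iregs[RA+i].x, iregs[RB+i].x)
        ry = op(iregs[RA+i].y, iregs[RB+i].y)
        iregs[RT+i].x = rx
        iregs[RT+i].y = ry
        if combine_with_or:                    # selected via OE (assumption)
            CRs{8+i}.eq = (rx == 0) or (ry == 0)
        else:
            CRs{8+i}.eq = (rx == 0) and (ry == 0)
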
### Table of CR fields

CR[i] is the notation used by the OpenPower spec to refer to CR field #i,
so FP instructions with Rc=1 write to CR[1] aka SVCR1_000.

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=64, elwidth=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED**: see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO: generate table, which will be here: [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

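A brief sketch of the "splat scalar result" case (RT a Vector, RA and RB
both Scalar), for illustration only:

    # sketch: RT.v = RA.s + RB.s places the same sum in every
    # nonpredicated destination element
    for i in range(VL):
        if predval & (1 << i):
            iregs[RT+i] = iregs[RA] + iregs[RB]
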
See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=reduce/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
* spred={reg spec}

This is similar to the x86 "rex" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns OR
  - pm=RC1 OR pm=~RC1
* fail-first:
  - ff=lt/gt/le/ge/eq/ne/so/ns OR
  - ff=RC1 OR ff=~RC1
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Proposed Parallel-reduction algorithm

```
/// reference implementation of proposed SimpleV reduction semantics.
///
/// we still use this algorithm even if the reduction operation
/// isn't associative or commutative.
///
/// `temp_pred` is a user-visible Vector Condition register
///
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                vec[i] = vec[other];
            pred[i] |= other_pred;

# alternative version which skips non-predicated elements entirely
def reduce(vl, vec, pred):
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi[j] = i
            j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            i = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[i] += vec[other]
            pred[i] |= other_pred
        step *= 2
```