1 # Appendix
2
3 * <https://bugs.libre-soc.org/show_bug.cgi?id=574>
4 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
5
This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page free to focus on its primary purpose:
outlining the instruction format.
8
9 Table of contents:
10
11 [[!toc]]
12
13 # XER, SO and other global flags
14
15 Vector systems are expected to be high performance. This is achieved
16 through parallelism, which requires that elements in the vector be
17 independent. XER SO and other global "accumulation" flags (CR.OV) cause
18 Read-Write Hazards on single-bit global resources, having a significant
19 detrimental effect.
20
21 Consequently in SV, XER.SO and CR.OV behaviour is disregarded (including in `cmp` instructions). XER is
22 simply neither read nor written. This includes when `scalar identity behaviour` occurs. If precise OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1 instructions should be used without an SV Prefix.
23
24 An interesting side-effect of this decision is that the OE flag is now free for other uses when SV Prefixing is used.
25
Regarding XER.CA: this, too, does not fit, having been designed for a scalar ISA. Instead, both carry-in and carry-out go into the CR.so bit of a given Vector element. This provides a means to perform large parallel batches of Vectorised carry-capable additions. The crweird instructions can be used to transfer the CRs in and out of an integer, where bitmanipulation may be performed to analyse the carry bits (including carry lookahead propagation) before continuing with further parallel additions.
27
28 # v3.0B/v3.1B relevant instructions
29
30 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU / CPU ISA.
31
32 As mentioned above, OE=1 is not applicable in SV, freeing this bit for alternative uses. Additionally, Vectorisation of the VSX SIMD system likewise makes no sense whatsoever. SV *replaces* VSX and provides, at the very minimum, predication (which VSX was designed without). Thus all VSX Major Opcodes - all of them - are "unused" and must raise illegal instruction exceptions in SV Prefix Mode.
33
Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to have: not only is their functionality provided by SV, the SV alternatives may also be predicated, making them far better suited to use in function calls and context-switching.
35
36 Additionally, some v3.0/1 instructions simply make no sense at all in a Vector context: `twi` and `tdi` fall into this category, as do branch operations as well as `sc` and `scv`. Here there is simply no point trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions should be called instead.
37
Fortuitously this leaves several Major Opcodes free for use by SV to fit alternative future instructions. In a 3D context this means Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST operations, and others critical to an efficient, effective 3D GPU and VPU ISA. Given that such instructions are included as standard in other commercially-successful GPU ISAs, it is likewise critical that a 3D GPU/VPU based on svp64 also provide them.
39
40 Note however that svp64 is stand-alone and is in no way critically dependent on the existence or provision of 3D GPU or VPU instructions. These should be considered extensions, and their discussion and specification is out of scope for this document.
41
42 Note, again: this is *only* under svp64 prefixing. Standard v3.0B / v3.1B is *not* altered by svp64 in any way.
43
44 ## Major opcode map (v3.0B)
45
46 This table is taken from v3.0B.
47 Table 9: Primary Opcode Map (opcode bits 0:5)
48
49 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
50 000 | | | tdi | twi | EXT04 | | | mulli | 000
51 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
52 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm | | rlwnm | 010
53 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
54 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
55 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw | 101
56 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
57 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 | 111
58 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
59
60 ## Suitable for svp64
61
62 This is the same table containing v3.0B Primary Opcodes except those that make no sense in a Vectorisation Context have been removed. These removed POs can, *in the SV Vector Context only*, be assigned to alternative (Vectorised-only) instructions, including future extensions.
63
64 Note, again, to emphasise: outside of svp64 these opcodes **do not** change. When not prefixed with svp64 these opcodes **specifically** retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.
65
66 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
67 000 | | | | | | | | mulli | 000
68 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis | 001
69 010 | | | | EXT19 | rlwimi| rlwinm | | rlwnm | 010
70 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 | 011
71 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu | 100
72 101 | lhz | lhzu | lha | lhau | sth | sthu | | | 101
73 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu | 110
74 111 | | | EXT58 | EXT59 | | EXT61 | | EXT63 | 111
75 | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111
76
77 # Twin Predication
78
This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
83
84 * VSPLAT (a single scalar distributed across a vector)
85 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
86 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
87 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
88 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
89
90 Those patterns (and more) may be applied to:
91
92 * mv (the usual way that V\* ISA operations are created)
93 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest; they are 2-src, 1-dest)
97 * LD and ST (treating AGEN as one source)
98 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
99 * Condition Register ops mfcr, mtcr and other similar
100
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
103
104 Additional unusual capabilities of Twin Predication include a back-to-back
105 version of VCOMPRESS-VEXPAND which is effectively the ability to do
106 sequentially ordered multiple VINSERTs. The source predicate selects a
107 sequentially ordered subset of elements to be inserted; the destination predicate specifies the sequentially ordered recipient locations.
108 This is equivalent to
109 `llvm.masked.compressstore.*`
110 followed by
111 `llvm.masked.expandload.*`
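
For illustration, a simplified pseudocode sketch of a twin-predicated `mv` (both operands assumed to be Vectors here; zeroing and elwidth overrides omitted):

    function op_mv(rd, rs)
        ps = get_pred_val(FALSE, rs)   # source predicate
        pd = get_pred_val(FALSE, rd)   # destination predicate
        src = 0; dst = 0
        while src < VL and dst < VL:
            # skip over masked-out source and destination elements
            if not (ps & (1 << src)): src += 1; continue
            if not (pd & (1 << dst)): dst += 1; continue
            ireg[rd + dst] = ireg[rs + src]
            src += 1
            dst += 1

With an all-1s source predicate and a sparse destination predicate this behaves like VEXPAND; with the predicates swapped it behaves like VCOMPRESS.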
112
113
114 # Rounding, clamp and saturate
115
116 see [[av_opcodes]].
117
118 To help ensure that audio quality is not compromised by overflow,
119 "saturation" is provided, as well as a way to detect when saturation
120 occurred if desired (Rc=1). When Rc=1 there will be a *vector* of CRs, one CR per
121 element in the result (Note: this is different from VSX which has a
122 single CR per block).
123
When N=0 the result is saturated to within the maximum range of an
unsigned value. For integer ops this will be 0 to 2^elwidth-1. Similar
logic applies to FP operations, with the result being saturated to the
maximum rather than returning INF, and at the minimum saturated to +0.0.
128
129 When N=1 the same occurs except that the result is saturated to the min
130 or max of a signed result, and for FP to the min and max value rather than returning +/- INF.
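
As an illustrative sketch only (the authoritative definition is per-operation), integer saturation at the element width may be viewed as:

    def saturate(result, elwidth, N):
        if N == 0:   # unsigned range: 0 to 2^elwidth-1
            lo, hi = 0, (1 << elwidth) - 1
        else:        # signed range: -2^(elwidth-1) to 2^(elwidth-1)-1
            lo, hi = -(1 << (elwidth - 1)), (1 << (elwidth - 1)) - 1
        clamped = min(max(result, lo), hi)
        sat = (clamped != result)  # goes into the element's CR.so when Rc=1
        return clamped, sat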
131
132 When Rc=1, the CR "overflow" bit is set on the CR associated with the
133 element, to indicate whether saturation occurred. Note that due to
134 the hugely detrimental effect it has on parallel processing, XER.SO is
135 **ignored** completely and is **not** brought into play here. The CR
136 overflow bit is therefore simply set to zero if saturation did not occur,
137 and to one if it did.
138
Note also that saturation on operations that produce a carry output is prohibited, due to the conflicting use of the CR.so bit for storing whether saturation occurred.
140
Post-analysis of the Vector of CRs to find out if any given element hit
saturation may be done using a mapreduced CR op (cror), or by using the
new crweird instruction, transferring the relevant CR bits to a scalar
integer and testing it for nonzero. See [[sv/cr_int_predication]].
145
146 Note that the operation takes place at the maximum bitwidth (max of src and dest elwidth) and that truncation occurs to the range of the dest elwidth.
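
For example (illustrative numbers only), with a 16-bit source elwidth and an 8-bit destination elwidth, an unsigned (N=0) saturating add behaves roughly as:

    tmp  = (src0 + src1) & 0xFFFF   # operation performed at the larger width
    dest = min(tmp, 0xFF)           # then saturated into the 8-bit dest range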
147
148 # Reduce mode
149
150 There are two variants here. The first is when the destination is scalar
151 and at least one of the sources is Vector. The second is more complex
152 and involves map-reduction on vectors.
153
154 The first defining characteristic distinguishing Scalar-dest reduce mode
155 from Vector reduce mode is that Scalar-dest reduce issues VL element
156 operations, whereas Vector reduce mode performs an actual map-reduce
157 (tree reduction): typically `O(VL log VL)` actual computations.
158
159 The second defining characteristic of scalar-dest reduce mode is that it
160 is, in simplistic and shallow terms *serial and sequential in nature*,
161 whereas the Vector reduce mode is definitely inherently paralleliseable.
162
The reason why scalar-dest reduce mode is only "simplistically" serial and
sequential is that in certain circumstances (such as an `OR` operation
or a MIN/MAX operation) the reduction may in fact be parallelised.
166
167 ## Scalar result reduce mode
168
169 In this mode, one register is identified as being the "accumulator".
170 Scalar reduction is thus categorised by:
171
* one of the sources is a Vector
* the destination is a scalar
* optionally but most usefully, one source register is also the destination
* the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` is not possible, for example, because of the mixture
  between CRs and GPRs.
179
Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.
183
    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA
187
188 However, *unless* the operation is marked as "mapreduce", SV ordinarily
189 **terminates** at the first scalar operation. Only by marking the
190 operation as "mapreduce" will it continue to issue multiple sub-looped
191 (element) instructions in `Program Order`.
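
The difference may be sketched as follows, for the `ADD r3, r10.v, r3` example above (RT==RA==r3, RB==r10):

    # not marked "mapreduce": the destination is scalar, so the loop
    # terminates after the first element
    iregs[RT] = iregs[RA] + iregs[RB]

    # marked "mapreduce": all VL elements are issued, in Program Order
    for i in range(VL):
        iregs[RT] = iregs[RT] + iregs[RB+i]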
192
193 Other examples include shift-mask operations where a Vector of inserts
194 into a single destination register is required, as a way to construct
195 a value quickly from multiple arbitrary bit-ranges and bit-offsets.
196 Using the same register as both the source and destination, with Vectors
197 of different offsets masks and values to be inserted has multiple
198 applications including Video, cryptography and JIT compilation.
199
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract, or,
in the case of divide, to perform a multiply cascade followed by a final
divide, for better accuracy.
205
206 ## Vector result reduce mode
207
1. limited to single predicated dual src operations (add RT, RA, RB).
   Triple source operations are prohibited (fma).
2. limited to operations that make sense. Divide is excluded, as is
   subtract (X - Y - Z produces different answers depending on the order)
   and asymmetric CRops (crandc, crorc). Sane operations:
   multiply, min/max, add, logical bitwise OR, most other CR ops.
   Operations that do *not* have the same source and dest register type
   are also excluded (isel, cmp). Operations involving carry or overflow
   (XER.CA / OV) are also prohibited.
3. the destination is a vector but the result is stored, ultimately,
   in the first nonzero predicated element. All other nonzero predicated
   elements are undefined. *This includes the CR vector* when Rc=1.
4. implementations may use any ordering and any algorithm to reduce
   down to a single result. However it must be equivalent to a straight
   application of mapreduce. The destination vector (except masked-out
   elements) may be used for storing any intermediate results; these may
   be left in the vector (undefined).
5. CRM applies when Rc=1. When CRM is zero, the CR associated with
   the result is regarded as a "some results met standard CR result
   criteria". When CRM is one, this changes to "all results met standard
   CR criteria".
6. implementations MAY use destoffs as well as srcoffs (see [[sv/sprs]])
   in order to store sufficient state to resume operation should an
   interrupt occur. This is also why implementations are permitted to use
   the destination vector to store intermediary computations.
7. *Predication may be applied*. Zeroing mode is not an option. Masked-out
   inputs are ignored; masked-out elements in the destination vector are
   unaltered (not used for the purposes of intermediary storage); the
   scalar result is placed in the first available unmasked element.
237
238 Pseudocode for the case where RA==RB:
239
    result = op(iregs[RA], iregs[RA+1])
    CR = analyse(result)
    for i in range(2, VL):
        result = op(result, iregs[RA+i])
        CRnew = analyse(result)
        if Rc=1:
            if CRM:
                CR = CR bitwise AND CRnew   # "all results met criteria"
            else:
                CR = CR bitwise OR CRnew    # "some results met criteria"
250
251 TODO: case where RA!=RB which involves first a vector of 2-operand
252 results followed by a mapreduce on the intermediates.
253
Note that when SVM is clear and SUBVL!=1 the sub-elements are *independent*, i.e. they
are mapreduced per *sub-element* as a result. Illustration with a vec2:
256
    result.x = op(iregs[RA].x, iregs[RA+1].x)
    result.y = op(iregs[RA].y, iregs[RA+1].y)
    for i in range(2, VL):
        result.x = op(result.x, iregs[RA+i].x)
        result.y = op(result.y, iregs[RA+i].y)
262
263 Note here that Rc=1 does not make sense when SVM is clear and SUBVL!=1.
264
265 When SVM is set and SUBVL!=1, another variant is enabled: horizontal subvector mode. Example for a vec3:
266
    for i in range(VL):
        result = op(iregs[RA+i].x, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result
272
273 In this mode, when Rc=1 the Vector of CRs is as normal: each result element creates a corresponding CR element.
274
275 # Fail-on-first
276
Data-dependent fail-on-first has two distinct variants: one for LD/ST,
the other for arithmetic operations (actually, CR-driven). Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
281
282 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
283 ordinary one. Exceptions occur "as normal". However for elements 1
284 and above, if an exception would occur, then VL is **truncated** to the
285 previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and, if the test fails, the
  vector operation terminates and discards all element operations at and
  above the current one, and VL is truncated to the *previous* element.
  Thus the new VL comprises a contiguous vector of results, all of which
  pass the testing criteria (equal to zero, less than zero). See the
  sketch below.
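
A rough sketch of the CR-driven variant (predication, elwidth overrides and the RC1 mode omitted; for illustration only):

    for i in range(VL):
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result)          # eq/lt/gt etc.
        if CRnew[BO[0:1]] != BO[2]:      # CR bit-test, as with branches
            VL = i                       # truncate to the previous element
            break                        # this and later elements discarded
        crregs[offs+i] = CRnew           # Rc=1: store the CR
        iregs[RT+i] = result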
293
The CR-based data-driven fail-on-first is new and not found in ARM SVE
or RVV. It is extremely useful for reducing instruction count, however
it requires speculative execution involving modifications of VL to achieve
high-performance implementations. An additional mode (RC1=1) effectively turns what would otherwise be an arithmetic operation into a type of `cmp`. The CR is stored (and the CR.eq bit tested). If the test fails then the Vector is truncated and the loop ends. Note that when RC1=1 the result elements are never stored, only the CRs.
298
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.
304
305 One extremely important aspect of ffirst is:
306
* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SV.STATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops`, which is
  *precisely the desired and intended behaviour*.
315
316 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily to a nonzero value for any implementation-specific reason. For example: it is perfectly reasonable for implementations to alter VL when ffirst LD or ST operations are initiated on a nonaligned boundary, such that within a loop the subsequent iteration of that loop begins subsequent ffirst LD/ST operations on an aligned boundary. Likewise, to reduce workloads or balance resources.
317
CR-based data-dependent ffirst on the other hand MUST NOT truncate VL arbitrarily. This is because it is a precise test on which algorithms will rely.
319
320 # pred-result mode
321
322 This mode merges common CR testing with predication, saving on instruction count. Below is the pseudocode excluding predicate zeroing and elwidth overrides.
323
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i):
            continue
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if Rc=1 or RC1:
            crregs[offs+i] = CRnew
        # now test CR, similar to branch
        if RC1 or CRnew[BO[0:1]] != BO[2]:
            continue # test failed: cancel store
        # result optionally stored but CR always is
        iregs[RT+i] = result
338
339 The reason for allowing the CR element to be stored is so that post-analysis
340 of the CR Vector may be carried out. For example: Saturation may have occurred (and been prevented from updating, by the test) but it is desirable to know *which* elements fail saturation.
341
342 Note that RC1 Mode basically turns all operations into `cmp`. The calculation is performed but it is only the CR that is written. The element result is *always* discarded, never written (just like `cmp`).
343
Note that predication is still respected. Predicate zeroing is slightly different: elements that fail the CR test *or* are masked out are zero'd (see the sketch below).
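
A sketch of the zeroing variant (illustrative only), mirroring the pseudocode above:

    for i in range(VL):
        if predicate_masked_out(i):
            iregs[RT+i] = 0       # masked-out: zeroed rather than skipped
            continue
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result)
        if Rc=1:
            crregs[offs+i] = CRnew
        if CRnew[BO[0:1]] != BO[2]:
            iregs[RT+i] = 0       # failed the CR test: also zeroed
        else:
            iregs[RT+i] = result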
345
346 ## pred-result mode on CR ops
347
Yes, really: CR operations (mtcr, crand, cror) may be Vectorised and predicated, and pred-result mode may also be applied. In this case, the Vectorisation applies to the batch of 4 bits, i.e. it is not the CR individual bits that are treated as the Vector, but the CRs themselves (CR0, CR8, CR9...).
349
350 Thus after each Vectorised operation (crand) a test of the CR result can in fact be performed.
351
352 # CR Operations
353
354 CRs are slightly more involved than INT or FP registers due to the
355 possibility for indexing individual bits (crops BA/BB/BT). Again however
356 the access pattern needs to be understandable in relation to v3.0B / v3.1B
357 numbering, with a clear linear relationship and mapping existing when
358 SV is applied.
359
360 ## CR EXTRA mapping table and algorithm
361
362 Numbering relationships for CR fields are already complex due to being
363 in BE format (*the relationship is not clearly explained in the v3.0B
364 or v3.1B specification*). However with some care and consideration
365 the exact same mapping used for INT and FP regfiles may be applied,
366 just to the upper bits, as explained below.
367
In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (2:4)
select one of the 8 CRs; the bottom 2 bits (0:1) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:
372
    CR_index = 7-(BA>>2)       # top 3 bits but BE
    bit_index = 3-(BA & 0b11)  # low 2 bits but BE
    CR_reg = CR{CR_index}      # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
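
For example (a worked illustration): BA = 0b00101 refers, in v3.0B terms, to CR field 1, bit 1 (CR1.GT), giving CR_index = 7-1 = 6 and bit_index = 3-1 = 2 in the internal numbering used above.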
378
379 When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
380 applies, **not** the CR\_bit portion (bits 0:1):
381
    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[2]:
        # vector constructs "BA[2:4] spec[0:1] 00 BA[0:1]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[0:1]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[0:1] BA[0:4]"
        return (spec[0:1] << 5) | BA
394
Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:
397
    CR_index = 7-(BA>>2)       # top 3 bits but BE
    if spec[2]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[0:1] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[0:1]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11)  # low 2 bits but BE
    CR_reg = CR{CR_index}      # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
410
411 Note here that the decoding pattern to determine CR\_bit does not change.
412
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
417
418 ## CR fields as inputs/outputs of vector operations
419
420 CRs (or, the arithmetic operations associated with them)
421 may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations that have no explicit EXTRA to cover the CR, the CR is Vectorised if the destination is Vectorised. Likewise if the destination is scalar then so is the CR.
422
When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate), increasing sequentially from there.
This is so that:
427
428 * implementations may rely on the Vector CRs being aligned to 8. This
429 means that CRs may be read or written in aligned batches of 32 bits
430 (8 CRs per batch), for high performance implementations.
431 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
432 overwritten by vector Rc=1 operations except for very large VL
433 * CR-based predication, from CR32, is also not interfered with
434 (except by large VL).
435
436 However when the SV result (destination) is marked as a scalar by the
437 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
438 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
439 for FP operations.
440
441 Note that yes, the CRs are genuinely Vectorised. Unlike in SIMD VSX which
442 has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
443 v3.0B scalar operations produce a **tuple** of element results: the
444 result of the operation as one part of that element *and a corresponding
445 CR element*. Greatly simplified pseudocode:
446
    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc
454
455 If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
456 then a followup instruction must be performed, setting "reduce" mode on
457 the Vector of CRs, using cr ops (crand, crnor)to do so. This provides far
458 more flexibility in analysing vectors than standard Vector ISAs. Normal
459 Vector ISAs are typically restricted to "were all results nonzero" and
460 "were some results nonzero". The application of mapreduce to Vectorised
461 cr operations allows far more sophisticated analysis, particularly in
462 conjunction with the new crweird operations see [[sv/cr_int_predication]].
463
Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.
468
469 (see [[discussion]]. some alternative schemes are described there)
470
471 ## Rc=1 when SUBVL!=1
472
Sub-vectors are effectively a form of SIMD (length 2 to 4). Only 1 bit of
predicate is allocated per subvector; likewise only one CR is allocated
per subvector.
475
This leaves a conundrum as to how to apply CR computation per subvector, when normally Rc=1 is exclusively applied to scalar elements. A solution is to perform a bitwise OR or AND of the subvector tests. Given that OE is ignored, this field may (when available) be used to select OR or AND behaviour.
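
An illustrative sketch for a vec2 subvector (which OE polarity selects OR versus AND is left open; the mapping shown here is purely an assumption for illustration):

    CR.x = analyse(result.x)
    CR.y = analyse(result.y)
    if OE:
        crregs[offs+i] = CR.x AND CR.y   # "all sub-elements met criteria"
    else:
        crregs[offs+i] = CR.x OR CR.y    # "some sub-elements met criteria"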
477
478 ### Table of CR fields
479
480 CR[i] is the notation used by the OpenPower spec to refer to CR field #i,
481 so FP instructions with Rc=1 write to CR[1] aka SVCR1_000.
482
CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=64, elwidth=8 to do so. This is exactly how scalar OpenPOWER context-switches CRs: it is just that there are now more of them.
486
487 The 64 SV CRs are arranged similarly to the way the 128 integer registers
488 are arranged. TODO a python program that auto-generates a CSV file
489 which can be included in a table, which is in a new page (so as not to
490 overwhelm this one). [[svp64/cr_names]]
491
492 # Register Profiles
493
494 **NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
495 <https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.
496
497 Instructions are broken down by Register Profiles as listed in the
498 following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
499 indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).
501
502 TODO generate table which will be here [[svp64/reg_profiles]]
503
# SV pseudocode illustration
505
506 ## Single-predicated Instruction
507
Illustration of normal-mode add operation: zeroing not included, elwidth overrides not included. If there is no predicate, it is set to all 1s.
509
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }
526
527 This has several modes:
528
529 * RT.v = RA.v RB.v
530 * RT.v = RA.v RB.s (and RA.s RB.v)
531 * RT.v = RA.s RB.s
532 * RT.s = RA.v RB.v
533 * RT.s = RA.v RB.s (and RA.s RB.v)
534 * RT.s = RA.s RB.s
535
All of these may be predicated. Vector-Vector is straightforward. When one of the sources is a Vector and the other a Scalar, it is clear that each element of the Vector source should be added to the Scalar source, each result being placed into the Vector (or, if the destination is a scalar, only the first nonpredicated result).
537
538 The one that is not obvious is RT=vector but both RA/RB=scalar. Here this acts as a "splat scalar result", copying the same result into all nonpredicated result elements. If a fixed destination scalar was intended, then an all-Scalar operation should be used.
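
A sketch of that splat case (RT.v = RA.s RB.s), for illustration:

    # both sources scalar: one result, copied to every non-masked element
    tmp = ireg[RA] + ireg[RB]
    for i in range(VL):
        if (predval & 1<<i):
            ireg[RT+i] = tmp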
539
540 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
541
542 # Assembly Annotation
543
544 Assembly code annotation is required for SV to be able to successfully
545 mark instructions as "prefixed".
546
547 A reasonable (prototype) starting point:
548
549 svp64 [field=value]*
550
551 Fields:
552
553 * ew=8/16/32 - element width
554 * sew=8/16/32 - source element width
555 * vec=2/3/4 - SUBVL
556 * mode=reduce/satu/sats/crpred
557 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
558 * spred={reg spec}
559
This is similar in spirit to the x86 "REX" prefix.
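
A hypothetical usage example (the exact syntax and the chosen field values are purely illustrative):

    svp64 ew=16 vec=2 pred=r3
    add 4, 8, 12

the intent being that the `svp64` annotation marks the immediately following v3.0B instruction as prefixed.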