1 # SV Overview
2
3 This document provides an overview and introduction as to why SV (a
4 Cray-style Vector augmentation to OpenPOWER) exists, and how it works.
5
6 Links:
7
8 * [[discussion]] and
9 [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556)
10 feel free to add comments, questions.
11 * [[SV|sv]]
12 * [[sv/svp64]]
13
14 Contents:
15
16 [[!toc]]
17
18 # Introduction: SIMD and Cray Vectors
19
20 SIMD, the primary method for easy parallelism of the
21 past 30 years in Computer Architectures, is [known to be
22 harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
23 SIMD provides a seductive simplicity that is easy to implement in
24 hardware. With each doubling in width it promises increases in raw performance without the complexity of either multi-issue or out-of-order execution.
25
26 Unfortunately, even with predication added, SIMD only becomes more and
27 more problematic with each power of two SIMD width increase introduced
28 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
29 spirals out of control in the ISA, detrimentally impacting the hardware,
30 the software, the compilers and the testing and compliance.
31
32 Cray-style variable-length Vectors on the other hand result in
33 stunningly elegant and small loops, exceptionally high data throughput
per instruction (one *or more* orders of magnitude greater than SIMD), with none of the alarmingly high setup and cleanup code, where
35 at the hardware level the microarchitecture may execute from one element
36 right the way through to tens of thousands at a time, yet the executable
37 remains exactly the same and the ISA remains clear, true to the RISC
38 paradigm, and clean. Unlike in SIMD, powers of two limitations are not
39 involved in the ISA or in the assembly code.
40
41 SimpleV takes the Cray style Vector principle and applies it in the
42 abstract to a Scalar ISA, in the process allowing register file size
43 increases using "tagging" (similar to how x86 originally extended
44 registers from 32 to 64 bit).
45
46 ## SV
47
48 The fundamentals are:
49
50 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
51 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
52 (where VL is Vector Length)
53 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
54 just as is expected of instructions ordered by the PC.
55 * Some registers may be "tagged" as Vectors
* During the loop, "Vector"-tagged registers are incremented by
57 one with each iteration, executing the *same instruction*
58 but with *different registers*
59 * Once the loop is completed *only then* is the Program Counter
60 allowed to move to the next instruction.
61
62 Hardware (and simulator) implementors are free and clear to implement this
63 as literally a for-loop, sitting in between instruction decode and issue.
64 Higher performance systems may deploy SIMD backends, multi-issue and
65 out-of-order execution, although it is strongly recommended to add
66 predication capability directly into SIMD backend units.
67
68 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
69 source and destination have been "tagged" as Vectors, is simply:
70
71 for i = 0 to VL-1:
72 GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
73
74 At its heart, SimpleV really is this simple. On top of this fundamental
75 basis further refinements can be added which build up towards an extremely
76 powerful Vector augmentation system, with very little in the way of
77 additional opcodes required: simply external "context".
78
x86 originally had only around 80 instructions: even prior to AVX512, over 1,300 additional instructions had been added, almost all of them SIMD.
80
81 RISC-V RVV as of version 0.9 is over 188 instructions (more than the
82 rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that functionality is added to
OpenPOWER v3.0B, by SimpleV augmentation, with around 5 to 8 instructions.
84
85 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
86 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
87 based on SIMD design principles, adds somewhere in the region of 600 more.
88 SimpleV again provides over 95% of VSX functionality, simply by augmenting
89 the *Scalar* OpenPOWER ISA, and in the process providing features such
90 as predication, which VSX is entirely missing.
91
92 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
93 types of register files: Scalar and Vector is the minimum. AVX512
94 even provides a mini mask regfile, followed by explicit instructions
95 that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs),
97 but because operations exist within OpenPOWER to cover interactions
98 between the scalar regfiles (`mfcr`, `fcvt`) there is very little that
99 needs to be added.
100
101 In fairness to both VSX and RVV, there are things that are not provided
102 by SimpleV:
103
104 * 128 bit or above arithmetic and other operations
105 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
106 * register files above 128 entries
107 * Vector lengths over 64
108 * Unit-strided LD/ST and other comprehensive memory operations
109 (struct-based LD/ST from RVV for example)
110 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
111
112 These limitations, which stem inherently from the adaptation process of starting from a Scalar ISA, are not insurmountable. Over time, they may well be
113 addressed in future revisions of SV.
114
115 The rest of this document builds on the above simple loop to add:
116
117 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
118 (of all register files: Integer, FP *and CRs*)
119 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
120 * Predication masks (essential for parallel if/else constructs)
121 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
122 * Compacted operations into registers (normally only provided by SIMD)
123 * Fail-on-first (introduced in ARM SVE2)
124 * A new concept: Data-dependent fail-first
125 * Condition-Register based *post-result* predication (also new)
126 * A completely new concept: "Twin Predication"
127 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
128
129 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
130 "wrapping context", similar to how v3.1B 64 Prefixes work.
131
132 # Adding Scalar / Vector
133
The first augmentation to the simple loop is to add the option for
sources and destinations each to be either scalar or vector. As an FSM
this is where our "simple" loop gets its first complexity.
137
138 function op_add(RT, RA, RB) # add not VADD!
139 int id=0, irs1=0, irs2=0;
140 for i = 0 to VL-1:
141 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
142 if (!RT.isvec) break;
143 if (RT.isvec) { id += 1; }
144 if (RA.isvec) { irs1 += 1; }
145 if (RB.isvec) { irs2 += 1; }
146
147 This could have been written out as eight separate cases: one each for when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases, when optimally combined, result in the pseudocode above.
148
149 With some walkthroughs it is clear that the loop exits immediately
150 after the first scalar destination result is written, and that when the
151 destination is a Vector the loop proceeds to fill up the register file,
152 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
153 registers will, independently, either remain pointing at `RB` or `RA`
154 respectively, or, if marked as Vectors, will march incrementally in
155 lockstep, producing element results along the way, as the destination
156 also progresses through elements.
157
158 In this way all the eight permutations of Scalar and Vector behaviour
159 are covered, although without predication the scalar-destination ones are
160 reduced in usefulness. It does however clearly illustrate the principle.
161
162 Note in particular: there is no separate Scalar add instruction and
163 separate Vector instruction and separate Scalar-Vector instruction, *and
164 there is no separate Vector register file*: it's all the same instruction,
165 on the standard register file, just with a loop. Scalar happens to set
166 that loop size to one.
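
As a quick sanity check of the walkthrough above, the FSM pseudocode transliterates directly into plain Python. This is a toy model only (the register file is just a list, and the `isvec` tags are passed in as explicit flags rather than decoded from the instruction):

    def sv_add(gpr, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec):
        # same structure as the op_add pseudocode above
        id = irs1 = irs2 = 0
        for i in range(VL):
            gpr[RT + id] = gpr[RA + irs1] + gpr[RB + irs2]
            if not RT_vec: break        # scalar destination: one result, then exit
            if RT_vec: id += 1
            if RA_vec: irs1 += 1
            if RB_vec: irs2 += 1

    gpr = list(range(32))               # toy register file: r0..r31 = 0..31
    sv_add(gpr, 16, 4, 8, 4, True, True, True)   # vector = vector + vector
    # r16..r19 now hold 12, 14, 16, 18 (r4+r8, r5+r9, r6+r10, r7+r11)
    sv_add(gpr, 20, 4, 8, 4, True, False, True)  # vector = scalar + vector
    # r20..r23 now hold 12, 13, 14, 15 (r4+r8, r4+r9, r4+r10, r4+r11)
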
167
The important insight from the above is that, strictly speaking, Simple-V is not really a Vectorisation scheme at all: it is more of a hardware ISA "Compression scheme", allowing as it does for what would normally require multiple sequential instructions to be replaced with just one. This is where the rule that Program Order must be preserved in Sub-PC execution derives from. However in other ways, which will emerge below, the "tagging" concept presents an opportunity to include features definitely not common outside of Vector ISAs, and in that regard it is definitely a class of Vectorisation.
169
170 ## Register "tagging"
171
As an aside: in [[sv/svp64]] the information which allows SV both to extend the range beyond r0-r31 and to determine whether a register is scalar or vector is encoded in two to three bits, depending on the instruction.
173
The reason for using so few bits is that there are up to *four* registers to mark in this way (`fma`, `isel`), which starts to be of concern when there are only 24 available bits in which to specify the entire SV Vectorisation Context. In fact, for a small subset of instructions it is just not possible to tag every single register. Under these rare circumstances a tag has to be shared between two registers.
175
176 Below is the pseudocode which expresses the relationship which is usually applied to *every* register:
177
178 if extra3_mode:
179 spec = EXTRA3 # bit 2 s/v, 0-1 extends range
180 else:
181 spec = EXTRA2 << 1 # same as EXTRA3, shifted
182 if spec[2]: # vector
183 RA.isvec = True
184 return (RA << 2) | spec[0:1]
185 else: # scalar
186 RA.isvec = False
187 return (spec[0:1] << 5) | RA
188
189 Here we can see that the scalar registers are extended in the top bits, whilst vectors are shifted up by 2 bits, and then extended in the LSBs. Condition Registers have a slightly different scheme, along the same principle, which takes into account the fact that each CR may be bit-level addressed by Condition Register operations.
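
A small Python sketch of that relationship (the exact placement of the EXTRA bits within the [[sv/svp64]] prefix is deliberately not modelled here; `spec_sv` and `spec_range` below are simply the s/v bit and the two range-extension bits named in the pseudocode comments):

    def decode_extra(RA, spec_sv, spec_range):
        # spec_sv: the scalar/vector bit; spec_range: the two range-extension bits
        if spec_sv:                               # vector: register number shifted up by 2,
            return True, (RA << 2) | spec_range   # range bits fill the LSBs
        else:                                     # scalar: range bits extend the top
            return False, (spec_range << 5) | RA

    # r3 tagged as a vector (range bits 0b00) becomes register 12
    print(decode_extra(3, 1, 0b00))               # -> (True, 12)
    # r3 left as a scalar with range bits 0b01 becomes register 35
    print(decode_extra(3, 0, 0b01))               # -> (False, 35)
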
190
191 Readers familiar with OpenPOWER will know of Rc=1 operations that create an associated post-result "test", placing this test into an implicit Condition Register. The original researchers who created the POWER ISA chose CR0 for Integer, and CR1 for Floating Point. These *also become Vectorised* - implicitly - if the associated destination register is also Vectorised. This allows for some very interesting savings on instruction count due to the very same CR Vectors being predication masks.
192
193 # Adding single predication
194
195 The next step is to add a single predicate mask. This is where it gets
196 interesting. Predicate masks are a bitvector, each bit specifying, in
197 order, whether the element operation is to be skipped ("masked out")
or allowed. If no predicate is supplied, an all-1s mask is used implicitly,
which behaves exactly as if there were no predicate at all.
200
201 function op_add(RT, RA, RB) # add not VADD!
202 int id=0, irs1=0, irs2=0;
203 predval = get_pred_val(FALSE, rd);
204 for i = 0 to VL-1:
205 if (predval & 1<<i) # predication bit test
206 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
207 if (!RT.isvec) break;
208 if (RT.isvec) { id += 1; }
209 if (RA.isvec) { irs1 += 1; }
210 if (RB.isvec) { irs2 += 1; }
211
212 The key modification is to skip the creation and storage of the result
213 if the relevant predicate mask bit is clear, but *not the progression
214 through the registers*.
215
216 A particularly interesting case is if the destination is scalar, and the
first few bits of the predicate are zero. The loop proceeds to increment
the Vector *source* registers until the first nonzero predicate bit is
219 found, whereupon a single result is computed, and *then* the loop exits.
220 This therefore uses the predicate to perform Vector source indexing.
221 This case was not possible without the predicate mask.
222
223 If all three registers are marked as Vector then the "traditional"
224 predicated Vector behaviour is provided. Yet, just as before, all other
225 options are still provided, right the way back to the pure-scalar case,
226 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
227
228 Single Predication therefore provides several modes traditionally seen
229 in Vector ISAs:
230
231 * VINSERT: the predicate may be set as a single bit, the sources are scalar and the destination a vector.
232 * VSPLAT (result broadcasting) is provided by making the sources scalar and the destination a vector, and having no predicate set or having multiple bits set.
* VSELECT is provided by setting up (at least one of) the sources as a vector, using a single bit in the predicate, and the destination as a scalar.
234
235 All of this capability and coverage without even adding one single actual Vector opcode, let alone 180, 600 or 1,300!
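
Extending the earlier toy `sv_add` sketch with a predicate mask makes the modes listed above concrete (again purely illustrative; the function name and flags are invented for this sketch):

    def sv_add_masked(gpr, RT, RA, RB, VL, predval, RT_vec, RA_vec, RB_vec):
        id = irs1 = irs2 = 0
        for i in range(VL):
            if predval & (1 << i):      # predication bit test
                gpr[RT + id] = gpr[RA + irs1] + gpr[RB + irs2]
                if not RT_vec: break
            if RT_vec: id += 1
            if RA_vec: irs1 += 1
            if RB_vec: irs2 += 1

    gpr = list(range(32))
    # VSPLAT: scalar sources, vector destination, all predicate bits set
    sv_add_masked(gpr, 8, 1, 2, 4, 0b1111, True, False, False)   # r8..r11 = r1+r2 = 3
    # VSELECT: vector source, scalar destination, a single predicate bit set
    sv_add_masked(gpr, 16, 1, 2, 4, 0b0100, False, False, True)  # r16 = r1+r4 = 5
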
236
237 # Predicate "zeroing" mode
238
Sometimes with predication it is ok to leave the masked-out element
alone (not modify the result); at other times it is better to zero the
241 masked-out elements. Zeroing can be combined with bit-wise ORing to build
242 up vectors from multiple predicate patterns: the same combining with
243 nonzeroing involves more mv operations and predicate mask operations.
244 Our pseudocode therefore ends up as follows, to take the enhancement
245 into account:
246
247 function op_add(RT, RA, RB) # add not VADD!
248 int id=0, irs1=0, irs2=0;
249 predval = get_pred_val(FALSE, rd);
250 for i = 0 to VL-1:
251 if (predval & 1<<i) # predication bit test
252 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
253 if (!RT.isvec) break;
254 else if zeroing: # predicate failed
255 ireg[RT+id] = 0 # set element to zero
256 if (RT.isvec) { id += 1; }
257 if (RA.isvec) { irs1 += 1; }
258 if (RB.isvec) { irs2 += 1; }
259
Many Vector systems have either zeroing or nonzeroing: they
261 do not have both. This is because they usually have separate Vector
262 register files. However SV sits on top of standard register files and
263 consequently there are advantages to both, so both are provided.
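
To make the "building up from multiple predicate patterns" point concrete, here is a hypothetical sketch (vector-only operands, continuing the toy Python model): two zeroing-predicated operations with complementary masks can be merged with a single bitwise OR, with no extra mv or mask manipulation required:

    def sv_op_zeroing(gpr, RT, RA, RB, VL, predval, op):
        # all-vector case with zeroing: masked-out destination elements become 0
        for i in range(VL):
            if predval & (1 << i):
                gpr[RT + i] = op(gpr[RA + i], gpr[RB + i])
            else:
                gpr[RT + i] = 0

    gpr = [0] * 64
    gpr[8:12]  = [1, 2, 3, 4]           # vector A at r8
    gpr[12:16] = [10, 20, 30, 40]       # vector B at r12
    sv_op_zeroing(gpr, 16, 8, 12, 4, 0b0101, lambda a, b: a + b)  # A+B on elements 0,2
    sv_op_zeroing(gpr, 20, 8, 12, 4, 0b1010, lambda a, b: b - a)  # B-A on elements 1,3
    merged = [x | y for x, y in zip(gpr[16:20], gpr[20:24])]      # OR merges the two
    # merged == [11, 18, 33, 36]
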
264
265 # Element Width overrides <a name="elwidths"></a>
266
267 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
268 bit integer operations, and IEEE754 FP32 and 64. Often also included
269 is FP16 and more recently BF16. The *really* good Vector ISAs have
270 variable-width vectors right down to bitlevel, and as high as 1024 bit
271 arithmetic per element, as well as IEEE754 FP128.
272
273 SV has an "override" system that *changes* the bitwidth of operations
274 that were intended by the original scalar ISA designers to have (for
275 example) 64 bit operations (only). The override widths are 8, 16 and
276 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
277 the future).
278
279 This presents a particularly intriguing conundrum given that the OpenPOWER
280 Scalar ISA was never designed with for example 8 bit operations in mind,
281 let alone Vectors of 8 bit.
282
283 The solution comes in terms of rethinking the definition of a Register
284 File. The typical regfile may be considered to be a multi-ported SRAM
285 block, 64 bits wide and usually 32 entries deep, to give 32 64 bit
286 registers. In c this would be:
287
288 typedef uint64_t reg_t;
289 reg_t int_regfile[32]; // standard scalar 32x 64bit
290
291 Conceptually, to get our variable element width vectors,
292 we may think of the regfile as instead being the following c-based data
293 structure, where all types uint16_t etc. are in little-endian order:
294
    #pragma pack(1)
296 typedef union {
297 uint8_t actual_bytes[8];
298 uint8_t b[0]; // array of type uint8_t
299 uint16_t s[0]; // array of LE ordered uint16_t
300 uint32_t i[0];
301 uint64_t l[0]; // default OpenPOWER ISA uses this
302 } reg_t;
303
304 reg_t int_regfile[128]; // SV extends to 128 regs
305
306 Setting `actual_bytes[3]` in any given `reg_t` to 0x01 would mean that:
307
308 * b[0..2] = 0x00 and b[3] = 0x01
* s[0] = 0x0000 and s[1] = 0x0100
* i[0] = 0x01000000
* l[0] = 0x0000000001000000
312
313 Then, our simple loop, instead of accessing the array of regfile entries
314 with a computed index, would access the appropriate element of the
315 appropriate type. Thus we have a series of overlapping conceptual arrays
316 that each start at what is traditionally thought of as "a register".
317 It then helps if we have a couple of routines:
318
    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res = int_regfile[reg].l[offset]
        return res
332
333 set_polymorphed_reg(reg, bitwidth, offset, val):
334 if (!reg.isvec): # scalar
335 offset = 0
336 if bitwidth == 8:
337 int_regfile[reg].b[offset] = val
338 elif bitwidth == 16:
339 int_regfile[reg].s[offset] = val
340 elif bitwidth == 32:
341 int_regfile[reg].i[offset] = val
342 elif bitwidth == default: # 64
343 int_regfile[reg].l[offset] = val
344
345 These basically provide a convenient parameterised way to access the
346 register file, at an arbitrary vector element offset and an arbitrary
347 element width. Our first simple loop thus becomes:
348
349 for i = 0 to VL-1:
350 src1 = get_polymorphed_reg(RA, srcwid, i)
351 src2 = get_polymorphed_reg(RB, srcwid, i)
352 result = src1 + src2 # actual add here
353 set_polymorphed_reg(rd, destwid, i, result)
354
355 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
356 register will contain three 16 bit addition results, and the upper 16
357 bits will be *unaltered*.
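
A minimal executable model of this behaviour, using Python's `struct` module to enforce the little-endian byte-level view (for simplicity this sketch keeps every element within one 64 bit register, and models only the widths used here):

    import struct

    int_regfile = [bytearray(8) for _ in range(128)]   # each register is 8 LE bytes
    FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}      # little-endian struct codes

    def set_polymorphed_reg(reg, bitwidth, offset, val):
        struct.pack_into(FMT[bitwidth], int_regfile[reg], offset * bitwidth // 8, val)

    def get_polymorphed_reg(reg, bitwidth, offset):
        return struct.unpack_from(FMT[bitwidth], int_regfile[reg], offset * bitwidth // 8)[0]

    # elwidth=16, VL=3: three 16 bit adds fill the first 48 bits of r3;
    # the top 16 bits of r3 are left exactly as they were
    set_polymorphed_reg(3, 16, 3, 0xFFFF)              # sentinel in the upper 16 bits
    for i in range(3):
        set_polymorphed_reg(1, 16, i, i + 1)           # RA elements: 1, 2, 3
        set_polymorphed_reg(2, 16, i, 10)              # RB elements: 10, 10, 10
        result = get_polymorphed_reg(1, 16, i) + get_polymorphed_reg(2, 16, i)
        set_polymorphed_reg(3, 16, i, result)          # RT elements 0..2
    assert get_polymorphed_reg(3, 16, 3) == 0xFFFF     # upper 16 bits unaltered
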
358
359 Note that things such as zero/sign-extension (and predication) have
360 been left out to illustrate the elwidth concept. Also note that it turns
361 out to be important to perform the operation at the maximum bitwidth -
362 `max(srcwid, destwid)` - such that any truncation, rounding errors or
other artefacts may all be ironed out. This matters particularly when
applying Saturation for Audio DSP workloads.
365
366 Other than that, element width overrides, which can be applied to *either*
367 source or destination or both, are pretty straightforward, conceptually.
368 The details, for hardware engineers, involve byte-level write-enable
369 lines, which is exactly what is used on SRAMs anyway. Compiler writers
370 have to alter Register Allocation Tables to byte-level granularity.
371
372 One critical thing to note: upper parts of the underlying 64 bit
373 register are *not zero'd out* by a write involving a non-aligned Vector
374 Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
375 of the destination. The only situation where a full overwrite occurs
is with "default" behaviour. It is therefore extremely important to consider the
377 register file as a byte-level store, not a 64-bit-level store.
378
379 ## Why LE regfile?
380
The concept of a regfile where the byte ordering of the underlying SRAM matters at all seems utter nonsense. Surely a hardware implementation gets to choose the order? The bytes come in, all registers are 64 bit, and it's just wiring, right?
382
The assumption in that question was, "all registers are 64 bit". SV allows SIMD-style packing of vectors into the 64 bit registers, and consequently it becomes critically important to decide on a byte order. That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary at all: it was such hell to implement CRs and LD/ST in LibreSOC, with arbitrary insertions of a 7-index here and a 3-bit index there, that the decision to pick LE was extremely easy.
384
Without such a decision, if two words are packed as elements into a 64 bit register, what does that mean? Should the lower-indexed element go into the HI or the LO word? Should the 8 bytes of each register be inverted? Should the bytes in each element be inverted? The decision was therefore made: the c typedef union is, in a LE context, the definitive canonical definition, and implementations may choose whatever internal HDL wire order they like as long as the results conform to the elwidth pseudocode.
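
As a concrete example of what the canonical LE definition implies, here is a short check (a sketch using Python's `struct`, matching the union above): two 32 bit elements packed into one 64 bit register occupy the low and high halves respectively when that register is read back as a single 64 bit LE value:

    import struct

    reg = bytearray(8)
    struct.pack_into("<I", reg, 0, 0x11111111)   # element 0 -> bytes 0..3 (i[0])
    struct.pack_into("<I", reg, 4, 0x22222222)   # element 1 -> bytes 4..7 (i[1])
    assert struct.unpack_from("<Q", reg, 0)[0] == 0x2222222211111111   # l[0]
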
386
387 ## Source and Destination overrides
388
A minor fly in the ointment: what happens if the source and destination are over-ridden to different widths? For example, FP16 arithmetic is not accurate enough: performing the operation at FP16 and only then up-converting to FP32 output would introduce rounding errors. The rule is therefore set:
390
391 The operation MUST take place at the larger of the two widths
392
393 In pseudocode this is:
394
395 for i = 0 to VL-1:
396 src1 = get_polymorphed_reg(RA, srcwid, i)
397 src2 = get_polymorphed_reg(RB, srcwid, i)
398 opwidth = max(srcwid, destwid)
399 result = op_add(src1, src2, opwidth) # at max width
400 set_polymorphed_reg(rd, destwid, i, result)
401
It will turn out that under some conditions the combination of extending the source registers followed by truncating the result gets rid of bits that didn't matter, and the operation might as well have taken place at the narrower width, saving resources that way. Examples include Logical OR, where source extension places zeros in the upper bits and truncation of the result simply throws those zeros away.
403
404 Counterexamples include the previously mentioned FP16 arithmetic, where for operations such as division of large numbers by very small ones it should be clear that internal accuracy will play a major role in influencing the result. Hence the rule that the calculation takes place at the maximum bitwidth, and truncation follows afterwards.
405
406 ## Signed arithmetic
407
408 What happens when the operation involves signed arithmetic? Here the implementor has to use common sense, and make sure behaviour is accurately documented. If the result of the unmodified operation is sign-extended because one of the inputs is signed, then the input source operands must be first read at their overridden bitwidth and *then* sign-extended:
409
410 for i = 0 to VL-1:
411 src1 = get_polymorphed_reg(RA, srcwid, i)
412 src2 = get_polymorphed_reg(RB, srcwid, i)
413 opwidth = max(srcwid, destwid)
414 # srces known to be less than result width
415 src1 = sign_extend(src1, srcwid, destwid)
416 src2 = sign_extend(src2, srcwid, destwid)
417 result = op_signed(src1, src2, opwidth) # at max width
418 set_polymorphed_reg(rd, destwid, i, result)
419
420 The key here is that the cues are taken from the underlying operation.
421
422 ## Saturation
423
Audio DSPs need to be able to clip sound when the "volume" is adjusted: if the adjusted signal is too loud and simply wraps, distortion occurs. The solution is to clip (saturate) the audio and to allow that saturation to be detected. In practical terms this is a post-result analysis; however, it needs to take place at the largest bitwidth, i.e. before a result is element-width truncated. Only then can the arithmetic saturation condition be detected:
425
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth) # at max width
        # now saturate (unsigned)
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(rd, destwid, i, sat)
        # set sat overflow
        if Rc=1:
            CR.ov = (sat != result)
438
439 So the actual computation took place at the larger width, but was post-analysed as an unsigned operation. If however "signed" saturation is requested then the actual arithmetic operation has to be carefully analysed to see what that actually means.
440
In terms of FP arithmetic, which by definition always has a sign bit and so always takes place as a signed operation anyway, the request to saturate to signed min/max is pretty clear. However for integer arithmetic such as shift (plain shift, not arithmetic shift), or logical operations such as XOR, which were never designed with the assumption that their inputs be considered as signed numbers, common sense has to kick in, and follow what CR0 does.
442
CR0 for Logical operations still applies: the test is still applied to produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may do the same thing: although the inputs to an OR or XOR can in no way be thought of as "signed", we may at least consider the result to be signed, and thus apply min/max range detection of -128 to +127 when truncating down to 8 bit, for example.
444
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed)
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(rd, destwid, i, sat)
455
456 Overall here the rule is: apply common sense then document the behaviour really clearly, for each and every operation.
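
For reference, a minimal Python sketch of the two clamping rules (unsigned and signed saturation to `destwid` bits), handy for checking the corner cases:

    def saturate_unsigned(result, destwid):
        # clamp into [0, 2**destwid - 1]
        return min(max(result, 0), (1 << destwid) - 1)

    def saturate_signed(result, destwid):
        # clamp into [-2**(destwid-1), 2**(destwid-1) - 1]
        lo, hi = -(1 << (destwid - 1)), (1 << (destwid - 1)) - 1
        return min(max(result, lo), hi)

    assert saturate_unsigned(0x1FF, 8) == 0xFF   # unsigned 8 bit clips to 255
    assert saturate_signed(200, 8) == 127        # signed 8 bit clips to +127
    assert saturate_signed(-200, 8) == -128      # ... and to -128
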
457
458 # Quick recap so far
459
460 The above functionality pretty much covers around 85% of Vector ISA needs.
461 Predication is provided so that parallel if/then/else constructs can
462 be performed: critical given that sequential if/then statements and
463 branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which accounts for approximately 20% of all
GPU workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizzy when adding "Multimedia"
468 acceleration aka "SIMD" instructions.
469
470 Experienced Vector ISA readers will however have noted that VCOMPRESS
471 and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability
472 and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication, and yet to be covered are fail-on-first, CR-based result
474 predication, and Subvectors and Swizzle.
475
476 ## SUBVL <a name="subvl"></a>
477
478 Adding in support for SUBVL is a matter of adding in an extra inner
479 for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index; however, it
481 is applied to the whole subvector:
482
483 function op_add(RT, RA, RB) # add not VADD!
    int id=0, irs1=0, irs2=0;
    predval = get_pred_val(FALSE, rd);
486 for i = 0 to VL-1:
487 if (predval & 1<<i) # predication uses intregs
488 for (s = 0; s < SUBVL; s++)
489 sd = id*SUBVL + s
490 srs1 = irs1*SUBVL + s
491 srs2 = irs2*SUBVL + s
492 ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
493 if (!RT.isvec) break;
494 if (RT.isvec) { id += 1; }
495 if (RA.isvec) { irs1 += 1; }
496 if (RB.isvec) { irs2 += 1; }
497
The primary reason for this is that Shader Compilers treat vec2/3/4 as
499 "single units". Recognising this in hardware is just sensible.
500
501 # Swizzle <a name="swizzle"></a>
502
503 Swizzle is particularly important for 3D work. It allows in-place
504 reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
506 (scalar due to the convoluted offsets).
507
Swizzling does not just do permutations: it allows multiple copies of vec2/3/4 elements to be made, such as XXXW as the source operand, which takes three copies of the vec4's first element (and one of its last).
509
510 With somewhere between 10% and 30% of
511 operations in 3D Shaders involving swizzle this is a huge saving and
512 reduces pressure on register files.
513
In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those as well:
517
518 swizzle = get_swizzle_immed() # 12 bits
519 for (s = 0; s < SUBVL; s++)
520 remap = (swizzle >> 3*s) & 0b111
521 if remap < 4:
522 sm = id*SUBVL + remap
523 ireg[rd+s] <= ireg[RA+sm]
524 elif remap == 4:
525 ireg[rd+s] <= 0.0
526 elif remap == 5:
527 ireg[rd+s] <= 1.0
528
529 Note that a value of 6 (and 7) will leave the target subvector element
530 untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions that need to be added yet
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
535 3D and Video that it is being considered.
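
A toy Python model of the swizzle decode above. The 3-bits-per-lane packing of the 12 bit immediate is taken from the pseudocode; the letter names and helper functions are invented here purely for illustration (the authoritative encoding is on the [[sv/mv.swizzle]] page):

    # lane selectors: 0-3 pick X/Y/Z/W, 4 inserts 0.0, 5 inserts 1.0, 6/7 leave untouched
    NAMES = {"X": 0, "Y": 1, "Z": 2, "W": 3, "0": 4, "1": 5, "-": 6}

    def pack_swizzle(spec):                  # e.g. "XXXW"
        imm = 0
        for s, name in enumerate(spec):
            imm |= NAMES[name] << (3 * s)
        return imm

    def apply_swizzle(src, imm, subvl=4):
        dst = [None] * subvl                 # None stands for "left untouched"
        for s in range(subvl):
            remap = (imm >> (3 * s)) & 0b111
            if remap < 4:
                dst[s] = src[remap]
            elif remap == 4:
                dst[s] = 0.0
            elif remap == 5:
                dst[s] = 1.0
        return dst

    vec4 = [1.5, 2.5, 3.5, 4.5]              # X, Y, Z, W
    print(apply_swizzle(vec4, pack_swizzle("XXXW")))   # [1.5, 1.5, 1.5, 4.5]
    print(apply_swizzle(vec4, pack_swizzle("XY01")))   # [1.5, 2.5, 0.0, 1.0]
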
536
537 Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large,
that the tradeoff in SV was decided in favour of only adding mv.swizzle.
540
541 # Twin Predication
542
543 Twin Predication is cool. Essentially it is a back-to-back
544 VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
545 part is covered by the source predicate and the expand part by the
destination predicate. Of course, if the source predicate is all 1s the
operation degenerates to VEXPAND, and if the destination predicate is all
1s it degenerates to VCOMPRESS.
548
549 function op(RT, RS):
    ps = get_pred_val(FALSE, RS); # predication on src
    pd = get_pred_val(FALSE, RT); # ... AND on dest
552  for (int i = 0, int j = 0; i < VL && j < VL;):
553 if (RS.isvec) while (!(ps & 1<<i)) i++;
554 if (RT.isvec) while (!(pd & 1<<j)) j++;
555 reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
556 if (RS.isvec) i++;
557 if (RT.isvec) j++; else break
558
559 Here's the interesting part: given the fact that SV is a "context"
560 extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, to LD/ST operations, and even to `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
565 are termed single-source, single-destination.
566
567 LDST Address-generation,
568 or AGEN, is a special case of single source, because elwidth overriding does not make sense to apply to the computation of the 64 bit address itself, but it *does* make sense to apply elwidth overrides to the data being accessed *at* that address.
569
570 It also turns out that by using a single bit set in the source or
571 destination, *all* the sequential ordered standard patterns of Vector
572 ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.
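
A Python transliteration of the twin-predication pseudocode, specialised to a plain mv, shows the degenerate cases directly (the predicate values are chosen so that the unbounded inner skips in the pseudocode terminate; a real implementation bounds them by VL):

    def twin_pred_mv(reg, RT, RS, VL, ps, pd, RS_vec=True, RT_vec=True):
        i = j = 0
        while i < VL and j < VL:
            if RS_vec:
                while not (ps & (1 << i)): i += 1   # skip masked-out source elements
            if RT_vec:
                while not (pd & (1 << j)): j += 1   # skip masked-out dest elements
            reg[RT + j] = reg[RS + i]
            if RS_vec: i += 1
            if RT_vec: j += 1
            else: break

    reg = list(range(32))
    # VCOMPRESS: source predicate selects, destination predicate all 1s
    twin_pred_mv(reg, 16, 0, 4, 0b1010, 0b1111)   # r16, r17 = r1, r3
    # VEXPAND: source predicate all 1s, destination predicate selects
    twin_pred_mv(reg, 24, 0, 4, 0b1111, 0b1100)   # r26, r27 = r0, r1
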
573
574 The only one missing from the list here, because it is non-sequential,
575 is VGATHER (and VSCATTER): moving registers by specifying a vector of register indices
576 (`regs[rd] = regs[regs[rs]]` in a loop). This one is tricky because it
577 typically does not exist in standard scalar ISAs. If it did it would
578 be called [[sv/mv.x]]. Once Vectorised, it's a VGATHER/VSCATTER.
579
580 # CR predicate result analysis
581
582 OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than or equal to zero.
584 What if a test could be done, similar to branch BO testing, which hooked
585 into the predication system?
586
587 for i in range(VL):
588 # predication test, skip all masked out elements.
589 if predicate_masked_out(i): continue # skip
590 result = op(iregs[RA+i], iregs[RB+i])
591 CRnew = analyse(result) # calculates eq/lt/gt
592 # Rc=1 always stores the CR
593 if RC1 or Rc=1: crregs[offs+i] = CRnew
594 if RC1: continue # RC1 mode skips result store
595 # now test CR, similar to branch
596 if CRnew[BO[0:1]] == BO[2]:
597 # result optionally stored but CR always is
598 iregs[RT+i] = result
599
600 Note that whilst the Vector of CRs is always written to the CR regfile,
601 only those result elements that pass the BO test get written to the
602 integer regfile (when RC1 mode is not set). In RC1 mode the CR is always stored, but the result never is. This effectively turns every arithmetic operation into a type of `cmp` instruction.
603
Here, for example, if FP overflow occurred and the CR testing was carried
out for that, all valid results would be stored, invalid ones would not,
and in addition the Vector of CRs would contain indicators of which ones
failed. With the invalid results simply not being written, this could
save resources (register file writes).
609
Also, because the predicate mask is effectively ANDed with the post-result
analysis as a secondary type of predication, savings are expected in some
types of operations where the post-result analysis, if not included in SV,
would need a second predicate calculation followed by a predicate mask
AND operation.
615
616 Note, hilariously, that Vectorised Condition Register Operations (crand, cror) may
617 also have post-result analysis applied to them. With Vectors of CRs being
618 utilised *for* predication, possibilities for compact and elegant code
619 begin to emerge from this innocuous-looking addition to SV.
620
621 # Exception-based Fail-on-first
622
623 One of the major issues with Vectorised LD/ST operations is when a batch of LDs cross a page-fault boundary. With considerable resources being taken up with in-flight data, a large Vector LD being cancelled or unable to roll back is either a detriment to performance or can cause data corruption.
624
625 What if, then, rather than cancel an entire Vector LD because the last operation would cause a page fault, instead truncate the Vector to the last successful element?
626
This is called "fail-on-first". Here is strncpy, illustrated in RVV assembler:
628
629 strncpy:
630 c.mv a3, a0 # Copy dst
631 loop:
632 setvli x0, a2, vint8 # Vectors of bytes.
633 vlbff.v v1, (a1) # Get src bytes
634 vseq.vi v0, v1, 0 # Flag zero bytes
635 vmfirst a4, v0 # Zero found?
636 vmsif.v v0, v0 # Set mask up to and including zero byte.
637 vsb.v v1, (a3), v0.t # Write out bytes
638 c.bgez a4, exit # Done
639 csrr t1, vl # Get number of bytes fetched
640 c.add a1, a1, t1 # Bump src pointer
641 c.sub a2, a2, t1 # Decrement count.
642 c.add a3, a3, t1 # Bump dst pointer
643 c.bnez a2, loop # Anymore?
644 exit:
645 c.ret
646
647 Vector Length VL is truncated inherently at the first page faulting byte-level LD. Otherwise, with more powerful hardware the number of elements LOADed from memory could be dozens to hundreds or greater (memory bandwidth permitting).
648
649 With VL truncated the analysis looking for the zero byte and the subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the knowledge that every byte loaded in the Vector is valid. Implementors are even permitted to "adapt" VL, truncating it early so that, for example, subsequent iterations of loops will have LD/STs on aligned boundaries.
650
651 SIMD strncpy hand-written assembly routines are, to be blunt about it, a total nightmare. 240 instructions is not uncommon, and the worst thing about them is that they are unable to cope with detection of a page fault condition.
652
653 Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>
654
655 # Data-dependent fail-first
656
657 This is a minor variant on the CR-based predicate-result mode. Where pred-result continues with independent element testing (any of which may be parallelised), data-dependent fail-first *stops* at the first failure:
658
659 if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
660 for i in range(VL):
661 # predication test, skip all masked out elements.
662 if predicate_masked_out(i): continue # skip
663 result = op(iregs[RA+i], iregs[RB+i])
664 CRnew = analyse(result) # calculates eq/lt/gt
665 # now test CR, similar to branch
666 if CRnew[BO[0:1]] != BO[2]:
667 VL = i # truncate: only successes allowed
668 break
669 # test passed: store result (and CR?)
670 if not RC1: iregs[RT+i] = result
671 if RC1 or Rc=1: crregs[offs+i] = CRnew
672
673 This is particularly useful, again, for FP operations that might overflow, where it is desirable to end the loop early, but also desirable to complete at least those operations that were okay (passed the test) without also having to slow down execution by adding extra instructions that tested for the possibility of that failure, in advance of doing the actual calculation.
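
A small Python sketch of the truncation behaviour, with a simple "result is non-zero" check standing in for the CR/BO analysis:

    def ddffirst(srcA, srcB, op, test):
        # data-dependent fail-first: stop, and truncate VL, at the first failing element
        results, VL = [], len(srcA)
        for i in range(VL):
            r = op(srcA[i], srcB[i])
            if not test(r):
                VL = i               # truncate: only the successes count
                break
            results.append(r)
        return results, VL

    # divide until a zero divisor is hit: VL is truncated just before it
    res, vl = ddffirst([10, 20, 30, 40], [2, 5, 0, 4],
                       lambda a, b: a // b if b else 0,
                       lambda r: r != 0)
    # res == [5, 4], vl == 2
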
674
675 The only minor downside here though is the change to VL, which in some implementations may cause pipeline stalls. This was one of the reasons why CR-based pred-result analysis was added, because that at least is entirely paralleliseable.
676
677 # Instruction format
678
679 Whilst this overview shows the internals, it does not go into detail on the actual instruction format itself. There are a couple of reasons for this: firstly, it's under development, and secondly, it needs to be proposed to the OpenPOWER Foundation ISA WG for consideration and review.
680
681 That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up. The `setvl` instruction is pretty much as would be expected from a Cray style VL instruction: the only differences being that, firstly, the MAXVL (Maximum Vector Length) has to be specified, because that determines - precisely - how many of the *scalar* registers are to be used for a given Vector. Secondly: within the limit of MAXVL, VL is required to be set to the requested value. By contrast, RVV systems permit the hardware to set arbitrary values of VL.
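
A one-line Python model of the VL rule described above (an assumption made for this sketch: requests beyond MAXVL are simply capped; [[sv/setvl]] is the authoritative definition):

    def setvl(requested_vl, MAXVL):
        # SV sets VL to exactly the requested value, up to the MAXVL limit
        return min(requested_vl, MAXVL)

    assert setvl(5, 8) == 5      # unlike RVV, the hardware may not choose e.g. 4 here
    assert setvl(12, 8) == 8
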
682
683 The other key question is of course: what's the actual instruction format, and what's in it? Bearing in mind that this requires OPF review, the current draft is at the [[sv/svp64]] page, and includes space for all the different modes, the predicates, element width overrides, SUBVL and the register extensions, in 24 bits. This just about fits into an OpenPOWER v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space. The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction - aligns perfectly with SV.
684
685 Further reading is at the main [[SV|sv]] page.
686
687 # Conclusion
688
Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that, with conceptual sub-loops, a Scalar ISA can be turned into a Vector one, by embedding Scalar instructions - unmodified - into a Vector "context" using "Prefixing". With careful thought, this technique reaches 90% parity with good Vector ISAs, increasing to 95% with the addition of a mere handful of additional context-vectoriseable scalar instructions ([[sv/mv.x]] amongst them).
690
691 What is particularly cool about the SV concept is that custom extensions and research need not be concerned about inventing new Vector instructions and how to get them to interact with the Scalar ISA: they are effectively one and the same. Any new instruction added at the Scalar level is inherently and automatically Vectorised, following some simple rules.
692