# SV Overview

See the [[discussion]] page: feel free to add comments and questions there.

[[!toc]]

This document provides an overview and introduction as to why SV (a
Cray-style Vector augmentation to OpenPOWER) exists, and how it works.

SIMD, the primary method for easy parallelism in Computer Architectures
for the past 30 years, is [known to be
harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
SIMD provides a seductive simplicity that is easy to implement in
hardware. With each doubling in width it promises increases in raw
performance without the complexity of either multi-issue or out-of-order
execution.

Unfortunately, even with predication added, SIMD only becomes more and
more problematic with each power-of-two SIMD width increase introduced
through an ISA revision. The opcode proliferation, at O(N^6), inexorably
spirals out of control in the ISA, detrimentally impacting the hardware,
the software, the compilers, and testing and compliance.

Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, exceptionally high data throughput
per instruction (by one *or more* orders of magnitude), with none of the
alarmingly high setup and cleanup code, where at the hardware level the
microarchitecture may execute from one element right the way through to
tens of thousands at a time, yet the executable remains exactly the same
and the ISA remains clear, true to the RISC paradigm, and clean. Unlike
with SIMD, powers-of-two limitations are not involved in either the
hardware or in the assembly code.

SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the process allowing register file size
increases using "tagging" (similar to how x86 originally extended
registers from 32 to 64 bit).

The fundamentals are:

* The Program Counter gains a "Sub Counter" context.
* Vectorisation pauses the PC and runs a loop from 0 to VL-1
  (where VL is Vector Length). This may be thought of as a
  "Sub-PC".
* Some registers may be "tagged" as Vectors.
* During the loop, "Vector"-tagged registers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*.
* Only once the loop is completed is the Program Counter
  allowed to move on to the next instruction.

Hardware (and simulator) implementors are free and clear to implement this
as literally a for-loop, sitting in between instruction decode and issue.
Higher performance systems may deploy SIMD backends, multi-issue and
out-of-order execution, although it is strongly recommended to add
predication capability into all SIMD backend units.

In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
source and destination have been "tagged" as Vectors, is simply:

    for i = 0 to VL-1:
        GPR(RT+i) = GPR(RA+i) + GPR(RB+i)

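To make the loop concrete, here is a minimal Python sketch of that
decode-level for-loop, modelling the integer regfile as a simple array.
The names (`gpr`, `vector_add`, `VL` as a parameter) are invented purely
for illustration and are not part of the SV specification:

    # illustrative software model of the hardware for-loop
    gpr = [0] * 128   # SV extends the integer regfile to 128 entries

    def vector_add(RT, RA, RB, VL):
        # the same scalar add, issued VL times, with the register
        # numbers stepping by one element per iteration
        for i in range(VL):
            gpr[RT + i] = (gpr[RA + i] + gpr[RB + i]) & 0xFFFFFFFFFFFFFFFF

    # r16..r19 = r24..r27 + r8..r11, with VL=4
    gpr[24:28] = [1, 2, 3, 4]
    gpr[8:12]  = [10, 20, 30, 40]
    vector_add(16, 24, 8, VL=4)
    assert gpr[16:20] == [11, 22, 33, 44]
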
At its heart, SimpleV really is this simple. On top of this fundamental
basis further refinements can be added which build up towards an extremely
powerful Vector augmentation system, with very little in the way of
additional opcodes required: simply external "context".

RISC-V RVV as of version 0.9 is over 180 instructions (more than the
rest of RV64G combined). Over 95% of that functionality is added to
OpenPOWER v3.0B, by SimpleV augmentation, with around 5 to 8 instructions.

Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
instructions, with IEEE754 FP adding approximately 80 more. VSX, being
based on SIMD design principles, adds somewhere in the region of 600 more.
SimpleV again provides over 95% of VSX functionality, simply by augmenting
the *Scalar* OpenPOWER ISA, and in the process providing features such
as predication, which VSX is entirely missing.

AVX512, SVE2, VSX, RVV: all of these systems have to provide different
types of register files: Scalar and Vector is the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs), but,
because operations already exist within OpenPOWER to cover interactions
between the scalar regfiles (`mfcr`, `fcvt`), there is very little that
needs to be added.

In fairness to both VSX and RVV, there are things that are not provided
by SimpleV:

* 128-bit or above arithmetic and other operations
  (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
* register files above 128 entries
* Vector lengths over 64
* Unit-strided LD/ST and other comprehensive memory operations
  (struct-based LD/ST from RVV for example)
* 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.

These are not insurmountable limitations: over time they may well be
added in future revisions of SV.

The rest of this document builds on the above simple loop to add:

* Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
* Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc.)
* Predication masks (essential for parallel if/else constructs)
* 8, 16 and 32 bit integer operations, and both FP16 and BF16
* Compacted operations into registers (normally only provided by SIMD)
* Fail-on-first (introduced in ARM SVE2)
* A new concept: Data-dependent fail-first
* Condition-Register based *post-result* predication (also new)
* A completely new concept: "Twin Predication"
* vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)

All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
"wrapping context", similar to how v3.1B 64-bit Prefixes work.

# Adding Scalar / Vector

The first augmentation to the simple loop is to add the option for each
of the sources and the destination to be either scalar or vector. As an
FSM this is where our "simple" loop gets its first complexity.

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        for i = 0 to VL-1:
            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
            if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

With some walkthroughs it is clear that the loop exits immediately
after the first scalar destination result is written, and that when the
destination is a Vector the loop proceeds to fill up the register file,
sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
registers will, independently, either remain pointing at `RA` or `RB`
respectively, or, if marked as Vectors, will march incrementally in
lockstep, producing element results along the way, as the destination
also progresses through elements.

In this way all eight permutations of Scalar and Vector behaviour
are covered, although without predication the scalar-destination ones are
reduced in usefulness. It does however clearly illustrate the principle;
one of the permutations is walked through in the sketch below.
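
As an illustration, here is a small Python model (names such as `op_add`
and the `*_isvec` flags are invented for this sketch, not taken from the
specification) of one of those permutations: Vector destination, Vector
first source, Scalar second source, which adds a scalar bias to every
element of a vector:

    def op_add(gpr, RT, RA, RB, VL, RT_isvec, RA_isvec, RB_isvec):
        id = irs1 = irs2 = 0
        for i in range(VL):
            gpr[RT + id] = gpr[RA + irs1] + gpr[RB + irs2]
            if not RT_isvec: break
            if RT_isvec: id   += 1
            if RA_isvec: irs1 += 1
            if RB_isvec: irs2 += 1

    gpr = [0] * 128
    gpr[24:28] = [1, 2, 3, 4]   # Vector source, r24..r27
    gpr[8] = 100                # Scalar source, r8
    op_add(gpr, 16, 24, 8, VL=4,
           RT_isvec=True, RA_isvec=True, RB_isvec=False)
    assert gpr[16:20] == [101, 102, 103, 104]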

Note in particular: there is no separate Scalar add instruction and
separate Vector instruction and separate Scalar-Vector instruction, *and
there is no separate Vector register file*: it's all the same instruction,
on the standard register file, just with a loop. Scalar happens to set
that loop size to one.

# Adding single predication

The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If no predicate is supplied, the mask defaults to all 1s,
i.e. every element operation is allowed.

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

The key modification is to skip the creation and storage of the result
if the relevant predicate mask bit is clear, but *not the progression
through the registers*.

A particularly interesting case is if the destination is scalar, and the
first few bits of the predicate are zero. The loop proceeds to increment
the Vector *source* registers until the first nonzero predicate bit is
found, whereupon a single result is computed, and *then* the loop exits.
This therefore uses the predicate to perform Vector source indexing.
This case was not possible without the predicate mask.

If all three registers are marked as Vector then the "traditional"
predicated Vector behaviour is provided. Yet, just as before, all other
options are still provided, right the way back to the pure-scalar case,
as if this were a straight OpenPOWER v3.0B non-augmented instruction.

Single Predication therefore provides several modes traditionally seen
in Vector ISAs (two of them are sketched in code below):

* VINSERT (VINDEX): the sources are scalar and the destination a vector,
  with a single bit set in the predicate.
* VSPLAT (result broadcasting): the sources are scalar and the destination
  a vector, with either no predicate set or multiple bits set.
* VSELECT: (at least one of) the sources is a vector, a single bit is set
  in the predicate, and the destination is a scalar.
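
The following Python sketch (continuing the invented modelling names used
earlier, reduced to a copy operation; it is not the specification) shows
how VSPLAT and VSELECT both fall out of the same predicated loop:

    def pred_op_copy(gpr, RT, RA, VL, predval, RT_isvec, RA_isvec):
        id = irs1 = 0
        for i in range(VL):
            if predval & (1 << i):
                gpr[RT + id] = gpr[RA + irs1]
                if not RT_isvec: break
            if RT_isvec: id   += 1
            if RA_isvec: irs1 += 1

    gpr = [0] * 32
    # VSPLAT: scalar r3 broadcast into r16..r19 (all predicate bits set)
    gpr[3] = 7
    pred_op_copy(gpr, 16, 3, VL=4, predval=0b1111,
                 RT_isvec=True, RA_isvec=False)
    assert gpr[16:20] == [7, 7, 7, 7]
    # VSELECT: vector r16..r19, single predicate bit (element 2),
    # scalar destination r5 receives element 2 of the vector
    pred_op_copy(gpr, 5, 16, VL=4, predval=0b0100,
                 RT_isvec=False, RA_isvec=True)
    assert gpr[5] == gpr[18]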

# Predicate "zeroing" mode

Sometimes with predication it is acceptable to leave the masked-out
element alone (not modify the result); sometimes it is better to zero the
masked-out elements. Zeroing can be combined with bit-wise ORing to build
up vectors from multiple predicate patterns: the same combining with
non-zeroing involves more mv operations and predicate mask operations.
Our pseudocode therefore ends up as follows, to take the enhancement
into account:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing: # predicate failed
                ireg[RT+id] = 0 # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

Many Vector systems either have zeroing or they have non-zeroing; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of standard register files and
consequently there are advantages to both, so both are provided.
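
As a brief illustration of why zeroing helps (a sketch with invented
helper names, not SV assembler), two vectors can be merged under
complementary predicates with nothing more than a bitwise OR:

    def pred_copy_zeroing(src, predval):
        # copy elements whose predicate bit is set, zero the rest
        return [src[i] if predval & (1 << i) else 0
                for i in range(len(src))]

    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]
    mask = 0b0101                                # elements 0 and 2 from a
    va = pred_copy_zeroing(a, mask)
    vb = pred_copy_zeroing(b, mask ^ 0b1111)     # elements 1 and 3 from b
    merged = [x | y for x, y in zip(va, vb)]     # plain bitwise OR
    assert merged == [1, 20, 3, 40]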

# Element Width overrides

All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
bit integer operations, and IEEE754 FP32 and 64. Often also included
is FP16 and more recently BF16. The *really* good Vector ISAs have
variable-width vectors right down to bit level, and as high as 1024 bit
arithmetic per element, as well as IEEE754 FP128.

SV has an "override" system that *changes* the bitwidth of operations
that were intended by the original scalar ISA designers to have (for
example) 64 bit operations (only). The override widths are 8, 16 and
32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
the future).

This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with for example 8 bit operations in mind,
let alone Vectors of 8 bit.

The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. Conceptually, to get our variable element width vectors,
we may think of the regfile as instead being the following C-based data
structure:

    typedef union {
        uint8_t  actual_bytes[8];
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs

Then, our simple loop, instead of accessing the array of regfile entries
with a computed index, would access the appropriate element of the
appropriate type. Thus we have a series of overlapping conceptual arrays
that each start at what is traditionally thought of as "a register".
It then helps if we have a couple of routines:

    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val

These basically provide a convenient parameterised way to access the
register file, at an arbitrary vector element offset and an arbitrary
element width. Our first simple loop thus becomes:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, i, result)

With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
register will contain three 16 bit addition results, and the upper 16
bits will be *unaltered*.
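
A quick way to see this byte-level behaviour (an illustrative Python
model only, with the register represented as eight bytes) is:

    import struct

    # 64-bit register with pre-existing contents
    reg = bytearray(struct.pack("<Q", 0xFFFFFFFFFFFFFFFF))
    results = [0x1111, 0x2222, 0x3333]       # three 16-bit addition results
    for i, val in enumerate(results):        # elwidth=16, VL=3
        struct.pack_into("<H", reg, i * 2, val)

    # bytes 0..5 hold the three results; bytes 6..7 keep their old value
    assert struct.unpack("<Q", reg)[0] == 0xFFFF333322221111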

Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it turns
out to be important to perform the operation at the maximum bitwidth -
`max(srcwid, destwid)` - such that any truncation, rounding errors or
other artefacts may all be ironed out. This turns out to be important
when applying Saturation for Audio DSP workloads.

Other than that, element width overrides, which can be applied to *either*
source or destination or both, are pretty straightforward, conceptually.
The details, for hardware engineers, involve byte-level write-enable
lines, which is exactly what is used on SRAMs anyway. Compiler writers
have to alter Register Allocation Tables to byte-level granularity.

One critical thing to note: upper parts of the underlying 64 bit
register are *not zero'd out* by a write involving a non-aligned Vector
Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is on "default" (64 bit) behaviour. It is therefore extremely important
to consider the register file as a byte-level store, not a 64-bit-level
store.

# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which is approximately 20% of all GPU
workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizzy when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication, and yet to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index, however it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

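As a short worked example (illustrative Python only, with all three
operands assumed to be Vectors so the `isvec` tests are dropped), with
SUBVL=3 and VL=2 each predicate bit covers a whole vec3, and register
numbers advance in steps of SUBVL elements:

    SUBVL, VL = 3, 2
    gpr = list(range(64))           # dummy register file contents
    RT, RA, RB = 32, 0, 16
    predval = 0b10                  # first vec3 masked out, second allowed
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            for s in range(SUBVL):
                gpr[RT + id*SUBVL + s] = (gpr[RA + irs1*SUBVL + s] +
                                          gpr[RB + irs2*SUBVL + s])
        id += 1; irs1 += 1; irs2 += 1

    assert gpr[32:35] == [32, 33, 34]   # first vec3 untouched
    assert gpr[35:38] == [22, 24, 26]   # r3..r5 + r19..r21
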
# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets). With somewhere around 10% of
operations in 3D Shaders involving swizzle this is a huge saving and
reduces pressure on register files.

In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those as well:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[RT+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[RT+s] <= 0.0
        elif remap == 5:
            ireg[RT+s] <= 1.0

Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.
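
As a worked example of the encoding used in the pseudocode above (purely
illustrative; the helper `swizzle_immed` is invented, not the
authoritative instruction format), here is the 12-bit immediate that
produces `dest = src.zyx1` for a vec4:

    X, Y, Z, W, ZERO, ONE = 0, 1, 2, 3, 4, 5

    def swizzle_immed(sel):          # sel[s]: what dest element s receives
        imm = 0
        for s, v in enumerate(sel):  # 3 bits per destination element
            imm |= (v & 0b111) << (3 * s)
        return imm

    imm = swizzle_immed([Z, Y, X, ONE])     # .zyx1
    src = [1.0, 2.0, 3.0, 4.0]              # x, y, z, w
    dst = [None] * 4
    for s in range(4):
        remap = (imm >> (3 * s)) & 0b111
        if remap < 4:
            dst[s] = src[remap]
        elif remap == ZERO:
            dst[s] = 0.0
        elif remap == ONE:
            dst[s] = 1.0
    assert dst == [3.0, 2.0, 1.0, 1.0]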

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large,
that the tradeoff balance was decided in SV to only add mv.swizzle.

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally only what VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
are termed single-source, single-destination (LDST Address-generation,
or AGEN, is a single source).
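
A concrete feel for the compress half (an illustrative Python model with
invented names, both operands assumed to be Vectors, specialised to MV):

    def twin_pred_mv(reg, RT, RS, VL, ps, pd):
        i = j = 0
        while i < VL and j < VL:
            while not (ps & (1 << i)): i += 1   # skip masked-out src elements
            while not (pd & (1 << j)): j += 1   # skip masked-out dest elements
            reg[RT + j] = reg[RS + i]
            i += 1
            j += 1

    reg = [0] * 32
    reg[0:4] = [11, 22, 33, 44]                 # source vector r0..r3
    twin_pred_mv(reg, 16, 0, VL=4, ps=0b1010, pd=0b1111)
    assert reg[16:18] == [22, 44]               # elements 1 and 3, compressed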

It also turns out that by using a single bit set in the source or
destination, *all* the sequential ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential,
is VGATHER: moving registers by specifying a vector of register indices
(`regs[rd] = regs[regs[rs]]` in a loop). This one is tricky because it
typically does not exist in standard scalar ISAs. If it did it would
be called [[sv/mv.x]].
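
In loop form (a sketch only; `mv.x` is hypothetical, and the names here
are invented for illustration) VGATHER would behave as:

    def mv_x(regs, rd, rs, VL):
        for i in range(VL):
            regs[rd + i] = regs[regs[rs + i]]   # vector of register indices

    regs = [0] * 32
    regs[0:4]  = [40, 30, 20, 10]   # data, r0..r3
    regs[8:12] = [3, 1, 2, 0]       # indices, r8..r11
    mv_x(regs, 16, 8, VL=4)
    assert regs[16:20] == [10, 30, 20, 40]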

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing whether it is greater than, less than or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if Rc=1: crregs[offs+i] = CRnew
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile.
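
A tiny worked example (an illustrative Python model with invented names;
the BO test is simplified to "require the gt bit") which keeps only the
positive results of a vector subtraction:

    def analyse(result):
        return {"lt": result < 0, "gt": result > 0, "eq": result == 0}

    def sub_keep_gt(iregs, RT, RA, RB, VL):
        crs = []
        for i in range(VL):
            result = iregs[RA + i] - iregs[RB + i]
            cr = analyse(result)
            crs.append(cr)               # the Vector of CRs is always produced
            if cr["gt"]:                 # BO-style test on the new CR
                iregs[RT + i] = result   # result written only if test passes
        return crs

    iregs = [0] * 32
    iregs[0:4] = [5, 2, 9, 1]
    iregs[8:12] = [3, 4, 6, 1]
    crs = sub_keep_gt(iregs, 16, 0, 8, VL=4)
    assert iregs[16:20] == [2, 0, 3, 0]   # elements 1 and 3 left untouched
    assert [cr["gt"] for cr in crs] == [True, False, True, False]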

Here, for example, if FP overflow occurred and the CR testing was carried
out for that, all valid results would be stored and invalid ones would
not; in addition the Vector of CRs would contain the indicators of
which ones failed. With the invalid results simply not being written,
this could save resources (save on register file writes).

It is also expected that, because the predicate mask is effectively
ANDed with the post-result analysis as a secondary type of predication,
there would be savings to be had in some types of operations which,
without post-result analysis in SV, would need a second predicate
calculation followed by a predicate mask AND operation.

Note, hilariously, that Condition Register Operations (crand, cror) may
also have post-result analysis applied to them. With Vectors of CRs being
utilised *for* predication, possibilities for compact and elegant code
begin to emerge from this innocuous-looking addition to SV.