1 # SV Overview
2
3 [[discussion]] page feel free to add comments, questions.
4
5 [[!toc]]
6
7 This document provides an overview and introduction as to why SV (a
8 Cray-style Vector augmentation to OpenPOWER) exists, and how it works.
9
10 SIMD, the primary method for easy parallelism of the
11 past 30 years in Computer Architectures, is [known to be
12 harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
13 SIMD provides a seductive simplicity that is easy to implement in
14 hardware. With each doubling in width it promises increases in raw performance without the complexity of either multi-issue or out-of-order execution.
15
16 Unfortunately, even with predication added, SIMD only becomes more and
17 more problematic with each power of two SIMD width increase introduced
18 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
19 spirals out of control in the ISA, detrimentally impacting the hardware,
20 the software, the compilers and the testing and compliance.
21
22 Cray-style variable-length Vectors on the other hand result in
23 stunningly elegant and small loops, exceptionally high data throughput
24 per instruction (one *or more* orders of magnitude greater than SIMD), with none of the alarmingly high setup and cleanup code, where
25 at the hardware level the microarchitecture may execute from one element
26 right the way through to tens of thousands at a time, yet the executable
27 remains exactly the same and the ISA remains clear, true to the RISC
28 paradigm, and clean. Unlike in SIMD, powers of two limitations are not
29 involved in either the hardware or the assembly code.
30
31 SimpleV takes the Cray style Vector principle and applies it in the
32 abstract to a Scalar ISA, in the process allowing register file size
33 increases using "tagging" (similar to how x86 originally extended
34 registers from 32 to 64 bit).
35
36 The fundamentals are:
37
38 * The Program Counter gains a "Sub Counter" context.
39 * Vectorisation pauses the PC and runs a loop from 0 to VL-1
40 (where VL is Vector Length). This may be thought of as a
41 "Sub-PC"
42 * Some registers may be "tagged" as Vectors
43 * During the loop, "Vector"-tagged registers are incremented by
44 one with each iteration, executing the *same instruction*
45 but with *different registers*
46 * Once the loop is completed *only then* is the Program Counter
47 allowed to move to the next instruction.
48
49 Hardware (and simulator) implementors are free and clear to implement this
50 as literally a for-loop, sitting in between instruction decode and issue.
51 Higher performance systems may deploy SIMD backends, multi-issue and
52 out-of-order execution, although it is strongly recommended to add
53 predication capability into all SIMD backend units.
54
55 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
56 source and destination have been "tagged" as Vectors, is simply:
57
58     for i = 0 to VL-1:
59         GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
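To make the loop concrete, here is a minimal Python sketch of the same behaviour (the flat `gpr` list, the register numbers and the VL value are all invented purely for illustration, not part of the specification):

    # Minimal model of the vectorised ADD: a flat list stands in for the
    # GPR file, and RT/RA/RB are starting register numbers.
    def sv_add(gpr, RT, RA, RB, VL):
        for i in range(VL):
            gpr[RT + i] = gpr[RA + i] + gpr[RB + i]

    gpr = [0] * 32
    gpr[8:12] = [1, 2, 3, 4]        # r8-r11: first vector source
    gpr[16:20] = [10, 20, 30, 40]   # r16-r19: second vector source
    sv_add(gpr, RT=4, RA=8, RB=16, VL=4)
    print(gpr[4:8])                 # [11, 22, 33, 44]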
60
61 At its heart, SimpleV really is this simple. On top of this fundamental
62 basis further refinements can be added which build up towards an extremely
63 powerful Vector augmentation system, with very little in the way of
64 additional opcodes required: simply external "context".
65
66 RISC-V RVV as of version 0.9 is over 180 instructions (more than the
67 rest of RV64G combined). Over 95% of that functionality is added to
68 OpenPOWER v3.0B, by SimpleV augmentation, with around 5 to 8 instructions.
69
70 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
71 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
72 based on SIMD design principles, adds somewhere in the region of 600 more.
73 SimpleV again provides over 95% of VSX functionality, simply by augmenting
74 the *Scalar* OpenPOWER ISA, and in the process providing features such
75 as predication, which VSX is entirely missing.
76
77 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
78 types of register files: Scalar and Vector is the minimum. AVX512
79 even provides a mini mask regfile, followed by explicit instructions
80 that handle operations on each of them *and map between all of them*.
81 SV, by contrast, not only uses the existing scalar regfiles (including CRs),
82 but because operations exist within OpenPOWER to cover interactions
83 between the scalar regfiles (`mfcr`, `fcvt`) there is very little that
84 needs to be added.
85
86 In fairness to both VSX and RVV, there are things that are not provided
87 by SimpleV:
88
89 * 128 bit or above arithmetic and other operations
90 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
91 * register files above 128 entries
92 * Vector lengths over 64
93 * Unit-strided LD/ST and other comprehensive memory operations
94 (struct-based LD/ST from RVV for example)
95 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
96
97 These limitations are not insurmountable and, over time, may well be
98 addressed in future revisions of SV.
99
100 The rest of this document builds on the above simple loop to add:
101
102 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
103 (of all register files: Integer, FP *and CRs*)
104 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
105 * Predication masks (essential for parallel if/else constructs)
106 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
107 * Compacted operations into registers (normally only provided by SIMD)
108 * Fail-on-first (introduced in ARM SVE2)
109 * A new concept: Data-dependent fail-first
110 * Condition-Register based *post-result* predication (also new)
111 * A completely new concept: "Twin Predication"
112 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
113
114 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
115 "wrapping context", similar to how v3.1B 64 Prefixes work.
116
117 # Adding Scalar / Vector
118
119 The first augmentation to the simple loop is to add the option for each
120 source and destination to be independently either scalar or vector. As an
121 FSM this is where our "simple" loop gets its first complexity.
122
123     function op_add(RT, RA, RB) # add not VADD!
124        int id=0, irs1=0, irs2=0;
125        for i = 0 to VL-1:
126            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
127            if (!RT.isvec) break;
128            if (RT.isvec) { id += 1; }
129            if (RA.isvec) { irs1 += 1; }
130            if (RB.isvec) { irs2 += 1; }
131
132 With some walkthroughs it is clear that the loop exits immediately
133 after the first scalar destination result is written, and that when the
134 destination is a Vector the loop proceeds to fill up the register file,
135 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
136 registers will, independently, either remain pointing at `RA` or `RB`
137 respectively, or, if marked as Vectors, will march incrementally in
138 lockstep, producing element results along the way, as the destination
139 also progresses through elements.
140
141 In this way all the eight permutations of Scalar and Vector behaviour
142 are covered, although without predication the scalar-destination ones are
143 reduced in usefulness. It does however clearly illustrate the principle.
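As an illustration of one of those permutations, here is a Python sketch (not part of the specification: the `(regnum, isvec)` tuples, register numbers and values are invented) showing one scalar source combined with a vector source into a vector destination:

    # Sketch: each operand is a (regnum, isvec) pair.  A scalar operand's
    # index stays put; a vector operand's index advances every element.
    def sv_add(gpr, RT, RA, RB, VL):
        id = irs1 = irs2 = 0
        for i in range(VL):
            gpr[RT[0] + id] = gpr[RA[0] + irs1] + gpr[RB[0] + irs2]
            if not RT[1]:
                break               # scalar destination: one result only
            id += 1                 # destination is a vector: advance
            if RA[1]: irs1 += 1
            if RB[1]: irs2 += 1

    gpr = [0] * 32
    gpr[8] = 5                      # scalar source at r8
    gpr[16:20] = [1, 2, 3, 4]       # vector source at r16-r19
    sv_add(gpr, RT=(4, True), RA=(8, False), RB=(16, True), VL=4)
    print(gpr[4:8])                 # [6, 7, 8, 9]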
144
145 Note in particular: there is no separate Scalar add instruction and
146 separate Vector instruction and separate Scalar-Vector instruction, *and
147 there is no separate Vector register file*: it's all the same instruction,
148 on the standard register file, just with a loop. Scalar happens to set
149 that loop size to one.
150
151 # Adding single predication
152
153 The next step is to add a single predicate mask. This is where it gets
154 interesting. Predicate masks are a bitvector, each bit specifying, in
155 order, whether the element operation is to be skipped ("masked out")
156 or allowed. If no predicate is supplied, the mask is implicitly all 1s,
157 meaning that every element operation is carried out.
158
159     function op_add(RT, RA, RB) # add not VADD!
160        int id=0, irs1=0, irs2=0;
161        predval = get_pred_val(FALSE, RT);
162        for i = 0 to VL-1:
163            if (predval & 1<<i) # predication bit test
164               ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
165               if (!RT.isvec) break;
166            if (RT.isvec) { id += 1; }
167            if (RA.isvec) { irs1 += 1; }
168            if (RB.isvec) { irs2 += 1; }
169
170 The key modification is to skip the creation and storage of the result
171 if the relevant predicate mask bit is clear, but *not the progression
172 through the registers*.
173
174 A particularly interesting case is if the destination is scalar, and the
175 first few bits of the predicate are zero. The loop proceeds to increment
176 the Vector *source* registers until the first nonzero predicate bit is
177 found, whereupon a single result is computed, and *then* the loop exits.
178 This therefore uses the predicate to perform Vector source indexing.
179 This case was not possible without the predicate mask.
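A small Python sketch of that scalar-destination case (register numbers, the mask and VL are again invented for illustration) shows the predicate acting as a Vector source index:

    # Sketch of the "predicate as Vector source index" case: a scalar
    # destination plus a predicate whose first set bit is element 2.
    def sv_add_predicated(gpr, RT, RA, RB, VL, predval):
        id = irs1 = irs2 = 0
        for i in range(VL):
            if predval & (1 << i):
                gpr[RT[0] + id] = gpr[RA[0] + irs1] + gpr[RB[0] + irs2]
                if not RT[1]:
                    break           # scalar destination written: stop
            if RT[1]: id += 1
            if RA[1]: irs1 += 1
            if RB[1]: irs2 += 1

    gpr = [0] * 32
    gpr[8:12] = [100, 200, 300, 400]   # vector source at r8
    gpr[16] = 1                        # scalar source at r16
    sv_add_predicated(gpr, RT=(4, False), RA=(8, True), RB=(16, False),
                      VL=4, predval=0b0100)
    print(gpr[4])                      # 301: element 2 of the vector, plus r16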
180
181 If all three registers are marked as Vector then the "traditional"
182 predicated Vector behaviour is provided. Yet, just as before, all other
183 options are still provided, right the way back to the pure-scalar case,
184 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
185
186 Single Predication therefore provides several modes traditionally seen
187 in Vector ISAs:
188
189 * VINSERT: the predicate may be set as a single bit, the sources are scalar and the destination a vector.
190 * VSPLAT (result broadcasting) is provided by making the sources scalar and the destination a vector, and having no predicate set or having multiple bits set.
191 * VSELECT is provided by setting up (at least one of) the sources as a vector, using a single bit in the predicate, and the destination as a scalar.
192
193 # Predicate "zeroing" mode
194
195 Sometimes with predication it is OK to leave the masked-out element
196 alone (not modify the result); however, sometimes it is better to zero the
197 masked-out elements. Zeroing can be combined with bit-wise ORing to build
198 up vectors from multiple predicate patterns: achieving the same combination
199 with non-zeroing involves more mv operations and predicate mask operations.
200 Our pseudocode therefore ends up as follows, to take the enhancement
201 into account:
202
203     function op_add(RT, RA, RB) # add not VADD!
204        int id=0, irs1=0, irs2=0;
205        predval = get_pred_val(FALSE, RT);
206        for i = 0 to VL-1:
207            if (predval & 1<<i) # predication bit test
208               ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
209               if (!RT.isvec) break;
210            else if zeroing: # predicate failed
211               ireg[RT+id] = 0 # set element to zero
212            if (RT.isvec) { id += 1; }
213            if (RA.isvec) { irs1 += 1; }
214            if (RB.isvec) { irs2 += 1; }
215
216 Many Vector systems have either zeroing or non-zeroing; they do
217 not have both. This is because they usually have separate Vector
218 register files. However SV sits on top of standard register files and
219 consequently there are advantages to both, so both are provided.
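As a rough illustration of why zeroing is useful, the following Python sketch (values, masks and the element-wise add are all invented) builds a single vector from two complementary predicate patterns using zeroing plus a bitwise OR, something that with non-zeroing would need extra mv and mask operations:

    # Two predicated adds with complementary masks in zeroing mode,
    # merged with a plain bitwise OR: the zeroed elements of each result
    # contribute nothing, so the two patterns combine cleanly.
    def predicated_add(dst, a, b, mask, zeroing):
        for i in range(len(dst)):
            if mask & (1 << i):
                dst[i] = a[i] + b[i]
            elif zeroing:
                dst[i] = 0          # masked-out element set to zero

    a = [1, 2, 3, 4]
    b = [10, 20, 30, 40]
    c = [5, 6, 7, 8]
    r1 = [0] * 4
    r2 = [0] * 4
    predicated_add(r1, a, b, mask=0b0011, zeroing=True)   # [11, 22, 0, 0]
    predicated_add(r2, a, c, mask=0b1100, zeroing=True)   # [0, 0, 10, 12]
    print([x | y for x, y in zip(r1, r2)])                # [11, 22, 10, 12]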
220
221 # Element Width overrides
222
223 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
224 bit integer operations, and IEEE754 FP32 and 64. Often also included
225 is FP16 and more recently BF16. The *really* good Vector ISAs have
226 variable-width vectors right down to the bit level, and as high as 1024 bit
227 arithmetic per element, as well as IEEE754 FP128.
228
229 SV has an "override" system that *changes* the bitwidth of operations
230 that were intended by the original scalar ISA designers to have (for
231 example) 64 bit operations (only). The override widths are 8, 16 and
232 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
233 the future).
234
235 This presents a particularly intriguing conundrum given that the OpenPOWER
236 Scalar ISA was never designed with, for example, 8 bit operations in mind,
237 let alone Vectors of 8 bit elements.
238
239 The solution comes in terms of rethinking the definition of a Register
240 File. The typical regfile may be considered to be a multi-ported SRAM
241 block, 64 bits wide and usually 32 entries deep, to give 32 64 bit
242 registers. Conceptually, to get our variable element width vectors,
243 we may think of the regfile as instead being the following C-based data
244 structure:
245
246     typedef union {
247         uint8_t  actual_bytes[8]; // underlying 64-bit storage
248         uint8_t  b[0]; // overlapping array of 8-bit elements
249         uint16_t s[0]; // ... of 16-bit elements
250         uint32_t i[0]; // ... of 32-bit elements
251         uint64_t l[0]; // default OpenPOWER ISA uses this
252     } reg_t;
253 
254     reg_t int_regfile[128]; // SV extends to 128 regs
255
256 Then, our simple loop, instead of accessing the array of regfile entries
257 with a computed index, would access the appropriate element of the
258 appropriate type. Thus we have a series of overlapping conceptual arrays
259 that each start at what is traditionally thought of as "a register".
260 It then helps if we have a couple of routines:
261
262     get_polymorphed_reg(reg, bitwidth, offset):
263         reg_t res = 0;
264         if (!reg.isvec): # scalar
265             offset = 0
266         if bitwidth == 8:
267             res.b = int_regfile[reg].b[offset]
268         elif bitwidth == 16:
269             res.s = int_regfile[reg].s[offset]
270         elif bitwidth == 32:
271             res.i = int_regfile[reg].i[offset]
272         elif bitwidth == default: # 64
273             res.l = int_regfile[reg].l[offset]
274         return res
275 
276     set_polymorphed_reg(reg, bitwidth, offset, val):
277         if (!reg.isvec): # scalar
278             offset = 0
279         if bitwidth == 8:
280             int_regfile[reg].b[offset] = val
281         elif bitwidth == 16:
282             int_regfile[reg].s[offset] = val
283         elif bitwidth == 32:
284             int_regfile[reg].i[offset] = val
285         elif bitwidth == default: # 64
286             int_regfile[reg].l[offset] = val
287
288 These basically provide a convenient parameterised way to access the
289 register file, at an arbitrary vector element offset and an arbitrary
290 element width. Our first simple loop thus becomes:
291
292     for i = 0 to VL-1:
293         src1 = get_polymorphed_reg(RA, srcwid, i)
294         src2 = get_polymorphed_reg(RB, srcwid, i)
295         result = src1 + src2 # actual add here
296         set_polymorphed_reg(RT, destwid, i, result)
297
298 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
299 register will contain three 16 bit addition results, and the upper 16
300 bits will be *unaltered*.
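The following Python sketch models that same point with the register file as a flat byte store (the little-endian element layout, register numbers and values are assumptions of the sketch, not statements about the specification):

    import struct

    # The register file modelled as a flat byte store: 8 notional 64-bit
    # registers = 64 bytes.  Little-endian layout is assumed here only.
    regfile = bytearray(64)

    def set_elem(reg, elwidth, offset, val):
        fmt = {8: '<B', 16: '<H', 32: '<I', 64: '<Q'}[elwidth]
        struct.pack_into(fmt, regfile, reg * 8 + offset * (elwidth // 8), val)

    def get_elem(reg, elwidth, offset):
        fmt = {8: '<B', 16: '<H', 32: '<I', 64: '<Q'}[elwidth]
        return struct.unpack_from(fmt, regfile, reg * 8 + offset * (elwidth // 8))[0]

    # elwidth=16, VL=3: three 16-bit results land in the first 48 bits of
    # r2; the top 16 bits of r2 are left exactly as they were.
    set_elem(2, 64, 0, 0xFFFF_0000_0000_0000)   # pre-existing contents of r2
    for i in range(3):
        set_elem(0, 16, i, 0x0101 * (i + 1))    # r0: 16-bit source elements
        set_elem(1, 16, i, 0x1000)              # r1: 16-bit source elements
    for i in range(3):                          # the vectorised 16-bit add
        set_elem(2, 16, i, (get_elem(0, 16, i) + get_elem(1, 16, i)) & 0xFFFF)
    print(hex(get_elem(2, 64, 0)))              # 0xffff130312021101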
301
302 Note that things such as zero/sign-extension (and predication) have
303 been left out to illustrate the elwidth concept. Also note that it turns
304 out to be important to perform the operation at the maximum bitwidth -
305 `max(srcwid, destwid)` - such that any truncation, rounding errors or
306 other artefacts may all be ironed out. This turns out to be important
307 when applying Saturation for Audio DSP workloads.
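A tiny Python sketch (values invented) of why the wider intermediate matters for saturation:

    # An 8-bit saturating add computed at a wider width keeps the true sum
    # around long enough to clamp it correctly.
    def sat_add_8(a, b):
        full = a + b                    # performed at wider precision
        return max(0, min(255, full))   # then clamped to the 8-bit range

    print(sat_add_8(200, 100))   # 255, not (300 & 0xFF) == 44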
308
309 Other than that, element width overrides, which can be applied to *either*
310 source or destination or both, are pretty straightforward, conceptually.
311 The details, for hardware engineers, involve byte-level write-enable
312 lines, which is exactly what is used on SRAMs anyway. Compiler writers
313 have to alter Register Allocation Tables to byte-level granularity.
314
315 One critical thing to note: upper parts of the underlying 64 bit
316 register are *not zero'd out* by a write involving a non-aligned Vector
317 Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
318 of the destination. The only situation where a full overwrite occurs
319 is with "default" (64 bit) behaviour. It is therefore extremely important
320 to consider the register file as a byte-level store, not a 64-bit-level store.
321
322 # Quick recap so far
323
324 The above functionality pretty much covers around 85% of Vector ISA needs.
325 Predication is provided so that parallel if/then/else constructs can
326 be performed: critical given that sequential if/then statements and
327 branches simply do not translate successfully to Vector workloads.
328 VSPLAT capability is provided which is approximately 20% of all GPU
329 workload operations. Also covered, with elwidth overriding, are the
330 smaller arithmetic operations that caused ISAs developed from the
331 late 80s onwards to get themselves into a tizzy when adding "Multimedia"
332 acceleration aka "SIMD" instructions.
333
334 Experienced Vector ISA readers will however have noted that VCOMPRESS
335 and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability
336 and VGATHER and VSCATTER. Compress and Expand are covered by Twin
337 Predication, and yet to be covered are fail-on-first, CR-based result
338 predication, and Subvectors and Swizzle.
339
340 ## SUBVL <a name="subvl"></a>
341
342 Adding in support for SUBVL is a matter of adding an extra inner
343 for-loop, where the register src and dest offsets still advance inside the
344 inner part. Predication is still taken from the VL index; however it
345 is applied to the whole subvector:
346
347     function op_add(RT, RA, RB) # add not VADD!
348        int id=0, irs1=0, irs2=0;
349        predval = get_pred_val(FALSE, RT);
350        for i = 0 to VL-1:
351            if (predval & 1<<i) # predication uses intregs
352               for (s = 0; s < SUBVL; s++)
353                   sd = id*SUBVL + s
354                   srs1 = irs1*SUBVL + s
355                   srs2 = irs2*SUBVL + s
356                   ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
357               if (!RT.isvec) break;
358            if (RT.isvec) { id += 1; }
359            if (RA.isvec) { irs1 += 1; }
360            if (RB.isvec) { irs2 += 1; }
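A Python sketch of the same idea (simplified by tagging all three operands as vectors; register numbers, VL, SUBVL and values are invented) shows two vec3 additions under a two-bit predicate:

    # SUBVL sketch: each predicate bit covers a whole subvector (vec3).
    # All three operands are treated as vectors to keep the sketch short.
    def sv_add_subvl(gpr, RT, RA, RB, VL, SUBVL, predval):
        for i in range(VL):
            if predval & (1 << i):
                for s in range(SUBVL):
                    gpr[RT + i * SUBVL + s] = (gpr[RA + i * SUBVL + s] +
                                               gpr[RB + i * SUBVL + s])

    gpr = [0] * 32
    gpr[8:14] = [1, 2, 3, 4, 5, 6]            # two vec3 sources at r8
    gpr[16:22] = [10, 10, 10, 20, 20, 20]     # two vec3 sources at r16
    sv_add_subvl(gpr, RT=0, RA=8, RB=16, VL=2, SUBVL=3, predval=0b11)
    print(gpr[0:6])                           # [11, 12, 13, 24, 25, 26]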
361
362 # Swizzle <a name="swizzle"></a>
363
364 Swizzle is particularly important for 3D work. It allows in-place
365 reordering of XYZW, ARGB etc. and access of sub-portions of the same in
366 arbitrary order *without* requiring time-consuming scalar mv instructions
367 (scalar due to the convoluted offsets). With somewhere around 10% of
368 operations in 3D Shaders involving swizzle this is a huge saving and
369 reduces pressure on register files.
370
371 In SV, given the percentage of operations that also involve initialisation
372 of subvector elements to 0.0 or 1.0, the decision was made to include
373 those constants as well:
374
375     swizzle = get_swizzle_immed() # 12 bits
376     for (s = 0; s < SUBVL; s++)
377         remap = (swizzle >> 3*s) & 0b111
378         if remap < 4:
379             sm = id*SUBVL + remap
380             ireg[RT+s] <= ireg[RA+sm]
381         elif remap == 4:
382             ireg[RT+s] <= 0.0
383         elif remap == 5:
384             ireg[RT+s] <= 1.0
385
386 Note that a value of 6 (and 7) will leave the target subvector element
387 untouched. This is equivalent to a predicate mask which is built-in,
388 in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
389 rare in that it is one of the few instructions needed to be added that
390 are never going to be part of a Scalar ISA. Even in High Performance
391 Compute workloads it is unusual: it is only because SV is targeted at
392 3D and Video that it is being considered.
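As a purely illustrative Python sketch of the pseudocode above (the `encode_swizzle` helper and the selector names are invented, not part of any specification), here is a 12-bit swizzle immediate being built and applied to a vec4, including the 0.0/1.0 constants and the "leave untouched" selector:

    # Four 3-bit selector fields, lowest field first: 0-3 pick X/Y/Z/W,
    # 4 and 5 write the constants 0.0 and 1.0, 6/7 leave the element alone.
    X, Y, Z, W, ZERO, ONE, SKIP = 0, 1, 2, 3, 4, 5, 6

    def encode_swizzle(*sel):
        imm = 0
        for s, v in enumerate(sel):
            imm |= v << (3 * s)
        return imm

    def apply_swizzle(dst, src, imm, subvl=4):
        for s in range(subvl):
            remap = (imm >> (3 * s)) & 0b111
            if remap < 4:
                dst[s] = src[remap]
            elif remap == ZERO:
                dst[s] = 0.0
            elif remap == ONE:
                dst[s] = 1.0
            # 6 and 7: leave dst[s] untouched

    src = [1.5, 2.5, 3.5, 4.5]            # X, Y, Z, W
    dst = [9.0, 9.0, 9.0, 9.0]
    apply_swizzle(dst, src, encode_swizzle(W, Z, ONE, SKIP))
    print(dst)                            # [4.5, 3.5, 1.0, 9.0]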
393
394 Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
395 sufficiently unusual, and the immediate opcode space required so large,
396 that the tradeoff balance was decided in SV to only add mv.swizzle.
397
398 # Twin Predication
399
400 Twin Predication is cool. Essentially it is a back-to-back
401 VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
402 part is covered by the source predicate and the expand part by the
403 destination predicate. Of course, if either of those is all 1s then
404 the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.
405
406     function op(RT, RS):
407        ps = get_pred_val(FALSE, RS); # predication on src
408        pd = get_pred_val(FALSE, RT); # ... AND on dest
409        for (int i = 0, int j = 0; i < VL && j < VL;):
410            if (RS.isvec) while (!(ps & 1<<i)) i++;
411            if (RT.isvec) while (!(pd & 1<<j)) j++;
412            ireg[RT+j] = SCALAR_OPERATION_ON(ireg[RS+i])
413            if (RS.isvec) i++;
414            if (RT.isvec) j++; else break
415
416 Here's the interesting part: given the fact that SV is a "context"
417 extension, the above pattern can be applied to a lot more than just MV,
418 which is normally only what VCOMPRESS and VEXPAND do in traditional
419 Vector ISAs: move registers. Twin Predication can be applied to `extsw`
420 or `fcvt`, LD/ST operations and even `rlwinm` and other operations
421 taking a single source and immediate(s) such as `addi`. All of these
422 are termed single-source, single-destination (LDST Address-generation,
423 or AGEN, is a single source).
424
425 It also turns out that by using a single bit set in the source or
426 destination, *all* the sequential ordered standard patterns of Vector
427 ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.
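Here is a Python sketch of Twin Predication acting as VCOMPRESS (a sparse source mask with an all-ones destination mask); it simplifies the pseudocode above by assuming both operands are Vectors, and the register numbers and masks are invented:

    # Twin-predicated MV sketch: skip masked-out source and destination
    # elements independently (boundary checks omitted for brevity).
    def twin_pred_mv(gpr, RT, RS, VL, ps, pd):
        i = j = 0
        while i < VL and j < VL:
            while not (ps & (1 << i)): i += 1     # skip masked-out sources
            while not (pd & (1 << j)): j += 1     # skip masked-out dests
            gpr[RT + j] = gpr[RS + i]
            i += 1
            j += 1

    gpr = [0] * 32
    gpr[8:13] = [10, 20, 30, 40, 50]
    # source mask 0b10101 selects elements 0, 2, 4; dest mask selects all:
    # the three selected elements are packed contiguously (VCOMPRESS).
    twin_pred_mv(gpr, RT=0, RS=8, VL=5, ps=0b10101, pd=0b11111)
    print(gpr[0:3])   # [10, 30, 50]

Swapping the two masks over (dense source, sparse destination) gives the VEXPAND pattern instead.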
428
429 The only one missing from the list here, because it is non-sequential,
430 is VGATHER: moving registers by specifying a vector of register indices
431 (`regs[rd] = regs[regs[rs]]` in a loop). This one is tricky because it
432 typically does not exist in standard scalar ISAs. If it did it would
433 be called [[sv/mv.x]].
434
435 # CR predicate result analysis
436
437 OpenPOWER has Condition Registers. These store an analysis of the result
438 of an operation, testing it for being greater than, less than, or equal to zero.
439 What if a test could be done, similar to branch BO testing, which hooked
440 into the predication system?
441
442     for i in range(VL):
443         # predication test, skip all masked out elements.
444         if predicate_masked_out(i): continue # skip
445         result = op(iregs[RA+i], iregs[RB+i])
446         CRnew = analyse(result) # calculates eq/lt/gt
447         # Rc=1 always stores the CR
448         if Rc=1: crregs[offs+i] = CRnew
449         # now test CR, similar to branch
450         if CRnew[BO[0:1]] == BO[2]:
451             # result optionally stored but CR always is
452             iregs[RT+i] = result
453
454 Note that whilst the Vector of CRs is always written to the CR regfile,
455 only those result elements that pass the BO test get written to the
456 integer regfile.
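A Python sketch of the idea (with the BO test simplified to "greater than zero", and register numbers and values invented) makes the behaviour visible:

    # Post-result (CR-based) predication: every element's CR field is
    # computed, but only results passing the test reach the destination.
    def sv_sub_postresult(iregs, crregs, RT, RA, RB, VL):
        for i in range(VL):
            result = iregs[RA + i] - iregs[RB + i]
            cr = {'lt': result < 0, 'gt': result > 0, 'eq': result == 0}
            crregs[i] = cr                  # Vector of CRs always written
            if cr['gt']:                    # simplified BO test
                iregs[RT + i] = result      # result written only on pass

    iregs = [0] * 32
    crregs = [None] * 8
    iregs[8:12] = [5, 2, 9, 1]
    iregs[16:20] = [3, 7, 4, 1]
    sv_sub_postresult(iregs, crregs, RT=0, RA=8, RB=16, VL=4)
    print(iregs[0:4])                       # [2, 0, 5, 0]: failures untouched
    print([c['gt'] for c in crregs[:4]])    # [True, False, True, False]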
457
458 Here, for example, if FP overflow occurred and CR testing was carried
459 out for that condition, all valid results would be stored but invalid
460 ones would not; in addition, the Vector of CRs would contain indicators of
461 which elements failed. With the invalid results simply not being written,
462 this could save resources (fewer register file writes).
463
464 Savings are also expected in some types of operations: because the
465 predicate mask is effectively ANDed with the post-result analysis, as a
466 secondary type of predication, operations which would otherwise need a
467 second predicate calculation followed by a predicate mask AND operation
468 can be expressed directly.
469
470 Note, hilariously, that Vectorised Condition Register Operations (crand, cror) may
471 also have post-result analysis applied to them. With Vectors of CRs being
472 utilised *for* predication, possibilities for compact and elegant code
473 begin to emerge from this innocuous-looking addition to SV.
474
475 # Exception-based Fail-on-first
476
477 One of the major issues with Vectorised LD/ST operations is when a batch of LDs cross a page-fault boundary. With considerable resources being taken up with in-flight data, a large Vector LD being cancelled or unable to roll back is either a detriment to performance or can cause data corruption.
478
479 What if, then, rather than cancel an entire Vector LD because the last operation would cause a page fault, instead truncate the Vector to the last successful element?
480
481 This is called "fail-on-first". Here is strncpy, illustrated from RVV:
482
483     strncpy:
484         c.mv a3, a0               # Copy dst
485     loop:
486         setvli x0, a2, vint8      # Vectors of bytes.
487         vlbff.v v1, (a1)          # Get src bytes
488         vseq.vi v0, v1, 0         # Flag zero bytes
489         vmfirst a4, v0            # Zero found?
490         vmsif.v v0, v0            # Set mask up to and including zero byte.
491         vsb.v v1, (a3), v0.t      # Write out bytes
492         c.bgez a4, exit           # Done
493         csrr t1, vl               # Get number of bytes fetched
494         c.add a1, a1, t1          # Bump src pointer
495         c.sub a2, a2, t1          # Decrement count.
496         c.add a3, a3, t1          # Bump dst pointer
497         c.bnez a2, loop           # Anymore?
498 
499     exit:
500         c.ret
501
502 Vector Length VL is truncated inherently at the first page faulting byte-level LD. Otherwise, with more powerful hardware the number of elements LOADed from memory could be dozens to hundreds or greater (memory bandwidth permitting).
503
504 With VL truncated the analysis looking for the zero byte and the subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the knowledge that every byte loaded in the Vector is valid. Implementors are even permitted to "adapt" VL, truncating it early so that, for example, subsequent iterations of loops will have LD/STs on aligned boundaries.
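A toy Python model of LD fail-on-first (the dictionary-based "memory", page size and addresses are invented for illustration) shows VL being truncated at the first element that would fault:

    # Toy fail-on-first LD: VL is truncated at the first element whose
    # load would fault, so every loaded byte is known to be valid.
    memory = {addr: addr & 0xFF for addr in range(0, 16)}   # one 16-byte page

    def ld_fail_on_first(base, VL, elwidth_bytes=1):
        loaded = []
        for i in range(VL):
            addr = base + i * elwidth_bytes
            if addr not in memory:      # would fault: truncate VL here
                break
            loaded.append(memory[addr])
        return loaded, len(loaded)      # len(loaded) is the truncated VL

    data, vl = ld_fail_on_first(base=10, VL=8)
    print(vl, data)   # 6 [10, 11, 12, 13, 14, 15]: truncated at the page end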
505
506 SIMD strncpy hand-written assembly routines are, to be blunt about it, a total nightmare. 240 instructions is not uncommon, and the worst thing about them is that they are unable to cope with detection of a page fault condition.
507