1 # SV Overview
2
3 **SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.
4
5 This document provides an overview and introduction as to why SV (a
6 [[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works.
7
8 Links:
9
10 * [[discussion]] and
11 [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556)
12 feel free to add comments, questions.
13 * [[SV|sv]]
14 * [[sv/svp64]]
15
16 Contents:
17
18 [[!toc]]
19
20 # Introduction: SIMD and Cray Vectors
21
22 SIMD, the primary method for easy parallelism of the
23 past 30 years in Computer Architectures, is
24 [known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
25 SIMD provides a seductive simplicity that is easy to implement in
26 hardware. With each doubling in width it promises increases in raw
27 performance without the complexity of either multi-issue or out-of-order
28 execution.
29
30 Unfortunately, even with predication added, SIMD only becomes more and
31 more problematic with each power of two SIMD width increase introduced
32 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
33 spirals out of control in the ISA, detrimentally impacting the hardware,
34 the software, the compilers and the testing and compliance. Here are
35 the typical dimensions that result in such massive proliferation:
36
37 * Operation (add, mul)
38 * bitwidth (8, 16, 32, 64, 128)
39 * Conversion between bitwidths (FP16-FP32-64)
40 * Signed/unsigned
41 * HI/LO swizzle (Audio L/R channels)
42 * Saturation (Clamping at max range)
43
These dimensions, multiplied together, produce explicit opcodes numbering
in the thousands on, for example, the ARC Video/DSP cores.
46
Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, with exceptionally high data throughput
per instruction (one *or more* orders of magnitude higher than SIMD), and
none of the alarmingly high setup and cleanup code. At the hardware level
the microarchitecture may execute anywhere from one element right the way
through to tens of thousands at a time, yet the executable remains exactly
the same and the ISA remains clear, true to the RISC paradigm, and clean.
Unlike in SIMD, powers-of-two limitations are not involved in the ISA
or in the assembly code.
56
57 SimpleV takes the Cray style Vector principle and applies it in the
58 abstract to a Scalar ISA, in the process allowing register file size
59 increases using "tagging" (similar to how x86 originally extended
60 registers from 32 to 64 bit).
61
62 ## SV
63
64 The fundamentals are:
65
66 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
67 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
68 (where VL is Vector Length)
69 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
70 just as is expected of instructions ordered by the PC.
71 * Some registers may be "tagged" as Vectors
* During the loop, "Vector"-tagged register numbers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*
75 * Once the loop is completed *only then* is the Program Counter
76 allowed to move to the next instruction.
77
78 Hardware (and simulator) implementors are free and clear to implement this
79 as literally a for-loop, sitting in between instruction decode and issue.
80 Higher performance systems may deploy SIMD backends, multi-issue and
81 out-of-order execution, although it is strongly recommended to add
82 predication capability directly into SIMD backend units.
83
84 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
85 source and destination have been "tagged" as Vectors, is simply:
86
87 for i = 0 to VL-1:
88 GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
89
90 At its heart, SimpleV really is this simple. On top of this fundamental
91 basis further refinements can be added which build up towards an extremely
92 powerful Vector augmentation system, with very little in the way of
93 additional opcodes required: simply external "context".
94
x86 was originally only 80 instructions: even prior to AVX512, over 1,300
additional instructions had been added, almost all of them SIMD.
97
RISC-V RVV as of version 0.9 comprises over 188 instructions (more than the
rest of RV64GC combined: 80 for RV64G and 27 for C). Over 95% of that
functionality is added to OpenPOWER v3.0B by SimpleV augmentation,
with around 5 to 8 instructions.
102
103 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
104 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
105 based on SIMD design principles, adds somewhere in the region of 600 more.
106 SimpleV again provides over 95% of VSX functionality, simply by augmenting
107 the *Scalar* OpenPOWER ISA, and in the process providing features such
108 as predication, which VSX is entirely missing.
109
AVX512, SVE2, VSX, RVV: all of these systems have to provide different
types of register files, Scalar and Vector being the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV, by contrast, not only uses the existing scalar regfiles (including
CRs), but, because operations already exist within OpenPOWER to cover
interactions between the scalar regfiles (`mfcr`, `fcvt`), there is very
little that needs to be added.
118
119 In fairness to both VSX and RVV, there are things that are not provided
120 by SimpleV:
121
122 * 128 bit or above arithmetic and other operations
123 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
124 * register files above 128 entries
125 * Vector lengths over 64
126 * Unit-strided LD/ST and other comprehensive memory operations
127 (struct-based LD/ST from RVV for example)
128 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
129
130 These limitations, which stem inherently from the adaptation process of
131 starting from a Scalar ISA, are not insurmountable. Over time, they may
132 well be addressed in future revisions of SV.
133
134 The rest of this document builds on the above simple loop to add:
135
136 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
137 (of all register files: Integer, FP *and CRs*)
138 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
139 * Predication masks (essential for parallel if/else constructs)
140 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
141 * Compacted operations into registers (normally only provided by SIMD)
142 * Fail-on-first (introduced in ARM SVE2)
143 * A new concept: Data-dependent fail-first
144 * Condition-Register based *post-result* predication (also new)
145 * A completely new concept: "Twin Predication"
146 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
147
All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
"wrapping context", similar to how v3.1B 64-bit Prefixes work.
150
151 # Adding Scalar / Vector
152
The first augmentation to the simple loop is to add the option for each
source and destination to be either scalar or vector. As an FSM
this is where our "simple" loop gains its first complexity.
156
157 function op_add(RT, RA, RB) # add not VADD!
158 int id=0, irs1=0, irs2=0;
159 for i = 0 to VL-1:
160 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
161 if (!RT.isvec) break;
162 if (RT.isvec) { id += 1; }
163 if (RA.isvec) { irs1 += 1; }
164 if (RB.isvec) { irs2 += 1; }
165
166 This could have been written out as eight separate cases: one each for
167 when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
168 when optimally combined, result in the pseudocode above.
169
170 With some walkthroughs it is clear that the loop exits immediately
171 after the first scalar destination result is written, and that when the
172 destination is a Vector the loop proceeds to fill up the register file,
173 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
174 registers will, independently, either remain pointing at `RB` or `RA`
175 respectively, or, if marked as Vectors, will march incrementally in
176 lockstep, producing element results along the way, as the destination
177 also progresses through elements.
178
179 In this way all the eight permutations of Scalar and Vector behaviour
180 are covered, although without predication the scalar-destination ones are
181 reduced in usefulness. It does however clearly illustrate the principle.
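
For readers who prefer something executable, below is a minimal Python
sketch of the same loop (illustrative only: the `isvec` "tags" are passed
in as explicit booleans here rather than coming from an SV context, and
the register numbers and values are made up):

    # minimal runnable sketch of the scalar/vector loop above
    def sv_add(regs, VL, RT, RA, RB, RT_vec, RA_vec, RB_vec):
        id = irs1 = irs2 = 0
        for i in range(VL):
            regs[RT + id] = regs[RA + irs1] + regs[RB + irs2]
            if not RT_vec: break           # scalar destination: exit
            if RT_vec: id += 1
            if RA_vec: irs1 += 1
            if RB_vec: irs2 += 1

    regs = list(range(32))                 # pretend GPRs: r0=0, r1=1, ...
    sv_add(regs, VL=4, RT=16, RA=4, RB=8,
           RT_vec=True, RA_vec=True, RB_vec=False)  # vector = vector + scalar
    print(regs[16:20])                     # [12, 13, 14, 15]

Passing `RT_vec=False` instead exits after the first element, i.e. it
behaves as a plain scalar `add`.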
182
183 Note in particular: there is no separate Scalar add instruction and
184 separate Vector instruction and separate Scalar-Vector instruction, *and
185 there is no separate Vector register file*: it's all the same instruction,
186 on the standard register file, just with a loop. Scalar happens to set
187 that loop size to one.
188
189 The important insight from the above is that, strictly speaking, Simple-V
190 is not really a Vectorisation scheme at all: it is more of a hardware
191 ISA "Compression scheme", allowing as it does for what would normally
192 require multiple sequential instructions to be replaced with just one.
193 This is where the rule that Program Order must be preserved in Sub-PC
194 execution derives from. However in other ways, which will emerge below,
195 the "tagging" concept presents an opportunity to include features
196 definitely not common outside of Vector ISAs, and in that regard it's
197 definitely a class of Vectorisation.
198
199 ## Register "tagging"
200
201 As an aside: in [[sv/svp64]] the encoding which allows SV to both extend
202 the range beyond r0-r31 and to determine whether it is a scalar or vector
203 is encoded in two to three bits, depending on the instruction.
204
205 The reason for using so few bits is because there are up to *four*
206 registers to mark in this way (`fma`, `isel`) which starts to be of
207 concern when there are only 24 available bits to specify the entire SV
208 Vectorisation Context. In fact, for a small subset of instructions it
209 is just not possible to tag every single register. Under these rare
210 circumstances a tag has to be shared between two registers.
211
212 Below is the pseudocode which expresses the relationship which is usually
213 applied to *every* register:
214
215 if extra3_mode:
216 spec = EXTRA3 # bit 2 s/v, 0-1 extends range
217 else:
218 spec = EXTRA2 << 1 # same as EXTRA3, shifted
219 if spec[2]: # vector
220 RA.isvec = True
221 return (RA << 2) | spec[0:1]
222 else: # scalar
223 RA.isvec = False
224 return (spec[0:1] << 5) | RA
225
226 Here we can see that the scalar registers are extended in the top bits,
227 whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
228 Condition Registers have a slightly different scheme, along the same
229 principle, which takes into account the fact that each CR may be bit-level
230 addressed by Condition Register operations.
231
232 Readers familiar with OpenPOWER will know of Rc=1 operations that create
233 an associated post-result "test", placing this test into an implicit
234 Condition Register. The original researchers who created the POWER ISA
235 chose CR0 for Integer, and CR1 for Floating Point. These *also become
236 Vectorised* - implicitly - if the associated destination register is
237 also Vectorised. This allows for some very interesting savings on
238 instruction count due to the very same CR Vectors being predication masks.
239
240 # Adding single predication
241
The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If no predicate is given, the mask defaults to all 1s:
every element operation is allowed.
247
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
258
259 The key modification is to skip the creation and storage of the result
260 if the relevant predicate mask bit is clear, but *not the progression
261 through the registers*.
262
263 A particularly interesting case is if the destination is scalar, and the
264 first few bits of the predicate are zero. The loop proceeds to increment
265 the Scalar *source* registers until the first nonzero predicate bit is
266 found, whereupon a single result is computed, and *then* the loop exits.
267 This therefore uses the predicate to perform Vector source indexing.
268 This case was not possible without the predicate mask.
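
A flattened walkthrough of that scalar-destination case, as a Python
sketch (predicate `0b0100`, scalar `RT`, Vector `RA` and `RB`; the
register numbers and values are purely illustrative):

    # scalar destination, vector sources, predicate 0b0100:
    # the loop marches the source indices past the masked-out elements,
    # computes a single result from element 2, then exits
    regs = list(range(32))
    predval = 0b0100
    irs1 = irs2 = 0
    for i in range(4):                       # VL = 4
        if predval & (1 << i):
            regs[16] = regs[4 + irs1] + regs[8 + irs2]   # RT=r16 (scalar)
            break                            # scalar destination: exit
        irs1 += 1                            # RA is a Vector: keep marching
        irs2 += 1                            # RB is a Vector: keep marching
    print(regs[16])                          # 16  (r6 + r10)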
269
270 If all three registers are marked as Vector then the "traditional"
271 predicated Vector behaviour is provided. Yet, just as before, all other
272 options are still provided, right the way back to the pure-scalar case,
273 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
274
275 Single Predication therefore provides several modes traditionally seen
276 in Vector ISAs:
277
278 * VINSERT: the predicate may be set as a single bit, the sources are
279 scalar and the destination a vector.
280 * VSPLAT (result broadcasting) is provided by making the sources scalar
281 and the destination a vector, and having no predicate set or having
282 multiple bits set.
* VSELECT is provided by setting up (at least one of) the sources as a
  vector, using a single bit in the predicate, and the destination as
  a scalar.
286
287 All of this capability and coverage without even adding one single actual
288 Vector opcode, let alone 180, 600 or 1,300!
289
290 # Predicate "zeroing" mode
291
Sometimes with predication it is ok to leave the masked-out element
alone (not modify the result); however, sometimes it is better to zero the
masked-out elements. Zeroing can be combined with bit-wise ORing to build
up vectors from multiple predicate patterns: achieving the same result
with non-zeroing requires more mv operations and predicate mask operations.
Our pseudocode therefore ends up as follows, to take the enhancement
into account:
299
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing: # predicate failed
                ireg[RT+id] = 0 # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
312
Many Vector systems either have zeroing or they have non-zeroing; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of standard register files and
consequently there are advantages to both, so both are provided.
317
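A tiny Python illustration of the difference, using a predicated
element-wise copy with made-up values:

    # predicate 0b0101: non-zeroing leaves masked-out destination elements
    # untouched, zeroing overwrites them with 0
    def pred_copy(dst, src, predval, VL, zeroing):
        for i in range(VL):
            if predval & (1 << i):
                dst[i] = src[i]
            elif zeroing:
                dst[i] = 0
        return dst

    print(pred_copy([9, 9, 9, 9], [1, 2, 3, 4], 0b0101, 4, zeroing=False))
    # [1, 9, 3, 9]
    print(pred_copy([9, 9, 9, 9], [1, 2, 3, 4], 0b0101, 4, zeroing=True))
    # [1, 0, 3, 0]
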
318 # Element Width overrides <a name="elwidths"></a>
319
320 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
321 bit integer operations, and IEEE754 FP32 and 64. Often also included
322 is FP16 and more recently BF16. The *really* good Vector ISAs have
323 variable-width vectors right down to bitlevel, and as high as 1024 bit
324 arithmetic per element, as well as IEEE754 FP128.
325
SV has an "override" system that *changes* the bitwidth of operations
that the original scalar ISA designers intended to be (for example)
64 bit only. The override widths are 8, 16 and
32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
the future).
331
This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with, for example, 8 bit operations in mind,
let alone Vectors of 8 bit elements.
335
The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. In C this would be:
340
341 typedef uint64_t reg_t;
342 reg_t int_regfile[32]; // standard scalar 32x 64bit
343
Conceptually, to get our variable element width vectors,
we may think of the regfile as instead being the following C-based data
structure, where all types uint16_t etc. are in little-endian order:
347
    #pragma pack(1)
    typedef union {
        uint8_t  actual_bytes[8];
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0]; // array of LE ordered uint16_t
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs
358
This means that Vector elements start from the location specified by a
64 bit "register" number, but that from that location onwards the elements
*overlap subsequent registers*.

Here is another way to view the same concept, bearing in mind that a LE
memory order is assumed:
365
366 uint8_t reg_sram[8*128];
367 uint8_t *actual_bytes = &reg_sram[RA*8];
368 if elwidth == 8:
369 uint8_t *b = (uint8_t*)actual_bytes;
370 b[idx] = result;
371 if elwidth == 16:
372 uint16_t *s = (uint16_t*)actual_bytes;
373 s[idx] = result;
374 if elwidth == 32:
375 uint32_t *i = (uint32_t*)actual_bytes;
376 i[idx] = result;
377 if elwidth == default:
378 uint64_t *l = (uint64_t*)actual_bytes;
379 l[idx] = result;
380
Starting with all zeros, setting `actual_bytes[3]` in any given `reg_t`
to 0x01 would mean that:

* b[0..2] = 0x00 and b[3] = 0x01
* s[0] = 0x0000 and s[1] = 0x0100
* i[0] = 0x01000000
* l[0] = 0x0000000001000000
388
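That worked example can be checked with Python's `struct` module, using
the little-endian formats `<H`, `<I` and `<Q` to stand in for the union
members:

    import struct

    actual_bytes = bytearray(8)
    actual_bytes[3] = 0x01
    print(hex(struct.unpack_from("<H", actual_bytes, 2)[0]))  # 0x100      (s[1])
    print(hex(struct.unpack_from("<I", actual_bytes, 0)[0]))  # 0x1000000  (i[0])
    print(hex(struct.unpack_from("<Q", actual_bytes, 0)[0]))  # 0x1000000  (l[0])
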
389 In tabular form, starting an elwidth=8 loop from r0 and extending for
390 16 elements would begin at r0 and extend over the entirety of r1:
391
392 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
393 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
394 r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] |
395 r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |
396
397 Starting an elwidth=16 loop from r0 and extending for
398 7 elements would begin at r0 and extend partly over r1. Note that
399 b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
400 b1 represents the top byte:
401
402 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
403 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
404 r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 |
405 r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified |
406
407 Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
408 b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
409 each element word:
410
| byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
| ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
r0 | i[0].b0 b1 b2 b3 | i[1].b0 b1 b2 b3 |
r1 | i[2].b0 b1 b2 b3 | unmodified unmodified |
415
416 64-bit (default) elements access the full registers. In each case the
417 register number (`RT`, `RA`) indicates the *starting* point for the storage
418 and retrieval of the elements.
419
420 Our simple loop, instead of accessing the array of regfile entries
421 with a computed index `iregs[RT+i]`, would access the appropriate element
422 of the appropriate width, such as `iregs[RT].s[i]` in order to access
423 16 bit elements starting from RT. Thus we have a series of overlapping
424 conceptual arrays that each start at what is traditionally thought of as
425 "a register". It then helps if we have a couple of routines:
426
    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val
452
453 These basically provide a convenient parameterised way to access the
454 register file, at an arbitrary vector element offset and an arbitrary
455 element width. Our first simple loop thus becomes:
456
457 for i = 0 to VL-1:
458 src1 = get_polymorphed_reg(RA, srcwid, i)
459 src2 = get_polymorphed_reg(RB, srcwid, i)
460 result = src1 + src2 # actual add here
461 set_polymorphed_reg(rd, destwid, i, result)
462
463 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
464 register will contain three 16 bit addition results, and the upper 16
465 bits will be *unaltered*.
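
The same behaviour can be sketched in Python with a byte-addressable
register file (a bytearray) standing in for the SRAM; the sentinel value
below is only there to show that the top 16 bits survive:

    import struct

    regfile = bytearray(8 * 128)              # 128 registers x 8 bytes
    struct.pack_into("<Q", regfile, 3*8, 0xAAAA_AAAA_AAAA_AAAA)  # preload r3

    for i in range(3):              # VL=3, elwidth=16: write s[0..2] of r3
        struct.pack_into("<H", regfile, 3*8 + i*2, 0x1111 * (i + 1))

    print(hex(struct.unpack_from("<Q", regfile, 3*8)[0]))
    # 0xaaaa333322221111 -- three 16 bit results, upper 16 bits unaltered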
466
Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it is
important to perform the operation internally at effectively an *infinite*
bitwidth, such that any truncation, rounding errors or other artefacts
may all be ironed out. This matters particularly when applying Saturation
for Audio DSP workloads, and for multiply and IEEE754 FP rounding.
"Infinite" is conceptual only: in reality, the combination of the
different truncations and width-extensions sets a fixed, deterministic,
practical limit on the internal precision needed, on a per-operation basis.
472
473 Other than that, element width overrides, which can be applied to *either*
474 source or destination or both, are pretty straightforward, conceptually.
475 The details, for hardware engineers, involve byte-level write-enable
476 lines, which is exactly what is used on SRAMs anyway. Compiler writers
477 have to alter Register Allocation Tables to byte-level granularity.
478
One critical thing to note: upper parts of the underlying 64 bit
register are *not zero'd out* by a write involving a non-aligned Vector
Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is with "default" (64 bit) behaviour. It is therefore extremely important
to consider the register file as a byte-level store, not a 64-bit-level
store.
485
486 ## Why a LE regfile?
487
The concept of a regfile where the byte ordering of the underlying
SRAM matters at all seems utter nonsense. Surely a hardware implementation
gets to choose the order, right? It's only in memory where LE/BE matters,
right? The bytes come in, all registers are 64 bit, and it's just wiring,
right?
492
493 Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
494 style Vector one. The assumption in that last question was, however, "all
495 registers are 64 bit". SV allows SIMD-style packing of vectors into the
496 64 bit registers, where one instruction and the next may interpret that
497 very same register as containing elements of completely different widths.
498
Consequently it becomes critically important to decide on a byte-order.
That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
at all: implementing BE interpretations of CRs and LD/ST in LibreSOC was
such hell - based on a terse spec that provides insufficient clarity,
assumes significant working knowledge of OpenPOWER, and makes arbitrary
insertions of a 7-index here and a 3-bitindex there - that the decision
to pick LE was extremely easy.
506
Without such a decision, if two words are packed as elements into a
64 bit register, what does this mean? Should they be inverted so that
the lower indexed element goes into the HI or the LO word? Should the
8 bytes of each register be inverted? Should the bytes in each element
be inverted? Should the element indexing loop order be broken into
discontiguous chunks such as 32107654 rather than 01234567, and if so at
what granularity of discontinuity? These are all equally valid and
legitimate interpretations of what constitutes "BE" and they all cause
merry mayhem.
513
514 The decision was therefore made: the c typedef union is the canonical
515 definition, and its members are defined as being in LE order. From there,
516 implementations may choose whatever internal HDL wire order they like
517 as long as the results produced conform to the elwidth pseudocode.
518
519 *Note: it turns out that both x86 SIMD and NEON SIMD follow this convention, namely that both are implicitly LE, even though their ISA Manuals may not explicitly spell this out*
520
521 * <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
522 * <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
523 * <https://llvm.org/docs/BigEndianNEON.html>
524
525
526 ## Source and Destination overrides
527
A minor fly in the ointment: what happens if the source and destination
are over-ridden to different widths? For example, if FP16 arithmetic
were performed and only then up-converted to FP32 output, unnecessary
rounding errors would be introduced. The rule is therefore set:
532
533 The operation MUST take place at the larger of the two widths.
534
535 In pseudocode this is:
536
537 for i = 0 to VL-1:
538 src1 = get_polymorphed_reg(RA, srcwid, i)
539 src2 = get_polymorphed_reg(RB, srcwid, i)
540 opwidth = max(srcwid, destwid)
541 result = op_add(src1, src2, opwidth) # at max width
542 set_polymorphed_reg(rd, destwid, i, result)
543
It will turn out that under some conditions the combination of
source-register extension followed by truncation of the result gets rid
of bits that didn't matter, such that the operation might as well have
taken place at the narrower width, saving resources. Examples include
Logical OR, where the source extension places zeros in the upper bits
and the result truncation simply throws those zeros away.
551
552 Counterexamples include the previously mentioned FP16 arithmetic,
553 where for operations such as division of large numbers by very small
554 ones it should be clear that internal accuracy will play a major role
555 in influencing the result. Hence the rule that the calculation takes
556 place at the maximum bitwidth, and truncation follows afterwards.
557
558 ## Signed arithmetic
559
560 What happens when the operation involves signed arithmetic? Here the
561 implementor has to use common sense, and make sure behaviour is accurately
562 documented. If the result of the unmodified operation is sign-extended
563 because one of the inputs is signed, then the input source operands must
564 be first read at their overridden bitwidth and *then* sign-extended:
565
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # sources known to be less than the result width
        src1 = sign_extend(src1, srcwid, destwid)
        src2 = sign_extend(src2, srcwid, destwid)
        result = op_signed(src1, src2, opwidth) # at max width
        set_polymorphed_reg(rd, destwid, i, result)
575
576 The key here is that the cues are taken from the underlying operation.
577
578 ## Saturation
579
Audio DSPs need to be able to clip sound when the "volume" is adjusted:
if it is too loud and the signal wraps, distortion occurs. The
solution is to clip (saturate) the audio and allow this to be detected.
In practical terms this is a post-result analysis; however, it needs to
take place at the largest bitwidth, i.e. before a result is element-width
truncated. Only then can the arithmetic saturation condition be detected:
586
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth) # at max width
        # now saturate (unsigned): clamp to the dest-width maximum
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(rd, destwid, i, sat)
        # set sat overflow
        if Rc=1:
            CR.ov = (sat != result)
599
600 So the actual computation took place at the larger width, but was
601 post-analysed as an unsigned operation. If however "signed" saturation
602 is requested then the actual arithmetic operation has to be carefully
603 analysed to see what that actually means.
604
In terms of FP arithmetic, which by definition has a sign bit (so
always takes place as a signed operation anyway), the request to saturate
to signed min/max is pretty clear. However for integer arithmetic such
as shift (plain shift, not arithmetic shift), or logical operations
such as XOR, which were never designed with the assumption that their
inputs be considered as signed numbers, common sense has to kick in,
and follow what CR0 does.
612
CR0 for Logical operations still applies: the test is still applied to
produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
do the same thing: although the inputs of an OR or XOR can
in no way be thought of as "signed", we may at least consider the result
to be signed, and thus apply min/max range detection (-128 to +127 when
truncating down to 8 bit, for example).
619
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed): clamp into the dest-width signed range
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(rd, destwid, i, sat)
630
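As a quick sanity check, the signed clamp in the pseudocode above
corresponds to the following Python helper (illustrative only):

    def saturate_signed(result, destwid):
        hi = (1 << (destwid - 1)) - 1        # e.g. +127 for destwid=8
        lo = -(1 << (destwid - 1))           # e.g. -128 for destwid=8
        return max(lo, min(hi, result))

    print(saturate_signed(300, 8), saturate_signed(-300, 8),
          saturate_signed(42, 8))
    # 127 -128 42
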
631 Overall here the rule is: apply common sense then document the behaviour
632 really clearly, for each and every operation.
633
634 # Quick recap so far
635
The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which accounts for approximately 20% of all
GPU workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizz when adding "Multimedia"
acceleration aka "SIMD" instructions.
645
Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as are Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication, and yet to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.
651
652 ## SUBVL <a name="subvl"></a>
653
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where the effective source and destination register numbers
still advance inside the inner part. Predication is still taken from
the VL index; however, it is applied to the whole subvector:
658
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
673
The primary reason for this is that Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.
676
677 # Swizzle <a name="swizzle"></a>
678
Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).
683
Swizzling does not just do permutations: it allows arbitrary selection and
multiple copying of vec2/3/4 elements. Taking XXXZ as the source operand,
for example, places three copies of the first element of the source vec4
(vec4[0]) at destination positions vec4[0], vec4[1] and vec4[2], whilst
the "Z" element (vec4[2]) is copied into vec4[3].
688
689 With somewhere between 10% and 30% of operations in 3D Shaders involving
690 swizzle this is a huge saving and reduces pressure on register files
691 due to having to use significant numbers of mv operations to get vector
692 elements to "line up".
693
In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those as well:
697
698 swizzle = get_swizzle_immed() # 12 bits
699 for (s = 0; s < SUBVL; s++)
700 remap = (swizzle >> 3*s) & 0b111
701 if remap < 4:
702 sm = id*SUBVL + remap
703 ireg[rd+s] <= ireg[RA+sm]
704 elif remap == 4:
705 ireg[rd+s] <= 0.0
706 elif remap == 5:
707 ireg[rd+s] <= 1.0
708
Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built, in
immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions that needed to be added
despite never being likely to be part of a Scalar ISA. Even in High
Performance Compute workloads it is unusual: it is only because SV is
targeted at 3D and Video that it is being considered.
716
Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large
(12 bits per vec4 source),
that the tradeoff in SV was decided in favour of only adding mv.swizzle.
721
722 # Twin Predication
723
724 Twin Predication is cool. Essentially it is a back-to-back
725 VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
726 part is covered by the source predicate and the expand part by the
727 destination predicate. Of course, if either of those is all 1s then
728 the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.
729
    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break
739
Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally only what VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
are termed single-source, single-destination.
747
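Below is a runnable Python sketch of the loop above applied to MV, with
both `RS` and `RT` tagged as Vectors (an all-1s destination predicate
therefore gives VCOMPRESS; the register numbers and values are made up,
and bounds checks are added to keep the sketch safe):

    # twin-predicated MV: source predicate 0b1010 compresses elements 1 and 3
    # of RS into consecutive elements of RT (dest predicate all 1s)
    def twin_pred_mv(regs, VL, RS, RT, ps, pd):
        i = j = 0
        while i < VL and j < VL:
            while i < VL and not (ps & (1 << i)): i += 1  # skip masked src
            while j < VL and not (pd & (1 << j)): j += 1  # skip masked dest
            if i >= VL or j >= VL: break
            regs[RT + j] = regs[RS + i]
            i += 1
            j += 1

    regs = [10, 11, 12, 13, 0, 0, 0, 0]
    twin_pred_mv(regs, VL=4, RS=0, RT=4, ps=0b1010, pd=0b1111)
    print(regs[4:8])     # [11, 13, 0, 0]
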
748 LDST Address-generation, or AGEN, is a special case of single source,
749 because elwidth overriding does not make sense to apply to the computation
750 of the 64 bit address itself, but it *does* make sense to apply elwidth
751 overrides to the data being accessed *at* that memory address.
752
753 It also turns out that by using a single bit set in the source or
754 destination, *all* the sequential ordered standard patterns of Vector
755 ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.
756
757 The only one missing from the list here, because it is non-sequential,
758 is VGATHER (and VSCATTER): moving registers by specifying a vector of
759 register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
760 tricky because it typically does not exist in standard scalar ISAs.
761 If it did it would be called [[sv/mv.x]]. Once Vectorised, it's a
762 VGATHER/VSCATTER.
763
764 # CR predicate result analysis
765
OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing whether it is greater than, less than, or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?
770
771 for i in range(VL):
772 # predication test, skip all masked out elements.
773 if predicate_masked_out(i): continue # skip
774 result = op(iregs[RA+i], iregs[RB+i])
775 CRnew = analyse(result) # calculates eq/lt/gt
776 # Rc=1 always stores the CR
777 if RC1 or Rc=1: crregs[offs+i] = CRnew
778 if RC1: continue # RC1 mode skips result store
779 # now test CR, similar to branch
780 if CRnew[BO[0:1]] == BO[2]:
781 # result optionally stored but CR always is
782 iregs[RT+i] = result
783
784 Note that whilst the Vector of CRs is always written to the CR regfile,
785 only those result elements that pass the BO test get written to the
786 integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
787 stored, but the result never is. This effectively turns every arithmetic
788 operation into a type of `cmp` instruction.
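
A simplified Python sketch of the idea (the BO test here is hard-wired
to "greater than zero", RC1 mode is not shown, and all values are made
up):

    # CR-based post-result predication: the CR Vector is always written,
    # but only results passing the test reach the integer regfile
    def cr_pred_add(regs, crs, VL, RT, RA, RB):
        for i in range(VL):
            result = regs[RA + i] + regs[RB + i]
            eq, lt, gt = (result == 0), (result < 0), (result > 0)
            crs[i] = (eq, lt, gt)        # the CR Vector is always written
            if gt:                       # BO test: only store "gt" results
                regs[RT + i] = result

    regs = [3, -5, 2, 0] + [1, 1, -3, 0] + [99, 99, 99, 99]
    crs = [None] * 4
    cr_pred_add(regs, crs, VL=4, RT=8, RA=0, RB=4)
    print(regs[8:12])    # [4, 99, 99, 99] -- only element 0 passed the test
    print(crs)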
789
Here, for example, if FP overflow occurred and the CR testing was carried
out for that, all valid results would be stored but invalid ones would
not; in addition, the Vector of CRs would contain the indicators of
which ones failed. With the invalid results simply not being written,
this could save resources (register file writes).
795
Also, because the predicate mask is effectively ANDed with the post-result
analysis as a secondary type of predication, savings are expected in some
types of operations where the post-result analysis, if not included in SV,
would need a second predicate calculation followed by a predicate mask
AND operation.
801
802 Note, hilariously, that Vectorised Condition Register Operations (crand,
803 cror) may also have post-result analysis applied to them. With Vectors
804 of CRs being utilised *for* predication, possibilities for compact and
805 elegant code begin to emerge from this innocuous-looking addition to SV.
806
807 # Exception-based Fail-on-first
808
One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD being cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.
814
What if, then, rather than cancelling an entire Vector LD because the last
operation would cause a page fault, the Vector is instead truncated to the
last successful element?
818
819 This is called "fail-on-first". Here is strncpy, illustrated from RVV:
820
821 strncpy:
822 c.mv a3, a0 # Copy dst
823 loop:
824 setvli x0, a2, vint8 # Vectors of bytes.
825 vlbff.v v1, (a1) # Get src bytes
826 vseq.vi v0, v1, 0 # Flag zero bytes
827 vmfirst a4, v0 # Zero found?
828 vmsif.v v0, v0 # Set mask up to and including zero byte.
829 vsb.v v1, (a3), v0.t # Write out bytes
830 c.bgez a4, exit # Done
831 csrr t1, vl # Get number of bytes fetched
832 c.add a1, a1, t1 # Bump src pointer
833 c.sub a2, a2, t1 # Decrement count.
834 c.add a3, a3, t1 # Bump dst pointer
835 c.bnez a2, loop # Anymore?
836 exit:
837 c.ret
838
839 Vector Length VL is truncated inherently at the first page faulting
840 byte-level LD. Otherwise, with more powerful hardware the number of
841 elements LOADed from memory could be dozens to hundreds or greater
842 (memory bandwidth permitting).
843
844 With VL truncated the analysis looking for the zero byte and the
845 subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
846 knowledge that every byte loaded in the Vector is valid. Implementors are
847 even permitted to "adapt" VL, truncating it early so that, for example,
848 subsequent iterations of loops will have LD/STs on aligned boundaries.
849
850 SIMD strncpy hand-written assembly routines are, to be blunt about it,
851 a total nightmare. 240 instructions is not uncommon, and the worst
852 thing about them is that they are unable to cope with detection of a
853 page fault condition.
854
855 Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>
856
857 # Data-dependent fail-first
858
859 This is a minor variant on the CR-based predicate-result mode. Where
860 pred-result continues with independent element testing (any of which may
861 be parallelised), data-dependent fail-first *stops* at the first failure:
862
863 if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
864 for i in range(VL):
865 # predication test, skip all masked out elements.
866 if predicate_masked_out(i): continue # skip
867 result = op(iregs[RA+i], iregs[RB+i])
868 CRnew = analyse(result) # calculates eq/lt/gt
869 # now test CR, similar to branch
870 if CRnew[BO[0:1]] != BO[2]:
871 VL = i # truncate: only successes allowed
872 break
873 # test passed: store result (and CR?)
874 if not RC1: iregs[RT+i] = result
875 if RC1 or Rc=1: crregs[offs+i] = CRnew
876
877 This is particularly useful, again, for FP operations that might overflow,
878 where it is desirable to end the loop early, but also desirable to
879 complete at least those operations that were okay (passed the test)
880 without also having to slow down execution by adding extra instructions
881 that tested for the possibility of that failure, in advance of doing
882 the actual calculation.
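
A minimal Python sketch of the idea, with the CR/BO test replaced by a
simple "result must be non-zero" check for brevity (all values below are
made up):

    def ddffirst_sub(regs, VL, RT, RA, RB):
        for i in range(VL):
            result = regs[RA + i] - regs[RB + i]
            if result == 0:              # the data-dependent test fails here
                return i                 # VL is truncated to i
            regs[RT + i] = result        # test passed: store the result
        return VL

    regs = [6, 7, 5, 8,  5, 5, 5, 5,  0, 0, 0, 0]
    new_VL = ddffirst_sub(regs, VL=4, RT=8, RA=0, RB=4)
    print(new_VL, regs[8:12])            # 2 [1, 2, 0, 0]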
883
884 The only minor downside here though is the change to VL, which in some
885 implementations may cause pipeline stalls. This was one of the reasons
886 why CR-based pred-result analysis was added, because that at least is
887 entirely paralleliseable.
888
889 # Instruction format
890
891 Whilst this overview shows the internals, it does not go into detail
892 on the actual instruction format itself. There are a couple of reasons
893 for this: firstly, it's under development, and secondly, it needs to be
894 proposed to the OpenPOWER Foundation ISA WG for consideration and review.
895
896 That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
897 The `setvl` instruction is pretty much as would be expected from a
898 Cray style VL instruction: the only differences being that, firstly,
899 the MAXVL (Maximum Vector Length) has to be specified, because that
900 determines - precisely - how many of the *scalar* registers are to be
901 used for a given Vector. Secondly: within the limit of MAXVL, VL is
902 required to be set to the requested value. By contrast, RVV systems
903 permit the hardware to set arbitrary values of VL.
904
905 The other key question is of course: what's the actual instruction format,
906 and what's in it? Bearing in mind that this requires OPF review, the
907 current draft is at the [[sv/svp64]] page, and includes space for all the
908 different modes, the predicates, element width overrides, SUBVL and the
909 register extensions, in 24 bits. This just about fits into an OpenPOWER
910 v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
911 The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
912 aligns perfectly with SV.
913
914 Further reading is at the main [[SV|sv]] page.
915
916 # Conclusion
917
Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
parity with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).
925
926 What is particularly cool about the SV concept is that custom extensions
927 and research need not be concerned about inventing new Vector instructions
928 and how to get them to interact with the Scalar ISA: they are effectively
929 one and the same. Any new instruction added at the Scalar level is
930 inherently and automatically Vectorised, following some simple rules.
931