[libreriscv.git] / openpower / sv / overview.mdwn
1 # SV Overview
2
3 This document provides an overview and introduction as to why SV (a
4 Cray-style Vector augmentation to OpenPOWER) exists, and how it works.
5
6 Links:
7
* [[discussion]] and
  [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556):
  feel free to add comments and questions.
11 * [[SV|sv]]
12 * [[sv/svp64]]
13
14 Contents:
15
16 [[!toc]]
17
18 # Introduction: SIMD and Cray Vectors
19
20 SIMD, the primary method for easy parallelism of the
21 past 30 years in Computer Architectures, is
22 [known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
23 SIMD provides a seductive simplicity that is easy to implement in
24 hardware. With each doubling in width it promises increases in raw
25 performance without the complexity of either multi-issue or out-of-order
26 execution.
27
28 Unfortunately, even with predication added, SIMD only becomes more and
29 more problematic with each power of two SIMD width increase introduced
30 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
31 spirals out of control in the ISA, detrimentally impacting the hardware,
32 the software, the compilers and the testing and compliance. Here are
33 the typical dimensions that result in such massive proliferation:
34
35 * Operation (add, mul)
36 * bitwidth (8, 16, 32, 64, 128)
37 * Conversion between bitwidths (FP16-FP32-64)
38 * Signed/unsigned
39 * HI/LO swizzle (Audio L/R channels)
40 * Saturation (Clamping at max range)
41
These typically are multiplied up to produce explicit opcodes numbering
in the thousands on, for example, the ARC Video/DSP cores.
44
Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops and exceptionally high data throughput
per instruction (one *or more* orders of magnitude greater than SIMD),
with none of the alarmingly high setup and cleanup code. At the hardware
level the microarchitecture may execute anywhere from one element right
the way through to tens of thousands at a time, yet the executable
remains exactly the same and the ISA remains clear, true to the RISC
paradigm, and clean. Unlike in SIMD, powers-of-two limitations are not
involved in the ISA or in the assembly code.
54
SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the process allowing register file size
increases using "tagging" (similar to how x86 originally extended
registers from 32 to 64 bit).
59
60 ## SV
61
62 The fundamentals are:
63
64 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
65 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
66 (where VL is Vector Length)
67 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
68 just as is expected of instructions ordered by the PC.
69 * Some registers may be "tagged" as Vectors
* During the loop, "Vector"-tagged registers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*
73 * Once the loop is completed *only then* is the Program Counter
74 allowed to move to the next instruction.
75
76 Hardware (and simulator) implementors are free and clear to implement this
77 as literally a for-loop, sitting in between instruction decode and issue.
78 Higher performance systems may deploy SIMD backends, multi-issue and
79 out-of-order execution, although it is strongly recommended to add
80 predication capability directly into SIMD backend units.
81
82 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
83 source and destination have been "tagged" as Vectors, is simply:
84
85 for i = 0 to VL-1:
86 GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
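As a minimal executable sketch of the loop above (plain Python; the list
`gpr` standing in for the General Purpose Register file, and the name
`vector_add`, are illustrative assumptions, not part of the spec):

```python
# The entire SV vectorised add: the *same* scalar add, repeated VL
# times over sequentially-numbered registers while the PC is paused.
def vector_add(gpr, RT, RA, RB, VL):
    for i in range(VL):
        gpr[RT + i] = gpr[RA + i] + gpr[RB + i]

gpr = [0] * 32
gpr[8:12] = [1, 2, 3, 4]        # RA = r8 onwards
gpr[16:20] = [10, 20, 30, 40]   # RB = r16 onwards
vector_add(gpr, RT=0, RA=8, RB=16, VL=4)
# gpr[0:4] is now [11, 22, 33, 44]
```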
87
88 At its heart, SimpleV really is this simple. On top of this fundamental
89 basis further refinements can be added which build up towards an extremely
90 powerful Vector augmentation system, with very little in the way of
91 additional opcodes required: simply external "context".
92
x86 originally had only 80 instructions: prior to AVX512, over 1,300
additional instructions had already been added, almost all of them SIMD.

RISC-V RVV as of version 0.9 comprises over 188 instructions (more than
the rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
functionality is added to OpenPOWER v3.0B, by SimpleV augmentation,
with around 5 to 8 instructions.
100
101 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
102 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
103 based on SIMD design principles, adds somewhere in the region of 600 more.
104 SimpleV again provides over 95% of VSX functionality, simply by augmenting
105 the *Scalar* OpenPOWER ISA, and in the process providing features such
106 as predication, which VSX is entirely missing.
107
108 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
109 types of register files: Scalar and Vector is the minimum. AVX512
110 even provides a mini mask regfile, followed by explicit instructions
111 that handle operations on each of them *and map between all of them*.
SV, in contrast, not only uses the existing scalar regfiles (including
CRs), but because operations already exist within OpenPOWER to cover
interactions between the scalar regfiles (`mfcr`, `fcvt`), there is
very little that needs to be added.
116
117 In fairness to both VSX and RVV, there are things that are not provided
118 by SimpleV:
119
120 * 128 bit or above arithmetic and other operations
121 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
122 * register files above 128 entries
123 * Vector lengths over 64
124 * Unit-strided LD/ST and other comprehensive memory operations
125 (struct-based LD/ST from RVV for example)
126 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
127
128 These limitations, which stem inherently from the adaptation process of
129 starting from a Scalar ISA, are not insurmountable. Over time, they may
130 well be addressed in future revisions of SV.
131
132 The rest of this document builds on the above simple loop to add:
133
134 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
135 (of all register files: Integer, FP *and CRs*)
136 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
137 * Predication masks (essential for parallel if/else constructs)
138 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
139 * Compacted operations into registers (normally only provided by SIMD)
140 * Fail-on-first (introduced in ARM SVE2)
141 * A new concept: Data-dependent fail-first
142 * Condition-Register based *post-result* predication (also new)
143 * A completely new concept: "Twin Predication"
144 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
145
146 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
"wrapping context", similar to how v3.1B 64-bit Prefixes work.
148
149 # Adding Scalar / Vector
150
The first augmentation to the simple loop is to add the option for each
source and destination to be either scalar or vector. As an FSM this
is where our "simple" loop gains its first complexity.
154
155 function op_add(RT, RA, RB) # add not VADD!
156 int id=0, irs1=0, irs2=0;
157 for i = 0 to VL-1:
158 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
159 if (!RT.isvec) break;
160 if (RT.isvec) { id += 1; }
161 if (RA.isvec) { irs1 += 1; }
162 if (RB.isvec) { irs2 += 1; }
163
164 This could have been written out as eight separate cases: one each for
165 when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
166 when optimally combined, result in the pseudocode above.
167
168 With some walkthroughs it is clear that the loop exits immediately
169 after the first scalar destination result is written, and that when the
170 destination is a Vector the loop proceeds to fill up the register file,
171 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
172 registers will, independently, either remain pointing at `RB` or `RA`
173 respectively, or, if marked as Vectors, will march incrementally in
174 lockstep, producing element results along the way, as the destination
175 also progresses through elements.
176
177 In this way all the eight permutations of Scalar and Vector behaviour
178 are covered, although without predication the scalar-destination ones are
179 reduced in usefulness. It does however clearly illustrate the principle.
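A sketch of this FSM in plain Python (the `isvec` tags are passed here
as separate boolean flags purely for illustration):

```python
# Scalar/vector add FSM: scalar operands stay put, vector operands
# march in lockstep; a scalar destination exits after one result.
def op_add(ireg, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec):
    id = irs1 = irs2 = 0
    for _ in range(VL):
        ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
        if not RT_vec:
            break               # scalar destination: one result only
        id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

# vector RT, vector RA, *scalar* RB: adds one scalar to every element
regs = list(range(32))
op_add(regs, RT=0, RA=8, RB=16, VL=4,
       RT_vec=True, RA_vec=True, RB_vec=False)
# regs[0:4] == [8+16, 9+16, 10+16, 11+16] == [24, 25, 26, 27]
```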
180
181 Note in particular: there is no separate Scalar add instruction and
182 separate Vector instruction and separate Scalar-Vector instruction, *and
183 there is no separate Vector register file*: it's all the same instruction,
184 on the standard register file, just with a loop. Scalar happens to set
185 that loop size to one.
186
187 The important insight from the above is that, strictly speaking, Simple-V
188 is not really a Vectorisation scheme at all: it is more of a hardware
189 ISA "Compression scheme", allowing as it does for what would normally
190 require multiple sequential instructions to be replaced with just one.
191 This is where the rule that Program Order must be preserved in Sub-PC
192 execution derives from. However in other ways, which will emerge below,
the "tagging" concept presents an opportunity to include features not
normally found outside of Vector ISAs, and in that regard it is
definitely a class of Vectorisation.
196
197 ## Register "tagging"
198
As an aside: in [[sv/svp64]] the encoding which allows SV both to extend
the register range beyond r0-r31 and to mark a register as scalar or
vector takes two to three bits, depending on the instruction.
202
203 The reason for using so few bits is because there are up to *four*
204 registers to mark in this way (`fma`, `isel`) which starts to be of
205 concern when there are only 24 available bits to specify the entire SV
206 Vectorisation Context. In fact, for a small subset of instructions it
207 is just not possible to tag every single register. Under these rare
208 circumstances a tag has to be shared between two registers.
209
210 Below is the pseudocode which expresses the relationship which is usually
211 applied to *every* register:
212
213 if extra3_mode:
214 spec = EXTRA3 # bit 2 s/v, 0-1 extends range
215 else:
216 spec = EXTRA2 << 1 # same as EXTRA3, shifted
217 if spec[2]: # vector
218 RA.isvec = True
219 return (RA << 2) | spec[0:1]
220 else: # scalar
221 RA.isvec = False
222 return (spec[0:1] << 5) | RA
223
224 Here we can see that the scalar registers are extended in the top bits,
225 whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
226 Condition Registers have a slightly different scheme, along the same
227 principle, which takes into account the fact that each CR may be bit-level
228 addressed by Condition Register operations.
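A Python sketch of the EXTRA3/EXTRA2 decode above (LSB0 bit numbering is
assumed here for clarity; the authoritative field layout is defined in
[[sv/svp64]], and the function name `decode_extra` is illustrative):

```python
# Decode a 5-bit register field plus a 2/3-bit EXTRA tag into
# (isvec, extended register number), per the pseudocode above.
def decode_extra(RA, spec, extra3_mode=True):
    if not extra3_mode:
        spec = spec << 1          # EXTRA2: same as EXTRA3, shifted
    if spec & 0b100:              # vector bit set
        return True, (RA << 2) | (spec & 0b11)   # shift up, extend LSBs
    else:                         # scalar: spec extends the top bits
        return False, ((spec & 0b11) << 5) | RA

# scalar r3 stays r3; vector-tagged r3 starts at element register 12
# decode_extra(3, 0b000) -> (False, 3)
# decode_extra(3, 0b100) -> (True, 12)
```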
229
230 Readers familiar with OpenPOWER will know of Rc=1 operations that create
231 an associated post-result "test", placing this test into an implicit
232 Condition Register. The original researchers who created the POWER ISA
233 chose CR0 for Integer, and CR1 for Floating Point. These *also become
234 Vectorised* - implicitly - if the associated destination register is
235 also Vectorised. This allows for some very interesting savings on
236 instruction count due to the very same CR Vectors being predication masks.
237
238 # Adding single predication
239
The next step is to add a single predicate mask. This is where it gets
interesting. Predicate masks are a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If there is no predicate, the mask defaults to all 1s:
effectively the same as "all elements enabled".
245
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
256
257 The key modification is to skip the creation and storage of the result
258 if the relevant predicate mask bit is clear, but *not the progression
259 through the registers*.
260
261 A particularly interesting case is if the destination is scalar, and the
262 first few bits of the predicate are zero. The loop proceeds to increment
263 the Scalar *source* registers until the first nonzero predicate bit is
264 found, whereupon a single result is computed, and *then* the loop exits.
265 This therefore uses the predicate to perform Vector source indexing.
266 This case was not possible without the predicate mask.
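This Vector-source-indexing behaviour can be seen in a short Python
sketch (register tags again passed as flags for illustration):

```python
# Predicated add: masked-out bits skip the write but *not* the
# progression through the vector register offsets.
def op_add_pred(ireg, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec, predval):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
            if not RT_vec:
                break           # scalar destination: exit after write
        if RT_vec: id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

# scalar RT, vector RA: predicate 0b100 selects the *third* element
# of the RA vector as the single source, i.e. Vector source indexing
regs = list(range(32))
op_add_pred(regs, RT=0, RA=8, RB=16, VL=4,
            RT_vec=False, RA_vec=True, RB_vec=False, predval=0b100)
# regs[0] == 10 + 16 == 26  (third element of r8-vector, plus r16)
```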
267
268 If all three registers are marked as Vector then the "traditional"
269 predicated Vector behaviour is provided. Yet, just as before, all other
270 options are still provided, right the way back to the pure-scalar case,
271 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
272
273 Single Predication therefore provides several modes traditionally seen
274 in Vector ISAs:
275
276 * VINSERT: the predicate may be set as a single bit, the sources are
277 scalar and the destination a vector.
278 * VSPLAT (result broadcasting) is provided by making the sources scalar
279 and the destination a vector, and having no predicate set or having
280 multiple bits set.
* VSELECT is provided by setting up (at least one of) the sources as a
  vector, using a single bit in the predicate, and the destination as
  a scalar.
284
285 All of this capability and coverage without even adding one single actual
286 Vector opcode, let alone 180, 600 or 1,300!
287
288 # Predicate "zeroing" mode
289
Sometimes with predication it is acceptable to leave the masked-out
element alone (not modify the result); sometimes it is better to zero the
masked-out elements. Zeroing can be combined with bit-wise ORing to build
up vectors from multiple predicate patterns: the same combining with
non-zeroing requires additional mv operations and predicate mask
operations. Our pseudocode therefore ends up as follows, to take the
enhancement into account:
297
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing:  # predicate failed
                ireg[RT+id] = 0  # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
310
Many Vector systems have either zeroing or non-zeroing predication; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of the standard register files,
and consequently there are advantages to both, so both are provided.
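The two behaviours can be compared side by side in Python (simplified
here to the all-vector case for brevity; `pred_add` is an illustrative
name, not part of the spec):

```python
# Zeroing: masked-out destination elements are set to 0;
# non-zeroing: they are left completely untouched.
def pred_add(ireg, RT, RA, RB, VL, predval, zeroing):
    for i in range(VL):
        if predval & (1 << i):
            ireg[RT + i] = ireg[RA + i] + ireg[RB + i]
        elif zeroing:
            ireg[RT + i] = 0

nz = [99] * 8 + list(range(8, 24))   # r0-r7 pre-set to 99
z = list(nz)
pred_add(nz, RT=0, RA=8, RB=16, VL=4, predval=0b0101, zeroing=False)
pred_add(z,  RT=0, RA=8, RB=16, VL=4, predval=0b0101, zeroing=True)
# nz[0:4] == [24, 99, 28, 99]   (elements 1 and 3 untouched)
# z[0:4]  == [24,  0, 28,  0]   (elements 1 and 3 zeroed)
```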
315
316 # Element Width overrides <a name="elwidths"></a>
317
318 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
319 bit integer operations, and IEEE754 FP32 and 64. Often also included
320 is FP16 and more recently BF16. The *really* good Vector ISAs have
321 variable-width vectors right down to bitlevel, and as high as 1024 bit
322 arithmetic per element, as well as IEEE754 FP128.
323
324 SV has an "override" system that *changes* the bitwidth of operations
325 that were intended by the original scalar ISA designers to have (for
326 example) 64 bit operations (only). The override widths are 8, 16 and
327 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
328 the future).
329
This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with, for example, 8-bit operations in mind,
let alone Vectors of 8-bit elements.
333
The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. In C this would be:
338
339 typedef uint64_t reg_t;
340 reg_t int_regfile[32]; // standard scalar 32x 64bit
341
Conceptually, to get our variable-element-width vectors, we may think
of the regfile as instead being the following C-based data structure,
where all types uint16_t etc. are in little-endian order:
345
    #pragma pack(1)
    typedef union {
        uint8_t  actual_bytes[8];
        uint8_t  b[0];  // byte view (GNU C zero-length array)
        uint16_t s[0];  // array of LE-ordered uint16_t
        uint32_t i[0];  // array of LE-ordered uint32_t
        uint64_t l[0];  // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128];  // SV extends to 128 regs
356
This means that Vector elements start from the location specified by a
64-bit "register" number, but that from that location onwards the
elements *overlap subsequent registers*.
360
Here is another way to view the same concept, bearing in mind that an
LE byte order is assumed:
363
364 uint8_t reg_sram[8*128];
365 uint8_t *actual_bytes = &reg_sram[RA*8];
366 if elwidth == 8:
367 uint8_t *b = (uint8_t*)actual_bytes;
368 b[idx] = result;
369 if elwidth == 16:
370 uint16_t *s = (uint16_t*)actual_bytes;
371 s[idx] = result;
372 if elwidth == 32:
373 uint32_t *i = (uint32_t*)actual_bytes;
374 i[idx] = result;
375 if elwidth == default:
376 uint64_t *l = (uint64_t*)actual_bytes;
377 l[idx] = result;
378
Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
to 0x01 would mean that:

* b[0..1] = 0x00, b[2] = 0x01, and b[3..7] = 0x00
* s[0] = 0x0000 and s[1] = 0x0001
* i[0] = 0x00010000
* l[0] = 0x0000000000010000
386
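The overlap of the union members can be checked directly with Python's
`struct` module (a sketch of the LE layout only, not of any hardware
implementation):

```python
import struct

reg = bytearray(8)       # one 64-bit register as 8 LE-ordered bytes
reg[2] = 0x01            # write a single byte into the register

s = struct.unpack("<4H", reg)    # the uint16_t view
i = struct.unpack("<2I", reg)    # the uint32_t view
l = struct.unpack("<Q", reg)[0]  # the uint64_t view

# the same byte appears in every view at the LE-consistent position:
# s == (0x0000, 0x0001, 0x0000, 0x0000)
# i == (0x00010000, 0x00000000)
# l == 0x0000000000010000
```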
387 In tabular form, starting an elwidth=8 loop from r0 and extending for
388 16 elements would begin at r0 and extend over the entirety of r1:
389
390 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
391 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
392 r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] |
393 r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |
394
395 Starting an elwidth=16 loop from r0 and extending for
396 7 elements would begin at r0 and extend partly over r1. Note that
397 b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
398 b1 represents the top byte:
399
400 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
401 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
402 r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 |
403 r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified |
404
405 Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
406 b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
407 each element word:
408
409 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
410 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
411 r0 | w[0].b0 b1 b2 b3 | w[1].b0 b1 b2 b3 |
412 r1 | w[2].b0 b1 b2 b3 | unmodified unmodified |
413
414 64-bit (default) elements access the full registers. In each case the
415 register number (`RT`, `RA`) indicates the *starting* point for the storage
416 and retrieval of the elements.
417
418 Our simple loop, instead of accessing the array of regfile entries
419 with a computed index `iregs[RT+i]`, would access the appropriate element
420 of the appropriate width, such as `iregs[RT].s[i]` in order to access
421 16 bit elements starting from RT. Thus we have a series of overlapping
422 conceptual arrays that each start at what is traditionally thought of as
423 "a register". It then helps if we have a couple of routines:
424
    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec):  # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default:  # 64
            res = int_regfile[reg].l[offset]
        return res
438
439 set_polymorphed_reg(reg, bitwidth, offset, val):
440 if (!reg.isvec): # scalar
441 offset = 0
442 if bitwidth == 8:
443 int_regfile[reg].b[offset] = val
444 elif bitwidth == 16:
445 int_regfile[reg].s[offset] = val
446 elif bitwidth == 32:
447 int_regfile[reg].i[offset] = val
448 elif bitwidth == default: # 64
449 int_regfile[reg].l[offset] = val
450
451 These basically provide a convenient parameterised way to access the
452 register file, at an arbitrary vector element offset and an arbitrary
453 element width. Our first simple loop thus becomes:
454
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2  # actual add here
        set_polymorphed_reg(RT, destwid, i, result)
460
461 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
462 register will contain three 16 bit addition results, and the upper 16
463 bits will be *unaltered*.
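The two routines and the loop can be modelled end-to-end in Python over
a byte-addressable regfile (widths are given in bytes here and `struct`
supplies the LE views; the names `set_poly`/`get_poly` are illustrative
simplifications of the pseudocode above):

```python
import struct

REGFILE = bytearray(8 * 128)     # 128 x 64-bit regs as raw LE bytes
FMT = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}

def set_poly(reg, width, offset, val):
    # elements overflow naturally into subsequent registers
    struct.pack_into(FMT[width], REGFILE, reg*8 + offset*width,
                     val & ((1 << (8*width)) - 1))

def get_poly(reg, width, offset):
    return struct.unpack_from(FMT[width], REGFILE, reg*8 + offset*width)[0]

# elwidth=16 (2 bytes), VL=3: r0's top 16 bits must stay unaltered
struct.pack_into("<Q", REGFILE, 0, 0xFFFF_0000_0000_0000)
for i in range(3):
    set_poly(4, 2, i, 0x64 + i)      # RA = r4: 0x64, 0x65, 0x66
    set_poly(8, 2, i, 1)             # RB = r8: all 1s
for i in range(3):
    set_poly(0, 2, i, get_poly(4, 2, i) + get_poly(8, 2, i))
# r0 == 0xFFFF_0067_0066_0065: three 16-bit sums, top 16 bits intact
```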
464
465 Note that things such as zero/sign-extension (and predication) have
466 been left out to illustrate the elwidth concept. Also note that it turns
467 out to be important to perform the operation at the maximum bitwidth -
468 `max(srcwid, destwid)` - such that any truncation, rounding errors or
469 other artefacts may all be ironed out. This turns out to be important
470 when applying Saturation for Audio DSP workloads.
471
472 Other than that, element width overrides, which can be applied to *either*
473 source or destination or both, are pretty straightforward, conceptually.
474 The details, for hardware engineers, involve byte-level write-enable
475 lines, which is exactly what is used on SRAMs anyway. Compiler writers
476 have to alter Register Allocation Tables to byte-level granularity.
477
One critical thing to note: upper parts of the underlying 64-bit
register are *not zeroed out* by a write involving a non-aligned Vector
Length. An 8-bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation in which a full overwrite occurs
is with "default" (64-bit) behaviour. It is therefore extremely important
to consider the register file as a byte-level store, not a 64-bit-level
store.
484
485 ## Why a LE regfile?
486
The concept of a regfile where the byte ordering of the underlying SRAM
matters seems utter nonsense. Surely a hardware implementation gets to
choose the order, right? It's only memory where LE/BE matters, right? The
bytes come in, all registers are 64 bit and it's just wiring, right?
491
Ordinarily this would be 100% correct, in both a scalar ISA and in a
Cray-style Vector one. The assumption in that last question was, however,
that "all registers are 64 bit". SV allows SIMD-style packing of vectors
into the 64-bit registers, where one instruction and the next may
interpret that very same register as containing elements of completely
different widths.
497
Consequently it becomes critically important to decide on a byte order.
That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
at all: it was such hell to implement BE-supported interpretations of CRs
and LD/ST in LibreSOC - based on a terse spec that provides insufficient
clarity and assumes significant working knowledge of OpenPOWER, with
arbitrary insertions of 7-index here and 3-bit index there - that the
decision to pick LE was extremely easy.
505
Without such a decision, if two words are packed as elements into a
64-bit register, what does this mean? Should they be inverted so that
the lower-indexed element goes into the HI or the LO word? Should the
8 bytes of each register be inverted? Should the bytes in each element
be inverted? Should the element indexing loop order be broken into
discontiguous chunks such as 32107654 rather than 01234567, and if so
at what granularity of discontinuity? These are all equally valid and
legitimate interpretations of what constitutes "BE", and they all cause
merry mayhem.
512
513 The decision was therefore made: the c typedef union is the canonical
514 definition, and its members are defined as being in LE order. From there,
515 implementations may choose whatever internal HDL wire order they like
516 as long as the results produced conform to the elwidth pseudocode.
517
518 *Note: it turns out that both x86 SIMD and NEON SIMD follow this convention, namely that both are implicitly LE, even though their ISA Manuals may not explicitly spell this out*
519
520 * <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
521 * <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
522
523
524 ## Source and Destination overrides
525
A minor fly in the ointment: what happens if the source and destination
are overridden to different widths? For example, FP16 arithmetic is
not accurate enough and may introduce rounding errors when up-converted
to FP32 output. The rule is therefore set:
530
531 The operation MUST take place at the larger of the two widths.
532
533 In pseudocode this is:
534
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        result = op_add(src1, src2, opwidth)  # at max width
        set_polymorphed_reg(RT, destwid, i, result)
541
It will turn out that under some conditions the combination of the
extension of the source registers followed by truncation of the result
gets rid of bits that didn't matter, and the operation might as well
have taken place at the narrower width, saving resources that way. An
example is Logical OR, where the source extension places zeros in the
upper bits and the truncation of the result simply throws those zeros
away.
549
550 Counterexamples include the previously mentioned FP16 arithmetic,
551 where for operations such as division of large numbers by very small
552 ones it should be clear that internal accuracy will play a major role
553 in influencing the result. Hence the rule that the calculation takes
554 place at the maximum bitwidth, and truncation follows afterwards.
555
556 ## Signed arithmetic
557
558 What happens when the operation involves signed arithmetic? Here the
559 implementor has to use common sense, and make sure behaviour is accurately
560 documented. If the result of the unmodified operation is sign-extended
561 because one of the inputs is signed, then the input source operands must
562 be first read at their overridden bitwidth and *then* sign-extended:
563
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # sources known to be less than the result width
        src1 = sign_extend(src1, srcwid, opwidth)
        src2 = sign_extend(src2, srcwid, opwidth)
        result = op_signed(src1, src2, opwidth)  # at max width
        set_polymorphed_reg(RT, destwid, i, result)
573
574 The key here is that the cues are taken from the underlying operation.
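The sign-extension step itself is a plain two's-complement
reinterpretation, sketched here in Python (`sign_extend` written with a
single from-width parameter, an illustrative simplification):

```python
def sign_extend(val, frombits):
    # reinterpret the low `frombits` bits of val as two's-complement
    sign = 1 << (frombits - 1)
    return (val & (sign - 1)) - (val & sign)

# an 8-bit source holding 0xFE is the signed value -2; widened and
# then added to 5 at full width, the result is 3 (not 0x103)
src1 = sign_extend(0xFE, 8)
src2 = sign_extend(0x05, 8)
result = src1 + src2
# src1 == -2, result == 3
```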
575
576 ## Saturation
577
578 Audio DSPs need to be able to clip sound when the "volume" is adjusted,
579 but if it is too loud and the signal wraps, distortion occurs. The
580 solution is to clip (saturate) the audio and allow this to be detected.
581 In practical terms this is a post-result analysis however it needs to
582 take place at the largest bitwidth i.e. before a result is element width
583 truncated. Only then can the arithmetic saturation condition be detected:
584
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth)  # at max width
        # now saturate (unsigned): clamp at the destination's maximum
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(RT, destwid, i, sat)
        # set saturation overflow
        if Rc=1:
            CR.ov = (sat != result)
597
598 So the actual computation took place at the larger width, but was
599 post-analysed as an unsigned operation. If however "signed" saturation
600 is requested then the actual arithmetic operation has to be carefully
601 analysed to see what that actually means.
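The unsigned clamp and overflow detection can be sketched in plain
Python (an illustration of the post-result analysis only, not part of
the spec):

```python
def saturate_unsigned(result, destwid):
    # clamp into the destination's unsigned range [0, 2**destwid - 1]
    return min(result, (1 << destwid) - 1)

# 8-bit saturating add: 200 + 100 computed at full width is 300,
# which exceeds 255, so the stored value clamps and overflow is set
result = 200 + 100
sat = saturate_unsigned(result, 8)
overflow = (sat != result)     # what CR.ov would record under Rc=1
# sat == 255, overflow == True
```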
602
603 In terms of FP arithmetic, which by definition has a sign bit (so
604 always takes place as a signed operation anyway), the request to saturate
605 to signed min/max is pretty clear. However for integer arithmetic such
606 as shift (plain shift, not arithmetic shift), or logical operations
607 such as XOR, which were never designed to have the assumption that its
608 inputs be considered as signed numbers, common sense has to kick in,
609 and follow what CR0 does.
610
CR0 for Logical operations still applies: the test is still applied to
produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
do the same thing: although the input operands of an OR or XOR can
in no way be thought of as "signed", we may at least consider the result
to be signed, and thus apply min/max range detection of -128 to +127
when truncating down to 8 bit, for example.
617
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed): clamp into the two's-complement range
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(RT, destwid, i, sat)
628
629 Overall here the rule is: apply common sense then document the behaviour
630 really clearly, for each and every operation.
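The signed clamp, sketched the same way as the unsigned one (again an
illustrative helper, not part of the spec):

```python
def saturate_signed(result, destwid):
    # clamp into the range [-2**(destwid-1), 2**(destwid-1) - 1]
    hi = (1 << (destwid - 1)) - 1
    lo = -(1 << (destwid - 1))
    return max(lo, min(result, hi))

# truncating to 8 bit applies range detection of -128 to +127:
# saturate_signed(300, 8)  -> 127
# saturate_signed(-300, 8) -> -128
```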
631
632 # Quick recap so far
633
The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which is approximately 20% of all GPU
workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizzy when adding "Multimedia"
acceleration aka "SIMD" instructions.
643
Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as are Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; still to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.
649
650 ## SUBVL <a name="subvl"></a>
651
Adding support for SUBVL is a matter of adding an extra inner
for-loop, where the register src and dest offsets are still incremented
inside the inner part. Predication is still taken from the VL index;
however, it is applied to the whole subvector:
656
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
671
672 The primary reason for this is because Shader Compilers treat vec2/3/4 as
673 "single units". Recognising this in hardware is just sensible.
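Simplified to the all-vector case, the SUBVL loop can be sketched in
Python (vec3 example; the flat list `ireg` and the name `subvl_add` are
illustrative assumptions):

```python
# SUBVL add: predicate bit i gates the *whole* i-th subvector
def subvl_add(ireg, RT, RA, RB, VL, SUBVL, predval):
    for i in range(VL):
        if predval & (1 << i):
            for s in range(SUBVL):
                k = i * SUBVL + s
                ireg[RT + k] = ireg[RA + k] + ireg[RB + k]

# two vec3s; the second is masked out as a single unit
regs = [0]*6 + [1, 2, 3, 4, 5, 6] + [10]*6
subvl_add(regs, RT=0, RA=6, RB=12, VL=2, SUBVL=3, predval=0b01)
# regs[0:6] == [11, 12, 13, 0, 0, 0]
```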

# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection and multiple copying of
vec2/3/4 elements. Taking XXXZ as the source operand, for example, places
three copies of the vec4 first element (vec4[0]) at positions vec4[0],
vec4[1] and vec4[2], whilst the "Z" element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle, this is a huge saving, and it reduces the pressure on register
files caused by having to use significant numbers of mv operations to get
vector elements to "line up".

In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those constants in the swizzle encoding:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[RT+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[RT+s] <= 0.0
        elif remap == 5:
            ireg[RT+s] <= 1.0

Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needing to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.
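
The pseudocode above can be exercised directly. This Python sketch uses
the selector encodings just described (0-3 select X/Y/Z/W, 4 and 5 insert
constants, 6/7 leave the element untouched); everything else — function
name, data values — is invented for illustration:

```python
# Decode a 12-bit swizzle immediate: 3 bits per destination element.
def swizzle_vec4(dst, src, imm, SUBVL=4):
    for s in range(SUBVL):
        remap = (imm >> (3 * s)) & 0b111
        if remap < 4:
            dst[s] = src[remap]   # select X/Y/Z/W from the source
        elif remap == 4:
            dst[s] = 0.0          # constant-insert 0.0
        elif remap == 5:
            dst[s] = 1.0          # constant-insert 1.0
        # 6 and 7: leave dst[s] untouched (built-in predicate mask)
    return dst

src = [10.0, 20.0, 30.0, 40.0]              # X Y Z W
xxxz = (0 << 0) | (0 << 3) | (0 << 6) | (2 << 9)   # selectors X,X,X,Z
out = swizzle_vec4([0.0] * 4, src, xxxz)    # -> [10.0, 10.0, 10.0, 30.0]
```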

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required is so large
(12 bits per vec4 source), that in SV the tradeoff was decided in favour
of adding only mv.swizzle.

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination, *all* the sequential ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.
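
The twin-predication loop can be modelled in a few lines of Python.
This sketch assumes both operands are vectors (the scalar cases from the
pseudocode are omitted for brevity), and the register-file model and the
guard on predicate exhaustion are illustrative assumptions; setting the
destination predicate to all 1s demonstrates the VCOMPRESS degenerate case:

```python
# Twin-predicated move: the source predicate compresses, the destination
# predicate expands. All-1s on one side degenerates to VCOMPRESS/VEXPAND.
def twin_pred_mv(regs, RT, RS, ps, pd, VL):
    i = j = 0
    while i < VL and j < VL:
        while not (ps & (1 << i)):   # skip masked-out source elements
            i += 1
            if i == VL: return
        while not (pd & (1 << j)):   # skip masked-out dest elements
            j += 1
            if j == VL: return
        regs[RT + j] = regs[RS + i]
        i += 1
        j += 1

regs = [1, 2, 3, 4, 0, 0, 0, 0]
twin_pred_mv(regs, RT=4, RS=0, ps=0b1010, pd=0b1111, VL=4)
# source elements 1 and 3 are compressed down to dest elements 0 and 1
```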

The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did it would be called [[sv/mv.x]]. Once Vectorised, it's a
VGATHER/VSCATTER.
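
For completeness, here is a sketch of what a Vectorised [[sv/mv.x]]
(i.e. VGATHER) would do, using the `regs[rd] = regs[regs[rs]]` pattern
from the text; the register-file layout is an illustrative assumption:

```python
# VGATHER via a hypothetical Vectorised mv.x: each destination element is
# fetched from the register *named by* the corresponding index element.
def sv_mv_x(regs, RT, RA, VL):
    for i in range(VL):
        regs[RT + i] = regs[regs[RA + i]]

regs = [10, 20, 30, 40,   # data in r0-r3
        3, 2, 1, 0,       # indices in r4-r7
        0, 0, 0, 0]       # destination r8-r11
sv_mv_x(regs, RT=8, RA=4, VL=4)
# r8-r11 now hold the data reversed: 40, 30, 20, 10
```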

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than or equal to zero.
What if a test could be done, similar to branch BO testing, which hooked
into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.
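
A runnable Python model of the loop above makes the "every op becomes a
kind of `cmp`" behaviour visible. The simplified three-bit CR tuple and
the helper names are assumptions for illustration only:

```python
# Predicate-result: the CR for every element is always computed and
# stored, but the integer result is written back only when the chosen CR
# bit matches the BO-style test.
def analyse(result):
    return (int(result < 0), int(result > 0), int(result == 0))  # lt,gt,eq

def pred_result_add(ra, rb, bo_bit, bo_val):
    results, crs = [], []
    for a, b in zip(ra, rb):
        result = a + b
        cr = analyse(result)
        crs.append(cr)             # the Vector of CRs is always written
        results.append(result if cr[bo_bit] == bo_val else None)
    return results, crs

# keep only strictly-positive sums (test the "gt" bit)
res, crs = pred_result_add([1, -5, 3], [1, 2, -3], bo_bit=1, bo_val=1)
# res is [2, None, None]; crs records why elements 1 and 2 failed
```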

Here, for example, if FP overflow occurred and the CR testing was carried
out for that condition, all valid results would be stored but invalid
ones would not; in addition, the Vector of CRs would contain the
indicators of which ones failed. With the invalid results simply not
being written, this could save resources (fewer register file writes).

Also expected, given that the predicate mask is effectively ANDed with
the post-result analysis as a secondary type of predication, are savings
in those operations where the post-result analysis, if not included in
SV, would need a second predicate calculation followed by a
predicate-mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (crand,
cror) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD being cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.

What if, then, rather than cancelling an entire Vector LD because the
last operation would cause a page fault, the Vector were instead
truncated to the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated using RVV:

    strncpy:
        c.mv a3, a0             # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        c.bgez a4, exit         # Done
        csrr t1, vl             # Get number of bytes fetched
        c.add a1, a1, t1        # Bump src pointer
        c.sub a2, a2, t1        # Decrement count.
        c.add a3, a3, t1        # Bump dst pointer
        c.bnez a2, loop         # Anymore?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first page-faulting
byte-level LD. Otherwise, with more powerful hardware the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.
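
The VL-truncation behaviour can be modelled very simply. In this sketch a
dict stands in for paged memory, with a missing key playing the part of a
page fault — purely an illustrative assumption:

```python
# Fail-on-first load: stop at the first faulting element and truncate VL
# to the number of elements that succeeded. A fault on element 0 cannot
# be hidden: it must trap, exactly as a scalar load would.
def ffirst_load(mem, base, VL):
    data = []
    for i in range(VL):
        addr = base + i
        if addr not in mem:            # models a page fault at this address
            if i == 0:
                raise MemoryError("fault on first element: take the trap")
            break                      # truncate: new VL = i
        data.append(mem[addr])
    return data, len(data)

mem = {0: ord('h'), 1: ord('i'), 2: 0}   # "hi\0"; address 3 onwards faults
data, new_vl = ffirst_load(mem, base=0, VL=8)
# new_vl is 3: the strncpy-style zero-byte search only ever sees valid bytes
```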

SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

This is particularly useful, again, for FP operations that might overflow,
where it is desirable to end the loop early but also desirable to
complete at least those operations that were okay (passed the test),
without having to slow down execution with extra instructions that test,
in advance of the actual calculation, for the possibility of failure.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that mode, at least, is
entirely paralleliseable.
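
The difference from pred-result is easy to see in executable form. In
this sketch the `ok` callable stands in for the CR/BO test; the function
and parameter names are illustrative assumptions:

```python
# Data-dependent fail-first: unlike pred-result, testing is inherently
# sequential, and VL is truncated at the first element failing the test.
def ddffirst(ra, rb, ok):
    out = []
    for a, b in zip(ra, rb):
        result = a + b
        if not ok(result):
            return out, len(out)   # truncate VL: successes only
        out.append(result)
    return out, len(out)

# stop as soon as a sum "overflows" the (toy) limit of 10; compare with
# pred-result, which would have tested every element independently
res, new_vl = ddffirst([1, 2, 30, 4], [1, 1, 100, 1], ok=lambda r: r < 10)
# res is [2, 3] and new_vl is 2: element 2 failed, element 3 never ran
```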

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it is under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.
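
In Python terms the deterministic behaviour described above boils down to
the following sketch. Capping at MAXVL is one plausible reading of
"within the limit of MAXVL"; the draft [[sv/setvl]] page is authoritative
on the actual semantics and encoding:

```python
# setvl semantics as described: MAXVL fixes how many scalar registers
# back the vector, and VL is *required* to become the requested element
# count (capped at MAXVL) - no hardware discretion, unlike RVV.
def setvl(requested, MAXVL):
    assert MAXVL >= 1              # MAXVL must be explicitly specified
    return min(requested, MAXVL)
```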

The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into an OpenPOWER
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorised, following some simple rules.
