1 # SV Overview
2
3 **SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.
4
5 This document provides an overview and introduction as to why SV (a
6 [[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works.
7
8 Links:
9
10 * This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview)
11 * [FOSDEM2021 SimpleV for OpenPOWER](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorisation/)
12 * [[discussion]] and
13 [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556)
14 feel free to add comments, questions.
15 * [[SV|sv]]
16 * [[sv/svp64]]
17
18 Contents:
19
20 [[!toc]]
21
22 # Introduction: SIMD and Cray Vectors
23
24 SIMD, the primary method for easy parallelism of the
25 past 30 years in Computer Architectures, is
26 [known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
27 SIMD provides a seductive simplicity that is easy to implement in
28 hardware. With each doubling in width it promises increases in raw
29 performance without the complexity of either multi-issue or out-of-order
30 execution.
31
32 Unfortunately, even with predication added, SIMD only becomes more and
33 more problematic with each power of two SIMD width increase introduced
34 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
35 spirals out of control in the ISA, detrimentally impacting the hardware,
36 the software, the compilers and the testing and compliance. Here are
37 the typical dimensions that result in such massive proliferation:
38
39 * Operation (add, mul)
40 * bitwidth (8, 16, 32, 64, 128)
41 * Conversion between bitwidths (FP16-FP32-64)
42 * Signed/unsigned
43 * HI/LO swizzle (Audio L/R channels)
44 - HI/LO selection on src 1
45 - selection on src 2
46 - selection on dest
47 - Example: AndesSTAR Audio DSP
48 * Saturation (Clamping at max range)
49
50 These typically are multiplied up to produce explicit opcodes numbering
51 in the thousands on, for example, the ARC Video/DSP cores.
52
53 Cray-style variable-length Vectors on the other hand result in
54 stunningly elegant and small loops, exceptionally high data throughput
55 per instruction (one *or more* orders of magnitude greater than SIMD),
56 with none of the alarmingly large setup and cleanup code, where at the
57 hardware level the microarchitecture may execute from one element right
58 the way through to tens of thousands at a time, yet the executable
59 remains exactly the same and the ISA remains clear, true to the RISC
60 paradigm, and clean. Unlike in SIMD, powers of two limitations are not
61 involved in the ISA or in the assembly code.
62
63 SimpleV takes the Cray style Vector principle and applies it in the
64 abstract to a Scalar ISA, in the process allowing register file size
65 increases using "tagging" (similar to how x86 originally extended
66 registers from 32 to 64 bit).
67
68 ## SV
69
70 The fundamentals are:
71
72 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
73 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
74 (where VL is Vector Length)
75 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
76 just as is expected of instructions ordered by the PC.
77 * Some registers may be "tagged" as Vectors
78 * During the loop, "Vector"-tagged registers are incremented by
79 one with each iteration, executing the *same instruction*
80 but with *different registers*
81 * Once the loop is completed *only then* is the Program Counter
82 allowed to move to the next instruction.
83
84 Hardware (and simulator) implementors are free and clear to implement this
85 as literally a for-loop, sitting in between instruction decode and issue.
86 Higher performance systems may deploy SIMD backends, multi-issue and
87 out-of-order execution, although it is strongly recommended to add
88 predication capability directly into SIMD backend units.
89
90 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
91 source and destination have been "tagged" as Vectors, is simply:
92
93 for i = 0 to VL-1:
94 GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
95
96 At its heart, SimpleV really is this simple. On top of this fundamental
97 basis further refinements can be added which build up towards an extremely
98 powerful Vector augmentation system, with very little in the way of
99 additional opcodes required: simply external "context".
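
As an executable illustration, here is a tiny Python model of that loop. All names here (`gpr`, `sv_vadd`, the register numbers) are hypothetical and purely illustrative, not taken from any actual implementation:

```python
# Minimal behavioural sketch of the SV Vector loop shown above.
VL = 4                          # Vector Length, set by external "context"
gpr = [0] * 32                  # the ordinary scalar GPR file

def sv_vadd(RT, RA, RB, vl):
    # both sources and destination "tagged" as Vectors
    for i in range(vl):
        gpr[RT + i] = gpr[RA + i] + gpr[RB + i]

gpr[8:12] = [1, 2, 3, 4]        # RA=8: a 4-element vector
gpr[12:16] = [10, 20, 30, 40]   # RB=12: another 4-element vector
sv_vadd(4, 8, 12, VL)           # RT=4: gpr[4:8] becomes [11, 22, 33, 44]
```

One scalar `add`, one loop, no Vector register file: that really is the whole idea.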
100
101 x86 was originally only 80 instructions: prior to AVX512 over 1,300
102 additional instructions have been added, almost all of them SIMD.
103
104 RISC-V RVV as of version 0.9 is over 188 instructions (more than the
105 rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
106 functionality is added to OpenPOWER v3.0B, by SimpleV augmentation,
107 with around 5 to 8 instructions.
108
109 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
110 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
111 based on SIMD design principles, adds somewhere in the region of 600 more.
112 SimpleV again provides over 95% of VSX functionality, simply by augmenting
113 the *Scalar* OpenPOWER ISA, and in the process providing features such
114 as predication, which VSX is entirely missing.
115
116 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
117 types of register files: Scalar and Vector is the minimum. AVX512
118 even provides a mini mask regfile, followed by explicit instructions
119 that handle operations on each of them *and map between all of them*.
120 SV not only uses the existing scalar regfiles (including CRs),
121 but because operations already exist within OpenPOWER to cover
122 interactions between the scalar regfiles (`mfcr`, `fcvt`), there is
123 very little that needs to be added.
124
125 In fairness to both VSX and RVV, there are things that are not provided
126 by SimpleV:
127
128 * 128 bit or above arithmetic and other operations
129 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
130 * register files above 128 entries
131 * Vector lengths over 64
132 * Unit-strided LD/ST and other comprehensive memory operations
133 (struct-based LD/ST from RVV for example)
134 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
135
136 These limitations, which stem inherently from the adaptation process of
137 starting from a Scalar ISA, are not insurmountable. Over time, they may
138 well be addressed in future revisions of SV.
139
140 The rest of this document builds on the above simple loop to add:
141
142 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
143 (of all register files: Integer, FP *and CRs*)
144 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
145 * Predication masks (essential for parallel if/else constructs)
146 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
147 * Compacted operations into registers (normally only provided by SIMD)
148 * Fail-on-first (introduced in ARM SVE2)
149 * A new concept: Data-dependent fail-first
150 * Condition-Register based *post-result* predication (also new)
151 * A completely new concept: "Twin Predication"
152 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
153
154 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
155 "wrapping context", similar to how v3.1B 64-bit Prefixes work.
156
157 # Adding Scalar / Vector
158
159 The first augmentation to the simple loop is to add the option for all
160 source and destinations to all be either scalar or vector. As a FSM
161 this is where our "simple" loop gets its first complexity.
162
163 function op_add(RT, RA, RB) # add not VADD!
164 int id=0, irs1=0, irs2=0;
165 for i = 0 to VL-1:
166 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
167 if (!RT.isvec) break;
168 if (RT.isvec) { id += 1; }
169 if (RA.isvec) { irs1 += 1; }
170 if (RB.isvec) { irs2 += 1; }
171
172 This could have been written out as eight separate cases: one each for
173 when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
174 when optimally combined, result in the pseudocode above.
175
176 With some walkthroughs it is clear that the loop exits immediately
177 after the first scalar destination result is written, and that when the
178 destination is a Vector the loop proceeds to fill up the register file,
179 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
180 registers will, independently, either remain pointing at `RB` or `RA`
181 respectively, or, if marked as Vectors, will march incrementally in
182 lockstep, producing element results along the way, as the destination
183 also progresses through elements.
184
185 In this way all the eight permutations of Scalar and Vector behaviour
186 are covered, although without predication the scalar-destination ones are
187 reduced in usefulness. It does however clearly illustrate the principle.
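
The FSM above can be sketched as a runnable Python model. Everything here (`ireg`, the `*_vec` flags) is a hypothetical illustration of the pseudocode, not an actual implementation:

```python
# Behavioural sketch of the scalar/vector op_add FSM above.
ireg = list(range(32))      # fill with recognisable values: ireg[n] = n

def op_add(RT, RA, RB, VL, rt_vec, ra_vec, rb_vec):
    id = irs1 = irs2 = 0
    for i in range(VL):
        ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
        if not rt_vec:
            break           # scalar destination: exit after first result
        if rt_vec: id += 1
        if ra_vec: irs1 += 1
        if rb_vec: irs2 += 1

op_add(8, 0, 4, 3, True, True, False)   # vector RT, vector RA, scalar RB
# ireg[8:11] becomes [0+4, 1+4, 2+4] = [4, 5, 6]
op_add(20, 0, 4, 3, False, True, True)  # scalar destination: one result
# ireg[20] = ireg[0] + ireg[4] = 4, then the loop exits immediately
```

Flipping the three flags through all eight combinations walks through every Scalar/Vector permutation using the very same function.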
188
189 Note in particular: there is no separate Scalar add instruction and
190 separate Vector instruction and separate Scalar-Vector instruction, *and
191 there is no separate Vector register file*: it's all the same instruction,
192 on the standard register file, just with a loop. Scalar happens to set
193 that loop size to one.
194
195 The important insight from the above is that, strictly speaking, Simple-V
196 is not really a Vectorisation scheme at all: it is more of a hardware
197 ISA "Compression scheme", allowing as it does for what would normally
198 require multiple sequential instructions to be replaced with just one.
199 This is where the rule that Program Order must be preserved in Sub-PC
200 execution derives from. However in other ways, which will emerge below,
201 the "tagging" concept presents an opportunity to include features
202 definitely not common outside of Vector ISAs, and in that regard it's
203 definitely a class of Vectorisation.
204
205 ## Register "tagging"
206
207 As an aside: in [[sv/svp64]] the encoding which allows SV to both extend
208 the range beyond r0-r31 and to determine whether it is a scalar or vector
209 is encoded in two to three bits, depending on the instruction.
210
211 The reason for using so few bits is that there are up to *four*
212 registers to mark in this way (`fma`, `isel`) which starts to be of
213 concern when there are only 24 available bits to specify the entire SV
214 Vectorisation Context. In fact, for a small subset of instructions it
215 is just not possible to tag every single register. Under these rare
216 circumstances a tag has to be shared between two registers.
217
218 Below is the pseudocode which expresses the relationship which is usually
219 applied to *every* register:
220
221 if extra3_mode:
222 spec = EXTRA3 # bit 2 s/v, 0-1 extends range
223 else:
224 spec = EXTRA2 << 1 # same as EXTRA3, shifted
225 if spec[2]: # vector
226 RA.isvec = True
227 return (RA << 2) | spec[0:1]
228 else: # scalar
229 RA.isvec = False
230 return (spec[0:1] << 5) | RA
231
232 Here we can see that the scalar registers are extended in the top bits,
233 whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
234 Condition Registers have a slightly different scheme, along the same
235 principle, which takes into account the fact that each CR may be bit-level
236 addressed by Condition Register operations.
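
A hedged Python sketch of that decode follows. The bit ordering is an assumption taken from the pseudocode as written (bit 2 the s/v flag, bits 0-1 the range extension, read LSB0-style); the real [[sv/svp64]] field layout should be consulted for the authoritative encoding:

```python
# Sketch of the EXTRA2/EXTRA3 register-tagging decode shown above.
# Assumption: spec bit 2 = scalar/vector flag, bits 0-1 = range extension.
def decode_reg(ra, spec, extra3_mode):
    if not extra3_mode:
        spec = spec << 1            # EXTRA2: same as EXTRA3, shifted
    ext = spec & 0b11               # the two range-extension bits
    if spec & 0b100:                # vector
        return True, (ra << 2) | ext    # shifted up 2, extended in LSBs
    return False, (ext << 5) | ra       # scalar: extended in the top bits

decode_reg(1, 0b000, True)   # plain scalar r1
decode_reg(1, 0b100, True)   # vector starting at r4
decode_reg(1, 0b001, True)   # scalar extended into the upper range: r33
```

Note how scalar registers keep their original numbering when the extension bits are zero, preserving v3.0B behaviour.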
237
238 Readers familiar with OpenPOWER will know of Rc=1 operations that create
239 an associated post-result "test", placing this test into an implicit
240 Condition Register. The original researchers who created the POWER ISA
241 chose CR0 for Integer, and CR1 for Floating Point. These *also become
242 Vectorised* - implicitly - if the associated destination register is
243 also Vectorised. This allows for some very interesting savings on
244 instruction count due to the very same CR Vectors being predication masks.
245
246 # Adding single predication
247
248 The next step is to add a single predicate mask. This is where it gets
249 interesting. Predicate masks are a bitvector, each bit specifying, in
250 order, whether the element operation is to be skipped ("masked out")
251 or allowed. If there is no predicate, the mask is implicitly set to
252 all 1s: every element operation is allowed.
253
254 function op_add(RT, RA, RB) # add not VADD!
255 int id=0, irs1=0, irs2=0;
256 predval = get_pred_val(FALSE, rd);
257 for i = 0 to VL-1:
258 if (predval & 1<<i) # predication bit test
259 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
260 if (!RT.isvec) break;
261 if (RT.isvec) { id += 1; }
262 if (RA.isvec) { irs1 += 1; }
263 if (RB.isvec) { irs2 += 1; }
264
265 The key modification is to skip the creation and storage of the result
266 if the relevant predicate mask bit is clear, but *not the progression
267 through the registers*.
268
269 A particularly interesting case is if the destination is scalar, and the
270 first few bits of the predicate are zero. The loop proceeds to increment
271 the Scalar *source* registers until the first nonzero predicate bit is
272 found, whereupon a single result is computed, and *then* the loop exits.
273 This therefore uses the predicate to perform Vector source indexing.
274 This case was not possible without the predicate mask.
275
276 If all three registers are marked as Vector then the "traditional"
277 predicated Vector behaviour is provided. Yet, just as before, all other
278 options are still provided, right the way back to the pure-scalar case,
279 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
280
281 Single Predication therefore provides several modes traditionally seen
282 in Vector ISAs:
283
284 * VINSERT: the predicate may be set as a single bit, the sources are
285 scalar and the destination a vector.
286 * VSPLAT (result broadcasting) is provided by making the sources scalar
287 and the destination a vector, and having no predicate set or having
288 multiple bits set.
289 * VSELECT is provided by setting up (at least one of) the sources as a
290 vector, using a single bit in the predicate, and the destination as
291 a scalar.
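
These modes can be demonstrated with a small Python model of the predicated loop (a hypothetical sketch, with `regs` standing in for the integer register file):

```python
# Sketch of the single-predicated loop, used for VSPLAT and VSELECT.
def op_add_pred(regs, RT, RA, RB, VL, predval, rt_vec, ra_vec, rb_vec):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):      # predication bit test
            regs[RT + id] = regs[RA + irs1] + regs[RB + irs2]
            if not rt_vec:
                break               # scalar destination: done
        if rt_vec: id += 1          # register progression continues
        if ra_vec: irs1 += 1        # regardless of the predicate bit
        if rb_vec: irs2 += 1

regs = list(range(32))
# VSPLAT: scalar sources, vector destination, all predicate bits set
op_add_pred(regs, 8, 0, 1, 4, 0b1111, True, False, False)
# regs[8:12] all become regs[0] + regs[1] = 1
# VSELECT: vector RA, scalar RB (r0, containing 0), scalar destination,
# a single predicate bit picking element 2
op_add_pred(regs, 20, 4, 0, 4, 0b0100, False, True, False)
# regs[20] = regs[4+2] + regs[0] = 6
```

The VSELECT case shows the predicate performing Vector source indexing: the sources march forward past the masked-out elements before the single scalar result is written.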
292
293 All of this capability and coverage without even adding one single actual
294 Vector opcode, let alone 180, 600 or 1,300!
295
296 # Predicate "zeroing" mode
297
298 Sometimes with predication it is ok to leave the masked-out element
299 alone (not modify the result); sometimes, however, it is better to zero
300 the masked-out elements. Zeroing can be combined with bit-wise ORing to
301 build up vectors from multiple predicate patterns: achieving the same
302 with nonzeroing involves more mv operations and predicate mask
303 operations. Our pseudocode therefore ends up as follows, to take the
304 enhancement into account:
305
306 function op_add(RT, RA, RB) # add not VADD!
307 int id=0, irs1=0, irs2=0;
308 predval = get_pred_val(FALSE, rd);
309 for i = 0 to VL-1:
310 if (predval & 1<<i) # predication bit test
311 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
312 if (!RT.isvec) break;
313 else if zeroing: # predicate failed
314 ireg[RT+id] = 0 # set element to zero
315 if (RT.isvec) { id += 1; }
316 if (RA.isvec) { irs1 += 1; }
317 if (RB.isvec) { irs2 += 1; }
318
319 Many Vector systems either have zeroing or they have nonzeroing, they
320 do not have both. This is because they usually have separate Vector
321 register files. However SV sits on top of standard register files and
322 consequently there are advantages to both, so both are provided.
323
324 # Element Width overrides <a name="elwidths"></a>
325
326 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
327 bit integer operations, and IEEE754 FP32 and 64. Often also included
328 is FP16 and more recently BF16. The *really* good Vector ISAs have
329 variable-width vectors right down to bitlevel, and as high as 1024 bit
330 arithmetic per element, as well as IEEE754 FP128.
331
332 SV has an "override" system that *changes* the bitwidth of operations
333 that were intended by the original scalar ISA designers to have (for
334 example) 64 bit operations (only). The override widths are 8, 16 and
335 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
336 the future).
337
338 This presents a particularly intriguing conundrum given that the OpenPOWER
339 Scalar ISA was never designed with for example 8 bit operations in mind,
340 let alone Vectors of 8 bit.
341
342 The solution comes in terms of rethinking the definition of a Register
343 File. The typical regfile may be considered to be a multi-ported SRAM
344 block, 64 bits wide and usually 32 entries deep, to give 32 64 bit
345 registers. In C this would be:
346
347 typedef uint64_t reg_t;
348 reg_t int_regfile[32]; // standard scalar 32x 64bit
349
350 Conceptually, to get our variable element width vectors,
351 we may think of the regfile as instead being the following C-based data
352 structure, where all types uint16_t etc. are in little-endian order:
353
354 #pragma pack(1)
355 typedef union {
356 uint8_t actual_bytes[8];
357 uint8_t b[0]; // array of type uint8_t
358 uint16_t s[0]; // array of LE ordered uint16_t
359 uint32_t i[0];
360 uint64_t l[0]; // default OpenPOWER ISA uses this
361 } reg_t;
362
363 reg_t int_regfile[128]; // SV extends to 128 regs
364
365 This means that Vector elements start from the location specified by a
366 64 bit "register" but that from that location onwards the elements
367 *overlap subsequent registers*.
368
369 Here is another way to view the same concept, bearing in mind that
370 a LE memory order is assumed:
371
372 uint8_t reg_sram[8*128];
373 uint8_t *actual_bytes = &reg_sram[RA*8];
374 if elwidth == 8:
375 uint8_t *b = (uint8_t*)actual_bytes;
376 b[idx] = result;
377 if elwidth == 16:
378 uint16_t *s = (uint16_t*)actual_bytes;
379 s[idx] = result;
380 if elwidth == 32:
381 uint32_t *i = (uint32_t*)actual_bytes;
382 i[idx] = result;
383 if elwidth == default:
384 uint64_t *l = (uint64_t*)actual_bytes;
385 l[idx] = result;
386
387 Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
388 to 0x01 would mean that:
389 
390 * b[0..1] = 0x00, b[2] = 0x01 and b[3] = 0x00
391 * s[0] = 0x0000 and s[1] = 0x0001
392 * i[0] = 0x00010000
393 * l[0] = 0x0000000000010000
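
This overlap can be verified directly with Python's `struct` module, which makes the LE byte ordering explicit (a self-contained sketch of one 64-bit register; setting byte 2 to 0x01 produces the values listed):

```python
import struct

# One 64-bit register modelled as its 8 LE-ordered actual_bytes.
reg = bytearray(8)       # starting with all zeros
reg[2] = 0x01            # set actual_bytes[2]

b = list(reg)                      # the uint8_t view
s = struct.unpack("<4H", reg)      # the uint16_t view (LE)
i = struct.unpack("<2I", reg)      # the uint32_t view (LE)
l = struct.unpack("<Q", reg)[0]    # the default uint64_t view
# b[2] == 0x01, s[1] == 0x0001, i[0] == 0x00010000,
# l == 0x0000000000010000
```

The same eight bytes, read at four different element widths, give four mutually consistent interpretations.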
394
395 In tabular form, starting an elwidth=8 loop from r0 and extending for
396 16 elements would begin at r0 and extend over the entirety of r1:
397
398 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
399 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
400 r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] |
401 r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |
402
403 Starting an elwidth=16 loop from r0 and extending for
404 7 elements would begin at r0 and extend partly over r1. Note that
405 b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
406 b1 represents the top byte:
407
408 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
409 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
410 r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 |
411 r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified |
412
413 Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
414 b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
415 each element word:
416
417 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
418 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
419 r0 | w[0].b0 b1 b2 b3 | w[1].b0 b1 b2 b3 |
420 r1 | w[2].b0 b1 b2 b3 | unmodified unmodified |
421
422 64-bit (default) elements access the full registers. In each case the
423 register number (`RT`, `RA`) indicates the *starting* point for the storage
424 and retrieval of the elements.
425
426 Our simple loop, instead of accessing the array of regfile entries
427 with a computed index `iregs[RT+i]`, would access the appropriate element
428 of the appropriate width, such as `iregs[RT].s[i]` in order to access
429 16 bit elements starting from RT. Thus we have a series of overlapping
430 conceptual arrays that each start at what is traditionally thought of as
431 "a register". It then helps if we have a couple of routines:
432
433 get_polymorphed_reg(reg, bitwidth, offset):
434 reg_t res = 0;
435 if (!reg.isvec): # scalar
436 offset = 0
437 if bitwidth == 8:
438 res = int_regfile[reg].b[offset]
439 elif bitwidth == 16:
440 res = int_regfile[reg].s[offset]
441 elif bitwidth == 32:
442 res = int_regfile[reg].i[offset]
443 elif bitwidth == default: # 64
444 res = int_regfile[reg].l[offset]
445 return res
446
447 set_polymorphed_reg(reg, bitwidth, offset, val):
448 if (!reg.isvec): # scalar
449 offset = 0
450 if bitwidth == 8:
451 int_regfile[reg].b[offset] = val
452 elif bitwidth == 16:
453 int_regfile[reg].s[offset] = val
454 elif bitwidth == 32:
455 int_regfile[reg].i[offset] = val
456 elif bitwidth == default: # 64
457 int_regfile[reg].l[offset] = val
458
459 These basically provide a convenient parameterised way to access the
460 register file, at an arbitrary vector element offset and an arbitrary
461 element width. Our first simple loop thus becomes:
462
463 for i = 0 to VL-1:
464 src1 = get_polymorphed_reg(RA, srcwid, i)
465 src2 = get_polymorphed_reg(RB, srcwid, i)
466 result = src1 + src2 # actual add here
467 set_polymorphed_reg(rd, destwid, i, result)
468
469 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
470 register will contain three 16 bit addition results, and the upper 16
471 bits will be *unaltered*.
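
A byte-level Python model makes the partial-overwrite behaviour concrete. This is a hedged sketch (names `set_poly`/`get_poly` are invented here): the register file is treated as LE-ordered SRAM, exactly as the canonical union requires:

```python
# Byte-level regfile model of get/set_polymorphed_reg above.
regfile = bytearray(128 * 8)        # 128 SV registers, 8 bytes each, LE

def set_poly(reg, width, offset, val):
    step = width // 8
    a = reg * 8 + offset * step     # element address within the SRAM
    regfile[a:a + step] = val.to_bytes(step, "little")

def get_poly(reg, width, offset):
    step = width // 8
    a = reg * 8 + offset * step
    return int.from_bytes(regfile[a:a + step], "little")

# pre-fill destination r3 with all-ones to expose the partial overwrite
set_poly(3, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)
for i in range(3):                  # elwidth=16, VL=3
    set_poly(1, 16, i, 0x1000 + i)  # src1 elements in r1
    set_poly(2, 16, i, 0x0001)      # src2 elements in r2
    set_poly(3, 16, i, get_poly(1, 16, i) + get_poly(2, 16, i))
# the first 48 bits of r3 now hold the three results;
# the top 16 bits remain 0xFFFF, *unaltered*
```

Reading r3 back at the default 64-bit width shows the three packed results with the untouched all-ones upper halfword still in place.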
472
473 Note that things such as zero/sign-extension (and predication) have
474 been left out to illustrate the elwidth concept. Also note that it
475 turns out to be important to perform the operation internally at effectively an *infinite* bitwidth such that any truncation, rounding errors or
476 other artefacts may all be ironed out. This turns out to be important
477 when applying Saturation for Audio DSP workloads, particularly for multiply and IEEE754 FP rounding. "Infinite" is conceptual only: in reality, the application of the different truncations and width-extensions sets a fixed deterministic practical limit on the internal precision needed, on a per-operation basis.
478
479 Other than that, element width overrides, which can be applied to *either*
480 source or destination or both, are pretty straightforward, conceptually.
481 The details, for hardware engineers, involve byte-level write-enable
482 lines, which is exactly what is used on SRAMs anyway. Compiler writers
483 have to alter Register Allocation Tables to byte-level granularity.
484
485 One critical thing to note: upper parts of the underlying 64 bit
486 register are *not zero'd out* by a write involving a non-aligned Vector
487 Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
488 of the destination. The only situation where a full overwrite occurs
489 is on "default" behaviour. This is why it is extremely important to
490 consider the register file as a byte-level store, not a 64-bit-level store.
491
492 ## Why a LE regfile?
493
494 The concept of having a regfile where the byte ordering of the underlying
495 SRAM matters seems utter nonsense. Surely, a hardware implementation gets to
496 choose the order, right? It's only memory where LE/BE matters, right? The
497 bytes come in, all registers are 64 bit and it's just wiring, right?
498
499 Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
500 style Vector one. The assumption in that last question was, however, "all
501 registers are 64 bit". SV allows SIMD-style packing of vectors into the
502 64 bit registers, where one instruction and the next may interpret that
503 very same register as containing elements of completely different widths.
504
505 Consequently it becomes critically important to decide a byte-order.
506 That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
507 at all: it was such hell to implement BE-supported interpretations of CRs
508 and LD/ST in LibreSOC, based on a terse spec that provides insufficient
509 clarity and assumes significant working knowledge of OpenPOWER, with
510 arbitrary insertions of 7-index here and 3-bitindex there, that the
511 decision to pick LE was extremely easy.
512
513 Without such a decision, if two words are packed as elements into a 64
514 bit register, what does this mean? Should they be inverted so that the
515 lower indexed element goes into the HI or the LO word? Should the 8
516 bytes of each register be inverted? Should the bytes in each element
517 be inverted? Should the element indexing loop order be broken into
518 discontiguous chunks such as 32107654 rather than 01234567, and if so
519 at what granularity of discontinuity? These are all equally valid and
520 legitimate interpretations of what constitutes "BE" and they all cause
521 merry mayhem.
522
523 The decision was therefore made: the C typedef union is the canonical
524 definition, and its members are defined as being in LE order. From there,
525 implementations may choose whatever internal HDL wire order they like
526 as long as the results produced conform to the elwidth pseudocode.
527
528 *Note: it turns out that both x86 SIMD and NEON SIMD follow this convention, namely that both are implicitly LE, even though their ISA Manuals may not explicitly spell this out*
529
530 * <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
531 * <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
532 * <https://llvm.org/docs/BigEndianNEON.html>
533
534
535 ## Source and Destination overrides
536
537 A minor fly in the ointment: what happens if the source and destination
538 are over-ridden to different widths? For example, FP16 arithmetic is
539 not accurate enough and may introduce rounding errors when up-converted
540 to FP32 output. The rule is therefore set:
541
542 The operation MUST take place effectively at infinite precision:
543 actual precision determined by the operation and the operand widths
544
545 In pseudocode this is:
546
547 for i = 0 to VL-1:
548 src1 = get_polymorphed_reg(RA, srcwid, i)
549 src2 = get_polymorphed_reg(RB, srcwid, i)
550 opwidth = max(srcwid, destwid) # usually
551 result = op_add(src1, src2, opwidth) # at max width
552 set_polymorphed_reg(rd, destwid, i, result)
553
554 In reality the source and destination widths determine the actual required
555 precision in a given ALU. The reason for setting "effectively" infinite precision
556 is illustrated, for example, by saturated multiply, where if the internal precision were insufficient it would not be possible to correctly determine that the maximum clip range had been exceeded.
557
558 Thus it will turn out that under some conditions the combination of the
559 extension of the source registers followed by truncation of the result
560 gets rid of bits that didn't matter, and the operation might as well have
561 taken place at the narrower width and could save resources that way.
562 Examples include Logical OR, where the source extension places
563 zeros in the upper bits and truncation of the result simply throws those
564 zeros away.
565
566 Counterexamples include the previously mentioned FP16 arithmetic,
567 where for operations such as division of large numbers by very small
568 ones it should be clear that internal accuracy will play a major role
569 in influencing the result. Hence the rule that the calculation takes
570 place at the maximum bitwidth, and truncation follows afterwards.
571
572 ## Signed arithmetic
573
574 What happens when the operation involves signed arithmetic? Here the
575 implementor has to use common sense, and make sure behaviour is accurately
576 documented. If the result of the unmodified operation is sign-extended
577 because one of the inputs is signed, then the input source operands must
578 be first read at their overridden bitwidth and *then* sign-extended:
579
580 for i = 0 to VL-1:
581 src1 = get_polymorphed_reg(RA, srcwid, i)
582 src2 = get_polymorphed_reg(RB, srcwid, i)
583 opwidth = max(srcwid, destwid)
584 # srces known to be less than result width
585 src1 = sign_extend(src1, srcwid, opwidth)
586 src2 = sign_extend(src2, srcwid, opwidth)
587 result = op_signed(src1, src2, opwidth) # at max width
588 set_polymorphed_reg(rd, destwid, i, result)
589
590 The key here is that the cues are taken from the underlying operation.
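
The `sign_extend` helper assumed by the loop above can be sketched in a few lines of Python (an illustrative model; values are treated as unsigned bit patterns, as a regfile would hold them):

```python
# Sketch of sign-extension from srcwid up to opwidth.
def sign_extend(val, srcwid, opwidth):
    sign_bit = 1 << (srcwid - 1)
    if val & sign_bit:              # negative at the source width
        # fill the bits between srcwid and opwidth with ones
        val |= ((1 << opwidth) - 1) ^ ((1 << srcwid) - 1)
    return val

sign_extend(0xFF, 8, 16)   # -1 at 8 bit remains -1 at 16 bit: 0xFFFF
sign_extend(0x7F, 8, 16)   # +127 is unchanged: 0x007F
```

Whether this extension is applied at all is the cue taken from the underlying scalar operation: unsigned operations simply zero-extend instead.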
591
592 ## Saturation
593
594 Audio DSPs need to be able to clip sound when the "volume" is adjusted,
595 but if it is too loud and the signal wraps, distortion occurs. The
596 solution is to clip (saturate) the audio and allow this to be detected.
597 In practical terms this is a post-result analysis however it needs to
598 take place at the largest bitwidth i.e. before a result is element width
599 truncated. Only then can the arithmetic saturation condition be detected:
600
601 for i = 0 to VL-1:
602 src1 = get_polymorphed_reg(RA, srcwid, i)
603 src2 = get_polymorphed_reg(RB, srcwid, i)
604 opwidth = max(srcwid, destwid)
605 # unsigned add
606 result = op_add(src1, src2, opwidth) # at max width
607 # now saturate (unsigned)
608 sat = min(result, (1<<destwid)-1)
609 set_polymorphed_reg(rd, destwid, i, sat)
610 # set sat overflow
611 if Rc=1:
612 CR[i].ov = (sat != result)
613
614 So the actual computation took place at the larger width, but was
615 post-analysed as an unsigned operation. If however "signed" saturation
616 is requested then the actual arithmetic operation has to be carefully
617 analysed to see what that actually means.
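
A minimal Python sketch of the unsigned case (hypothetical helper name; the overflow flag stands in for the CR "ov" analysis):

```python
# Unsigned post-result saturation: compute at full precision,
# then clamp to the destination element width.
def sat_add_unsigned(a, b, destwid):
    result = a + b                        # "infinite" (full) precision
    sat = min(result, (1 << destwid) - 1) # clamp to max unsigned range
    return sat, sat != result             # value, overflow detected

sat_add_unsigned(200, 100, 8)   # 300 clamps to 255, overflow set
sat_add_unsigned(10, 20, 8)     # 30 fits, no overflow
```

Had the addition been performed at 8 bits, 200 + 100 would have wrapped to 44 and the clip condition would have been undetectable, which is exactly why the computation must precede the truncation.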
618
619 In terms of FP arithmetic, which by definition has a sign bit (so
620 always takes place as a signed operation anyway), the request to saturate
621 to signed min/max is pretty clear. However for integer arithmetic such
622 as shift (plain shift, not arithmetic shift), or logical operations
623 such as XOR, which were never designed to have the assumption that its
624 inputs be considered as signed numbers, common sense has to kick in,
625 and follow what CR0 does.
626
627 CR0 for Logical operations still applies: the test is still applied to
628 produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
629 do the same thing: although the input operands for an OR or XOR can
630 in no way be thought of as "signed" we may at least consider the result
631 to be signed, and thus apply min/max range detection of -128 to +127 when
632 truncating down to 8 bit for example.
633
634 for i = 0 to VL-1:
635 src1 = get_polymorphed_reg(RA, srcwid, i)
636 src2 = get_polymorphed_reg(RB, srcwid, i)
637 opwidth = max(srcwid, destwid)
638 # logical op, signed has no meaning
639 result = op_xor(src1, src2, opwidth)
640 # now saturate (signed)
641 sat = min(result, (1<<(destwid-1))-1)
642 sat = max(sat, -(1<<(destwid-1)))
643 set_polymorphed_reg(rd, destwid, i, sat)
644
645 Overall here the rule is: apply common sense then document the behaviour
646 really clearly, for each and every operation.
647
648 # Quick recap so far
649
650 The above functionality pretty much covers around 85% of Vector ISA needs.
651 Predication is provided so that parallel if/then/else constructs can
652 be performed: critical given that sequential if/then statements and
653 branches simply do not translate successfully to Vector workloads.
654 VSPLAT capability is provided which is approximately 20% of all GPU
655 workload operations. Also covered, with elwidth overriding, are the
656 smaller arithmetic operations that caused ISAs developed from the
657 late 80s onwards to get themselves into a tizzy when adding "Multimedia"
658 acceleration aka "SIMD" instructions.
659
660 Experienced Vector ISA readers will however have noted that VCOMPRESS
661 and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability
662 and VGATHER and VSCATTER. Compress and Expand are covered by Twin
663 Predication, and yet to also be covered is fail-on-first, CR-based result
664 predication, and Subvectors and Swizzle.
665
## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where the src and dest registers are still incremented inside
the inner part. Predication is still taken from the VL index, however it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

The primary reason for this is because Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.
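A minimal executable sketch of the inner loop above, using a flat Python list as a toy register file (the function name, register numbering and VL/SUBVL values are illustrative, not part of the spec):

```python
def subvl_add(regs, RT, RA, RB, VL, SUBVL, predval):
    """Vector add with subvectors: one predicate bit gates SUBVL elements."""
    for i in range(VL):
        if predval & (1 << i):          # predication is per *subvector*
            for s in range(SUBVL):      # inner loop over subvector elements
                regs[RT + i*SUBVL + s] = (regs[RA + i*SUBVL + s] +
                                          regs[RB + i*SUBVL + s])
    return regs

# vec4 (SUBVL=4), VL=2: only the first subvector is predicated in,
# so the second destination subvector is left untouched
regs = subvl_add(list(range(32)), 0, 8, 16, VL=2, SUBVL=4, predval=0b01)
```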

# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection
and multiple copying of vec2/3/4 elements, such as XXXZ as the source
operand, which takes three copies of the vec4 first element (vec4[0]),
placing them at positions vec4[0], vec4[1] and vec4[2], whilst the "Z"
element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle, this is a huge saving and reduces pressure on register files
due to having to use significant numbers of mv operations to get vector
elements to "line up".

In SV, given the percentage of operations that also involve initialisation
to 0.0 or 1.0 into subvector elements, the decision was made to include
those:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[rd+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[rd+s] <= 0.0
        elif remap == 5:
            ireg[rd+s] <= 1.0

Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large
(12 bits per vec4 source), that the tradeoff was decided in SV in favour
of adding only mv.swizzle.
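The 12-bit swizzle immediate above can be modelled in a few lines of Python. This is a toy model, not spec pseudocode, and the helper name is made up:

```python
def apply_swizzle(dst, src, swizzle, subvl=4):
    """3 bits per dest element: 0-3 select a src element, 4 writes 0.0,
    5 writes 1.0, and 6/7 leave the dest element untouched."""
    for s in range(subvl):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap < 4:
            dst[s] = src[remap]
        elif remap == 4:
            dst[s] = 0.0
        elif remap == 5:
            dst[s] = 1.0
        # 6 and 7: the built-in predicate mask, dst[s] is untouched
    return dst

# "XXXZ": select elements 0, 0, 0, 2 of the source vec4
xxxz = (0 << 0) | (0 << 3) | (0 << 6) | (2 << 9)
result = apply_swizzle([0.0] * 4, [10.0, 20.0, 30.0, 40.0], xxxz)
```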

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, to LD/ST operations and even to `rlwinm` and other operations
taking a single source and immediate(s), such as `addi`. All of these
are termed single-source, single-destination.

LD/ST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination, *all* the sequentially ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did, it would be called [[sv/mv.x]]. Once Vectorised, it's a
VGATHER/VSCATTER.
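As a concrete illustration, here is a Python model of the twin-predicated `mv` loop above (a toy register-file sketch with illustrative names, not the spec pseudocode): an all-1s destination predicate gives VCOMPRESS, an all-1s source predicate gives VEXPAND.

```python
def twin_pred_mv(regs, RT, RS, VL, ps, pd):
    """Copy predicated-in source elements to predicated-in dest slots."""
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)):   # skip masked-out sources
            i += 1
        while j < VL and not (pd & (1 << j)):   # skip masked-out dests
            j += 1
        if i >= VL or j >= VL:
            break
        regs[RT + j] = regs[RS + i]
        i += 1
        j += 1
    return regs

# VCOMPRESS: sparse source predicate, all-1s destination predicate.
# Source elements 1 and 3 are compressed down into dest slots 0 and 1.
regs = twin_pred_mv([0]*8 + [10, 20, 30, 40], 0, 8, 4, ps=0b1010, pd=0b1111)
```

Swapping the predicates (all-1s source, sparse destination) turns the same loop into VEXPAND, which is the degeneration described above.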

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.

Here, for example, if FP overflow occurred and CR testing was carried
out for that, all valid results would be stored but invalid ones would
not; in addition, the Vector of CRs would contain the indicators of
which ones failed. With the invalid results simply not being written,
this could save resources (fewer register file writes).

Also expected, due to the fact that the predicate mask is effectively
ANDed with the post-result analysis as a secondary type of predication,
are savings in some types of operations where the post-result analysis,
if not included in SV, would need a second predicate calculation followed
by a predicate-mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (crand,
cror) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.
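An executable sketch of the pred-result loop, heavily simplified: a dict stands in for a real CR field, and the BO-style test is reduced to one field/value pair (all names here are illustrative, not spec pseudocode):

```python
def analyse(result):
    """Toy CR analysis: calculates eq/lt/gt of a result against zero."""
    return {'lt': result < 0, 'gt': result > 0, 'eq': result == 0}

def pred_result_op(op, ra, rb, test_field, test_value, rc1=False):
    rt, crs = [None] * len(ra), []
    for idx, (a, b) in enumerate(zip(ra, rb)):
        result = op(a, b)
        cr = analyse(result)
        crs.append(cr)                  # the Vector of CRs is always stored
        if not rc1 and cr[test_field] == test_value:
            rt[idx] = result            # the result is stored only on test pass
    return rt, crs

# keep only subtraction results that are not negative ('lt' must be False)
rt, crs = pred_result_op(lambda a, b: a - b, [5, 2, 9], [3, 7, 9], 'lt', False)
```

With `rc1=True` the function degenerates into a pure Vector `cmp`: the CR Vector is produced but no results are ever written back.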

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up with in-flight data, a large Vector LD being cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector is instead truncated to
the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated in RVV:

    strncpy:
        c.mv a3, a0             # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        c.bgez a4, exit         # Done
        csrr t1, vl             # Get number of bytes fetched
        c.add a1, a1, t1        # Bump src pointer
        c.sub a2, a2, t1        # Decrement count.
        c.add a3, a3, t1        # Bump dst pointer
        c.bnez a2, loop         # Anymore?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first page-faulting
byte-level LD. Otherwise, with more powerful hardware the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.

SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.
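The key behaviour, truncating VL rather than raising an exception, can be modelled like this (the memory model, function name and fault address are toy stand-ins, not SV state):

```python
def vl_load_ffirst(memory, addr, vl, fault_at):
    """Fail-on-first load: truncate VL at the first faulting element.
    A fault on the very first element must still trap as usual."""
    data = []
    for i in range(vl):
        if addr + i >= fault_at:    # this element would page-fault
            if i == 0:
                raise MemoryError("fault on first element: trap")
            break                   # otherwise truncate VL, no exception
        data.append(memory[addr + i])
    return data, len(data)          # the truncated VL is reported back

# ask for 8 bytes starting at offset 8, but only 3 are accessible:
# VL silently truncates to 3 and all returned bytes are valid
mem = list(b"hello world")
data, new_vl = vl_load_ffirst(mem, 8, 8, fault_at=11)
```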

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

This is particularly useful, again, for FP operations that might overflow:
it is desirable to end the loop early, but also desirable to complete at
least those operations that were okay (passed the test), without slowing
down execution by adding extra instructions that test in advance for the
possibility of that failure, before doing the actual calculation.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that, at least, is
entirely paralleliseable.
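Data-dependent fail-first reduces to a few lines of Python, with an `ok` callback standing in for the CR analysis plus BO test (illustrative names, not spec pseudocode):

```python
def dd_fail_first(op, ra, rb, ok):
    """Stop at the first element whose result fails the test, truncating VL."""
    rt, vl = [], len(ra)
    for i in range(len(ra)):
        result = op(ra[i], rb[i])
        if not ok(result):      # analyse() + BO test, collapsed into `ok`
            vl = i              # truncate: only successes remain valid
            break
        rt.append(result)
    return rt, vl

# stop at the first sum that "overflows" an 8-bit unsigned limit:
# 200+100=300 fails the test, so VL truncates to 1
rt, vl = dd_fail_first(lambda a, b: a + b, [100, 200, 5], [50, 100, 5],
                       ok=lambda r: r <= 255)
```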

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it is under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.
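That rule, and the contrast with RVV, fits in one line (an illustrative sketch only: the real `setvl` also manages state such as MAXVL itself):

```python
def setvl(requested_vl, maxvl):
    """SV rule: within the limit of MAXVL, VL is set to exactly the
    requested value. (RVV, by contrast, permits the hardware to choose
    any legal VL up to the request.)"""
    return min(requested_vl, maxvl)

vl = setvl(5, 8)    # hardware *must* set VL to 5, not some smaller value
```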

The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into an OpenPOWER
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorised, following some simple rules.

945