# SV Overview

**SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.

This document provides an overview of, and introduction to, why SV (a
[[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works.

**Sponsored by NLnet under the Privacy and Enhanced Trust Programme**

Links:

* This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview)
* [FOSDEM2021 SimpleV for OpenPOWER](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorisation/)
* [[discussion]] and
  [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556):
  feel free to add comments and questions.
* [[SV|sv]]
* [[sv/svp64]]

Contents:

[[!toc]]

# Introduction: SIMD and Cray Vectors

SIMD, the primary method for easy parallelism in Computer Architectures
over the past 30 years, is
[known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
SIMD provides a seductive simplicity that is easy to implement in
hardware. With each doubling in width it promises increases in raw
performance without the complexity of either multi-issue or out-of-order
execution.

Unfortunately, even with predication added, SIMD only becomes more and
more problematic with each power-of-two SIMD width increase introduced
through an ISA revision. The opcode proliferation, at O(N^6), inexorably
spirals out of control in the ISA, detrimentally impacting the hardware,
the software, the compilers, and testing and compliance. Here are
the typical dimensions that result in such massive proliferation:

* Operation (add, mul)
* bitwidth (8, 16, 32, 64, 128)
* Conversion between bitwidths (FP16-FP32-64)
* Signed/unsigned
* HI/LO swizzle (Audio L/R channels)
   - HI/LO selection on src 1
   - selection on src 2
   - selection on dest
   - Example: AndesSTAR Audio DSP
* Saturation (Clamping at max range)

These typically are multiplied up to produce explicit opcodes numbering
in the thousands on, for example, the ARC Video/DSP cores.

Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, exceptionally high data throughput
per instruction (by one *or more* orders of magnitude over SIMD), with
none of the alarmingly high setup and cleanup code. At the hardware level
the microarchitecture may execute from one element right the way through
to tens of thousands at a time, yet the executable remains exactly the
same and the ISA remains clear, true to the RISC paradigm, and clean.
Unlike in SIMD, powers-of-two limitations are not involved in the ISA
or in the assembly code.

SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the process allowing register file size
increases using "tagging" (similar to how x86 originally extended
registers from 32 to 64 bit).

## SV

The fundamentals are:

* The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
* Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
  (where VL is Vector Length)
* The [[Program Order]] of "Sub-PC" instructions must be preserved,
  just as is expected of instructions ordered by the PC.
* Some registers may be "tagged" as Vectors
* During the loop, "Vector"-tagged register numbers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*
* Once the loop is completed, *only then* is the Program Counter
  allowed to move on to the next instruction.

Hardware (and simulator) implementors are free and clear to implement this
as literally a for-loop, sitting in between instruction decode and issue.
Higher-performance systems may deploy SIMD backends, multi-issue and
out-of-order execution, although it is strongly recommended to add
predication capability directly into SIMD backend units.

In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
source and destination have been "tagged" as Vectors, is simply:

    for i = 0 to VL-1:
        GPR(RT+i) = GPR(RA+i) + GPR(RB+i)

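The loop above can be sketched in executable form. This is a minimal Python model of the concept only, not a real simulator; the register numbers and VL below are illustrative values, not part of any real encoding:

```python
def sv_add(gpr, RT, RA, RB, VL):
    """Element-for-element add: GPR(RT+i) = GPR(RA+i) + GPR(RB+i)."""
    for i in range(VL):
        gpr[RT + i] = gpr[RA + i] + gpr[RB + i]

gpr = [0] * 32
gpr[8:12] = [1, 2, 3, 4]        # source Vector starting at r8 (RA)
gpr[16:20] = [10, 20, 30, 40]   # source Vector starting at r16 (RB)
sv_add(gpr, 24, 8, 16, 4)       # destination Vector at r24, VL=4
# gpr[24:28] now holds [11, 22, 33, 44]
```

Note that the "Vector" is nothing more than a contiguous run of ordinary scalar registers, walked by the loop.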
At its heart, SimpleV really is this simple. On top of this fundamental
basis further refinements can be added which build up towards an extremely
powerful Vector augmentation system, with very little in the way of
additional opcodes required: simply external "context".

x86 was originally only 80 instructions: prior to AVX512 over 1,300
additional instructions have been added, almost all of them SIMD.

RISC-V RVV as of version 0.9 is over 188 instructions (more than the
rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
functionality is added to OpenPOWER v3.0B, by SimpleV augmentation,
with around 5 to 8 instructions.

Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
instructions, with IEEE754 FP adding approximately 80 more. VSX, being
based on SIMD design principles, adds somewhere in the region of 600 more.
SimpleV again provides over 95% of VSX functionality, simply by augmenting
the *Scalar* OpenPOWER ISA, and in the process providing features such
as predication, which VSX is entirely missing.

AVX512, SVE2, VSX, RVV: all of these systems have to provide different
types of register files: Scalar and Vector is the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs),
but because operations already exist within OpenPOWER to cover interactions
between the scalar regfiles (`mfcr`, `fcvt`) there is very little that
needs to be added.

In fairness to both VSX and RVV, there are things that are not provided
by SimpleV:

* 128 bit or above arithmetic and other operations
  (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
* register files above 128 entries
* Vector lengths over 64
* Unit-strided LD/ST and other comprehensive memory operations
  (struct-based LD/ST from RVV for example)
* 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.

These limitations, which stem inherently from the adaptation process of
starting from a Scalar ISA, are not insurmountable. Over time, they may
well be addressed in future revisions of SV.

The rest of this document builds on the above simple loop to add:

* Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
  (of all register files: Integer, FP *and CRs*)
* Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc.)
* Predication masks (essential for parallel if/else constructs)
* 8, 16 and 32 bit integer operations, and both FP16 and BF16.
* Compacted operations into registers (normally only provided by SIMD)
* Fail-on-first (introduced in ARM SVE2)
* A new concept: Data-dependent fail-first
* Condition-Register based *post-result* predication (also new)
* A completely new concept: "Twin Predication"
* vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)

All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
"wrapping context", similar to how v3.1B 64-bit Prefixes work.

# Adding Scalar / Vector

The first augmentation to the simple loop is to add the option for each
of the sources and the destination to be either scalar or vector. As an
FSM this is where our "simple" loop gets its first complexity.

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        for i = 0 to VL-1:
            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
            if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

This could have been written out as eight separate cases: one each for
when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
when optimally combined, result in the pseudocode above.

Walking through a few cases makes it clear that the loop exits immediately
after the first scalar destination result is written, and that when the
destination is a Vector the loop proceeds to fill up the register file,
sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
registers will, independently, either remain pointing at `RB` or `RA`
respectively, or, if marked as Vectors, will march incrementally in
lockstep, producing element results along the way, as the destination
also progresses through elements.

In this way all eight permutations of Scalar and Vector behaviour
are covered, although without predication the scalar-destination ones are
reduced in usefulness. It does however clearly illustrate the principle.

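The FSM pseudocode above translates directly into Python. This is a sketch for illustration only; the `isvec` dictionary stands in for the per-register "tag" and the register numbers are arbitrary choices:

```python
def op_add(ireg, RT, RA, RB, VL, isvec):
    """isvec maps register number -> True if tagged as a Vector."""
    id = irs1 = irs2 = 0
    for i in range(VL):
        ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
        if not isvec[RT]:
            break                    # scalar destination: one result, done
        if isvec[RT]: id += 1        # each Vector-tagged register
        if isvec[RA]: irs1 += 1      # steps to the next element;
        if isvec[RB]: irs2 += 1      # scalar ones stay put

# scalar RA (r1), vector RB (r8..r11), vector RT (r16..r19):
ireg = [0] * 32
ireg[1] = 100
ireg[8:12] = [1, 2, 3, 4]
isvec = {1: False, 8: True, 16: True}
op_add(ireg, 16, 1, 8, 4, isvec)
# ireg[16:20] is now [101, 102, 103, 104]: the scalar source is re-used
```

This is the Scalar-Vector permutation: the scalar operand is effectively broadcast against the marching Vector operand.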
Note in particular: there is no separate Scalar add instruction, no
separate Vector instruction and no separate Scalar-Vector instruction, *and
there is no separate Vector register file*: it's all the same instruction,
on the standard register file, just with a loop. Scalar happens to set
that loop size to one.

The important insight from the above is that, strictly speaking, Simple-V
is not really a Vectorisation scheme at all: it is more of a hardware
ISA "Compression scheme", allowing as it does for what would normally
require multiple sequential instructions to be replaced with just one.
This is where the rule that Program Order must be preserved in Sub-PC
execution derives from. However in other ways, which will emerge below,
the "tagging" concept presents an opportunity to include features
definitely not common outside of Vector ISAs, and in that regard it's
definitely a class of Vectorisation.

## Register "tagging"

As an aside: in [[sv/svp64]] the encoding which allows SV to both extend
the range beyond r0-r31 and to determine whether a register is scalar or
vector is encoded in two to three bits, depending on the instruction.

The reason for using so few bits is that there are up to *four*
registers to mark in this way (`fma`, `isel`), which starts to be of
concern when there are only 24 available bits to specify the entire SV
Vectorisation Context. In fact, for a small subset of instructions it
is just not possible to tag every single register. Under these rare
circumstances a tag has to be shared between two registers.

Below is the pseudocode which expresses the relationship which is usually
applied to *every* register:

    if extra3_mode:
        spec = EXTRA3 # bit 2 s/v, bits 0-1 extend range
    else:
        spec = EXTRA2 << 1 # same as EXTRA3, shifted
    if spec[2]: # vector
        RA.isvec = True
        return (RA << 2) | spec[0:1]
    else: # scalar
        RA.isvec = False
        return (spec[0:1] << 5) | RA

Here we can see that the scalar registers are extended in the top bits,
whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
Condition Registers have a slightly different scheme, along the same
principle, which takes into account the fact that each CR may be bit-level
addressed by Condition Register operations.

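The decode above can be sketched as follows. This is only an illustration of the principle: it assumes LSB0 bit numbering for the 3-bit `spec` field (bit 2 the scalar/vector flag, bits 0-1 the range extension), mirroring the pseudocode rather than the authoritative svp64 field definitions:

```python
def decode_reg(RA, spec):
    """RA: 5-bit register field; spec: 3-bit EXTRA tag (LSB0 assumed).
    Returns (extended register number, isvec)."""
    if spec & 0b100:                         # bit 2 set: vector
        return (RA << 2) | (spec & 0b11), True   # shift up, extend in LSBs
    else:                                    # scalar
        return ((spec & 0b11) << 5) | RA, False  # extend in the top bits

# scalar r5, no extension: unchanged, preserving v3.0B behaviour
# vector-tagged r5: shifted up by 2 and extended in the LSBs
print(decode_reg(5, 0b000))   # (5, False)
print(decode_reg(5, 0b101))   # (21, True)
```

The key property is that an all-zero tag leaves a scalar register number untouched, which is what keeps unmodified v3.0B semantics intact.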
Readers familiar with OpenPOWER will know of Rc=1 operations that create
an associated post-result "test", placing this test into an implicit
Condition Register. The original researchers who created the POWER ISA
chose CR0 for Integer, and CR1 for Floating Point. These *also become
Vectorised* - implicitly - if the associated destination register is
also Vectorised. This allows for some very interesting savings on
instruction count, due to the very same CR Vectors being usable as
predication masks.

# Adding single predication

The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If there is no predicate, the mask is set to all 1s, which is
effectively the same as "no predicate".

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

The key modification is to skip the creation and storage of the result
if the relevant predicate mask bit is clear, but *not the progression
through the registers*.

A particularly interesting case is when the destination is scalar and the
first few bits of the predicate are zero. The loop proceeds to increment
the Vector *source* registers until the first nonzero predicate bit is
found, whereupon a single result is computed, and *then* the loop exits.
This therefore uses the predicate to perform Vector source indexing.
This case was not possible without the predicate mask.

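The scalar-destination case just described can be demonstrated directly. A minimal Python sketch (register numbers and predicate value are illustrative):

```python
def op_add_pred(ireg, RT, RA, RB, VL, isvec, predval):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):       # predication bit test
            ireg[RT + id] = ireg[RA + irs1] + ireg[RB + irs2]
            if not isvec[RT]:
                break                # scalar dest: exit after first result
        if isvec[RT]: id += 1        # progression through registers
        if isvec[RA]: irs1 += 1      # happens even for masked-out
        if isvec[RB]: irs2 += 1      # elements

# scalar RT (r3), vector RA (r8..), vector RB (r16..), predicate 0b0100:
# the sources march past the two masked-out elements, then element 2
# (3 + 30) is computed into the scalar destination and the loop exits.
ireg = [0] * 32
ireg[8:12] = [1, 2, 3, 4]
ireg[16:20] = [10, 20, 30, 40]
isvec = {3: False, 8: True, 16: True}
op_add_pred(ireg, 3, 8, 16, 4, isvec, 0b0100)
# ireg[3] is now 33
```

This is exactly the "Vector source indexing via predicate" behaviour: a VSELECT-like operation with zero new opcodes.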
If all three registers are marked as Vector then the "traditional"
predicated Vector behaviour is provided. Yet, just as before, all other
options are still provided, right the way back to the pure-scalar case,
as if this were a straight OpenPOWER v3.0B non-augmented instruction.

Single Predication therefore provides several modes traditionally seen
in Vector ISAs:

* VINSERT: the predicate may be set as a single bit, the sources are
  scalar and the destination a vector.
* VSPLAT (result broadcasting) is provided by making the sources scalar
  and the destination a vector, and having no predicate set or having
  multiple bits set.
* VSELECT is provided by setting up (at least one of) the sources as a
  vector, using a single bit in the predicate, and the destination as
  a scalar.

All of this capability and coverage without adding even one single actual
Vector opcode, let alone 180, 600 or 1,300!

# Predicate "zeroing" mode

Sometimes with predication it is OK to leave the masked-out element
alone (not modify the result); however sometimes it is better to zero the
masked-out elements. Zeroing can be combined with bit-wise ORing to build
up vectors from multiple predicate patterns: achieving the same with
nonzeroing involves more mv operations and predicate mask operations.
Our pseudocode therefore ends up as follows, to take the enhancement
into account:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing: # predicate failed
                ireg[RT+id] = 0 # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

Many Vector systems either have zeroing or they have nonzeroing; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of standard register files and
consequently there are advantages to both, so both are provided.

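The difference between the two modes can be seen side by side. A simplified sketch (all three registers assumed Vector-tagged, destination preloaded so the nonzeroing behaviour is visible):

```python
def op_add_z(ireg, RT, RA, RB, VL, pred, zeroing):
    # simplified: all three registers vector-tagged
    for i in range(VL):
        if pred & (1 << i):
            ireg[RT + i] = ireg[RA + i] + ireg[RB + i]
        elif zeroing:
            ireg[RT + i] = 0     # masked-out element zeroed
        # nonzeroing: masked-out element left completely untouched

ireg = [0] * 32
ireg[8:12] = [1, 2, 3, 4]
ireg[16:20] = [10, 20, 30, 40]

ireg[24:28] = [99, 99, 99, 99]               # preloaded destination
op_add_z(ireg, 24, 8, 16, 4, 0b0101, zeroing=True)
# ireg[24:28] is now [11, 0, 33, 0]

ireg[24:28] = [99, 99, 99, 99]
op_add_z(ireg, 24, 8, 16, 4, 0b0101, zeroing=False)
# ireg[24:28] is now [11, 99, 33, 99]
```

Running the zeroing variant with an inverted predicate and then ORing the results is how vectors may be merged from multiple predicate patterns.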
# Element Width overrides <a name="elwidths"></a>

All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
bit integer operations, and IEEE754 FP32 and 64. Often also included
is FP16 and, more recently, BF16. The *really* good Vector ISAs have
variable-width vectors right down to bit level, and as high as 1024 bit
arithmetic per element, as well as IEEE754 FP128.

SV has an "override" system that *changes* the bitwidth of operations
that were intended by the original scalar ISA designers to have (for
example) 64 bit operations (only). The override widths are 8, 16 and
32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
the future).

This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with, for example, 8 bit operations in mind,
let alone Vectors of 8 bit.

The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. In C this would be:

    typedef uint64_t reg_t;
    reg_t int_regfile[32]; // standard scalar 32x 64bit

Conceptually, to get our variable element width vectors,
we may think of the regfile as instead being the following C-based data
structure, where all types uint16_t etc. are in little-endian order:

    #pragma(packed)
    typedef union {
        uint8_t  actual_bytes[8];
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0]; // array of LE ordered uint16_t
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs

This means that Vector elements start from locations specified by 64 bit
"registers" but that from that location onwards the elements *overlap
subsequent registers*.

Here is another way to view the same concept, bearing in mind that a LE
memory order is assumed:

    uint8_t reg_sram[8*128];
    uint8_t *actual_bytes = &reg_sram[RA*8];
    if elwidth == 8:
        uint8_t *b = (uint8_t*)actual_bytes;
        b[idx] = result;
    if elwidth == 16:
        uint16_t *s = (uint16_t*)actual_bytes;
        s[idx] = result;
    if elwidth == 32:
        uint32_t *i = (uint32_t*)actual_bytes;
        i[idx] = result;
    if elwidth == default:
        uint64_t *l = (uint64_t*)actual_bytes;
        l[idx] = result;

Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
to 0x01 would mean that:

* b[0..1] = 0x00, b[2] = 0x01 and b[3] = 0x00
* s[0] = 0x0000 and s[1] = 0x0001
* i[0] = 0x00010000
* l[0] = 0x0000000000010000

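The overlapping-union view can be checked with Python's `struct` module, which makes the little-endian byte order explicit. This reproduces the worked example above on an emulated two-register (16-byte) slice of the SRAM:

```python
import struct

# emulate the LE union over a slice of reg_sram covering r0 and r1
reg_sram = bytearray(16)
struct.pack_into("<B", reg_sram, 2, 0x01)   # actual_bytes[2] = 0x01

b = struct.unpack_from("<8B", reg_sram, 0)  # the b[] view
s = struct.unpack_from("<4H", reg_sram, 0)  # the s[] view (LE uint16)
i = struct.unpack_from("<2I", reg_sram, 0)  # the i[] view (LE uint32)
l = struct.unpack_from("<Q",  reg_sram, 0)  # the l[] view (LE uint64)

# all four views alias the very same storage:
# b[2] == 0x01, s[1] == 0x0001, i[0] == 0x00010000, l[0] == 0x10000
```

The same single byte is visible through every width simultaneously, which is precisely the property the elwidth overrides rely upon.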
In tabular form, starting an elwidth=8 loop from r0 and extending for
16 elements would begin at r0 and extend over the entirety of r1:

    | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
    | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
 r0 | b[0]  | b[1]  | b[2]  | b[3]  | b[4]  | b[5]  | b[6]  | b[7]  |
 r1 | b[8]  | b[9]  | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |

Starting an elwidth=16 loop from r0 and extending for
7 elements would begin at r0 and extend partly over r1. Note that
b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
b1 represents the top byte:

    | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
    | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
 r0 | s[0].b0 b1    | s[1].b0 b1    | s[2].b0 b1    | s[3].b0 b1    |
 r1 | s[4].b0 b1    | s[5].b0 b1    | s[6].b0 b1    | unmodified    |

Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
each element word:

    | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
    | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
 r0 | w[0].b0 b1 b2 b3              | w[1].b0 b1 b2 b3              |
 r1 | w[2].b0 b1 b2 b3              | unmodified                    |

64-bit (default) elements access the full registers. In each case the
register number (`RT`, `RA`) indicates the *starting* point for the storage
and retrieval of the elements.

Our simple loop, instead of accessing the array of regfile entries
with a computed index `iregs[RT+i]`, would access the appropriate element
of the appropriate width, such as `iregs[RT].s[i]` in order to access
16 bit elements starting from RT. Thus we have a series of overlapping
conceptual arrays that each start at what is traditionally thought of as
"a register". It then helps if we have a couple of routines:

    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val

These basically provide a convenient parameterised way to access the
register file, at an arbitrary vector element offset and an arbitrary
element width. Our first simple loop thus becomes:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, i, result)

With this loop, if elwidth=16 and VL=3, the first 48 bits of the target
register will contain three 16 bit addition results, and the upper 16
bits will be *unaltered*.

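The polymorphic accessors and the "upper 16 bits unaltered" property can be modelled with plain Python integers standing in for the 64-bit SRAM rows. This is a simplified sketch (no predication, widths restricted to 8/16/32/64 so elements never straddle a row):

```python
MASK = {8: 0xFF, 16: 0xFFFF, 32: 0xFFFFFFFF, 64: (1 << 64) - 1}

def get_elem(regfile, reg, width, off):
    """Read element `off` of width `width`, starting at register `reg`."""
    bitpos = off * width
    return (regfile[reg + bitpos // 64] >> (bitpos % 64)) & MASK[width]

def set_elem(regfile, reg, width, off, val):
    """Write an element, touching only its own bytes of the 64-bit row."""
    bitpos = off * width
    idx, sh = reg + bitpos // 64, bitpos % 64
    regfile[idx] &= ~(MASK[width] << sh) & MASK[64]   # clear element slot
    regfile[idx] |= (val & MASK[width]) << sh          # insert new value

regfile = [0] * 8
regfile[1] = MASK[64]                       # destination r1: all-ones
regfile[2] = 1 | (2 << 16) | (3 << 32)      # r2: 16-bit elements [1, 2, 3]
regfile[3] = 10 | (20 << 16) | (30 << 32)   # r3: 16-bit elements [10, 20, 30]
for i in range(3):                          # VL=3, elwidth=16
    result = get_elem(regfile, 2, 16, i) + get_elem(regfile, 3, 16, i)
    set_elem(regfile, 1, 16, i, result)
# elements of r1 are [11, 22, 33]; bits 48-63 remain 0xFFFF, unaltered
```

Note how `set_elem` only masks and replaces the element's own bit-slot: this is the software analogue of byte-level write-enable lines.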
Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it turns
out to be important to perform the operation internally at effectively
an *infinite* bitwidth, such that any truncation, rounding errors or
other artefacts may all be ironed out. This turns out to be important
when applying Saturation for Audio DSP workloads, particularly for
multiply and IEEE754 FP rounding. "Infinite" is conceptual only: in
reality, the application of the different truncations and width-extensions
sets a fixed, deterministic, practical limit on the internal precision
needed, on a per-operation basis.

Other than that, element width overrides, which can be applied to *either*
source or destination or both, are pretty straightforward, conceptually.
The details, for hardware engineers, involve byte-level write-enable
lines, which is exactly what is used on SRAMs anyway. Compiler writers
have to alter Register Allocation Tables to byte-level granularity.

One critical thing to note: upper parts of the underlying 64 bit
register are *not zero'd out* by a write involving a non-aligned Vector
Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is on "default" 64-bit behaviour. It is therefore extremely important to
consider the register file as a byte-level store, not a 64-bit-level store.

## Why a LE regfile?

The concept of a regfile where the byte ordering of the underlying
SRAM matters seems utter nonsense. Surely a hardware implementation gets
to choose the order, right? It's only in memory that LE/BE matters, right?
The bytes come in, all registers are 64 bit and it's just wiring, right?

Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
style Vector one. The assumption in that last question was, however, "all
registers are 64 bit". SV allows SIMD-style packing of vectors into the
64 bit registers, where one instruction and the next may interpret that
very same register as containing elements of completely different widths.

Consequently it becomes critically important to decide on a byte order.
That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
at all: it was such hell to implement BE-supported interpretations of CRs
and LD/ST in LibreSOC, based on a terse spec that provides insufficient
clarity and assumes significant working knowledge of OpenPOWER, with
arbitrary insertions of 7-index here and 3-bitindex there, that the
decision to pick LE was extremely easy.

Without such a decision, if two words are packed as elements into a 64
bit register, what does this mean? Should they be inverted so that the
lower indexed element goes into the HI or the LO word? Should the 8
bytes of each register be inverted? Should the bytes in each element
be inverted? Should the element indexing loop order be broken onto
discontiguous chunks such as 32107654 rather than 01234567, and if so
at what granularity of discontinuity? These are all equally valid and
legitimate interpretations of what constitutes "BE" and they all cause
merry mayhem.

The decision was therefore made: the C typedef union is the canonical
definition, and its members are defined as being in LE order. From there,
implementations may choose whatever internal HDL wire order they like
as long as the results produced conform to the elwidth pseudocode.

*Note: it turns out that both x86 SIMD and NEON SIMD follow this
convention, namely that both are implicitly LE, even though their ISA
Manuals may not explicitly spell this out*

* <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
* <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
* <https://llvm.org/docs/BigEndianNEON.html>

## Source and Destination overrides

A minor fly in the ointment: what happens if the source and destination
are over-ridden to different widths? For example, FP16 arithmetic is
not accurate enough and may introduce rounding errors when up-converted
to FP32 output. The rule is therefore set:

    The operation MUST take place effectively at infinite precision:
    actual precision determined by the operation and the operand widths

In pseudocode this is:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid) # usually
        result = op_add(src1, src2, opwidth) # at max width
        set_polymorphed_reg(RT, destwid, i, result)

In reality the source and destination widths determine the actual required
precision in a given ALU. The reason for specifying "effectively" infinite
precision is illustrated, for example, by Saturated-multiply, where if the
internal precision were insufficient it would not be possible to correctly
determine whether the maximum clip range had been exceeded.

Thus it will turn out that under some conditions the combination of the
extension of the source registers followed by truncation of the result
gets rid of bits that didn't matter, and the operation might as well have
taken place at the narrower width, saving resources that way.
An example is Logical OR, where the source extension places
zeros in the upper bits and the result truncation throws those
zeros away.

Counterexamples include the previously mentioned FP16 arithmetic,
where for operations such as division of large numbers by very small
ones it should be clear that internal accuracy will play a major role
in influencing the result. Hence the rule that the calculation takes
place at the maximum bitwidth, and truncation follows afterwards.

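The compute-at-max-width-then-truncate rule is easiest to see with integers. A minimal sketch (unsigned add only; widths and values illustrative):

```python
def widened_op(src1, src2, srcwid, destwid):
    """Compute at the max of src/dest widths, then truncate to destwid."""
    opwid = max(srcwid, destwid)
    result = (src1 + src2) & ((1 << opwid) - 1)  # op at the max width
    return result & ((1 << destwid) - 1)         # truncate to dest width

# 8-bit sources written to a 16-bit destination: because the add is
# performed at 16 bits, the carry out of bit 7 is preserved...
wide = widened_op(0xFF, 0x02, 8, 16)    # 0x101
# ...whereas a naive 8-bit-wide add would have wrapped:
narrow = (0xFF + 0x02) & 0xFF           # 0x01
```

For a pure logical operation such as OR the two approaches give identical results, which is exactly the "bits that didn't matter" observation above.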
## Signed arithmetic

What happens when the operation involves signed arithmetic? Here the
implementor has to use common sense, and make sure behaviour is accurately
documented. If the result of the unmodified operation is sign-extended
because one of the inputs is signed, then the input source operands must
first be read at their overridden bitwidth and *then* sign-extended:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # srces known to be less than result width
        src1 = sign_extend(src1, srcwid, opwidth)
        src2 = sign_extend(src2, srcwid, opwidth)
        result = op_signed(src1, src2, opwidth) # at max width
        set_polymorphed_reg(RT, destwid, i, result)

The key here is that the cues are taken from the underlying operation.

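A sketch of the sign-extension step, using a standard two's-complement trick (simplified to return a Python integer rather than a fixed-width value):

```python
def sign_extend(v, frm):
    """Sign-extend a frm-bit two's-complement value to a Python int."""
    sign = 1 << (frm - 1)
    v &= (1 << frm) - 1
    return (v ^ sign) - sign   # flips the sign bit into a subtraction

# 8-bit sources 0xFE (-2) and 0x01 (+1), 16-bit destination:
opwid = 16
a = sign_extend(0xFE, 8)                  # -2
b = sign_extend(0x01, 8)                  #  1
result = (a + b) & ((1 << opwid) - 1)     # back to 16-bit representation
# result is 0xFFFF: -1 correctly sign-extended into the wider element
```

Had the 8-bit source been zero-extended instead, the result would have been 0x00FF, which is wrong for a signed add: hence the rule that the cue (signed vs unsigned) comes from the underlying scalar operation.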
## Saturation

Audio DSPs need to be able to clip sound when the "volume" is adjusted,
because if it is too loud and the signal wraps, distortion occurs. The
solution is to clip (saturate) the audio and allow this to be detected.
In practical terms this is a post-result analysis; however it needs to
take place at the largest bitwidth, i.e. before the result is element-width
truncated. Only then can the arithmetic saturation condition be detected:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth) # at max width
        # now saturate (unsigned): clamp to the dest-width maximum
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(RT, destwid, i, sat)
        # set sat overflow
        if Rc=1:
            CR[i].ov = (sat != result)

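The per-element unsigned clamp can be sketched directly (a simplified model: the full-precision add is just Python integer arithmetic, and the returned flag stands in for the per-element CR overflow bit):

```python
def sat_add_unsigned(src1, src2, destwid):
    """Unsigned saturating add: returns (clamped value, overflow flag)."""
    result = src1 + src2             # computed at full precision
    maxval = (1 << destwid) - 1      # dest-width unsigned maximum
    sat = min(result, maxval)        # clamp down to the representable range
    return sat, sat != result        # overflow detected post-result

# 8-bit destination: 200 + 100 = 300 clips to 255, overflow flagged
print(sat_add_unsigned(200, 100, 8))   # (255, True)
print(sat_add_unsigned(100, 100, 8))   # (200, False)
```

Had the add been performed at only 8 bits, 200 + 100 would have wrapped to 44 and the clip condition would have been undetectable, which is exactly why the computation must occur at the larger width.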
So the actual computation took place at the larger width, but was
post-analysed as an unsigned operation. If however "signed" saturation
is requested then the actual arithmetic operation has to be carefully
analysed to see what that actually means.

In terms of FP arithmetic, which by definition has a sign bit (so
always takes place as a signed operation anyway), the request to saturate
to signed min/max is pretty clear. However for integer arithmetic such
as shift (plain shift, not arithmetic shift), or logical operations
such as XOR, which were never designed with the assumption that their
inputs be considered as signed numbers, common sense has to kick in,
and follow what CR0 does.

CR0 for Logical operations still applies: the test is still applied to
produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
do the same thing: although the input operands for an OR or XOR can
in no way be thought of as "signed", we may at least consider the result
to be signed, and thus apply min/max range detection of -128 to +127 when
truncating down to 8 bit, for example.

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed): clamp to the signed dest-width range
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(RT, destwid, i, sat)

Overall here the rule is: apply common sense, then document the behaviour
really clearly, for each and every operation.

# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which is approximately 20% of all GPU
workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tiz when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability,
as are VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; yet to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index, however it
is applied to the whole subvector:

    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

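The all-vector case of the loop above can be sketched in executable
Python. The register numbers, VL/SUBVL values and predicate are invented
for illustration; the register file is modelled as a flat array of vec3
(SUBVL=3) operands, and each predicate bit covers a whole subvector.

```python
# flat register file; RT, RA, RB are (hypothetical) starting registers
ireg = list(range(32))
VL, SUBVL = 2, 3
RT, RA, RB = 16, 0, 8
predval = 0b01            # VL element 0 active, element 1 masked out

for i in range(VL):
    if predval & (1 << i):  # predication on the whole subvector
        for s in range(SUBVL):
            ireg[RT + i*SUBVL + s] = ireg[RA + i*SUBVL + s] \
                                   + ireg[RB + i*SUBVL + s]

print(ireg[16:22])  # [8, 10, 12, 19, 20, 21]
```

The masked-out second subvector (registers 19-21) is left completely
untouched, exactly as a Shader Compiler treating vec3 as a "single unit"
would expect.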
The primary reason for this is that Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.

# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection
and multiple copying of vec2/3/4 elements. With XXXZ as the source
operand, for example, three copies of the vec4 first element (vec4[0])
are placed at positions vec4[0], vec4[1] and vec4[2], whilst the "Z"
element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle this is a huge saving and reduces pressure on register files
due to having to use significant numbers of mv operations to get vector
elements to "line up".

In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those constants:

    swizzle = get_swizzle_immed()  # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[rd+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[rd+s] <= 0.0
        elif remap == 5:
            ireg[rd+s] <= 1.0

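The same per-subelement remap can be written as executable Python. This
is an illustrative sketch of one vec4, not the mv.swizzle encoding
itself; the element values are invented.

```python
def swizzle(vec4, imm):
    # imm holds 3 bits per destination subelement: 0-3 select a source
    # lane, 4 writes 0.0, 5 writes 1.0, 6/7 leave the element untouched
    out = list(vec4)
    for s in range(len(vec4)):
        remap = (imm >> (3 * s)) & 0b111
        if remap < 4:
            out[s] = vec4[remap]
        elif remap == 4:
            out[s] = 0.0
        elif remap == 5:
            out[s] = 1.0
    return out

# "XXXZ": dest[0..2] take X (lane 0), dest[3] takes Z (lane 2)
X, Y, Z, W = 1.5, 2.5, 3.5, 4.5
imm = 0 | (0 << 3) | (0 << 6) | (2 << 9)
print(swizzle([X, Y, Z, W], imm))  # [1.5, 1.5, 1.5, 3.5]
```

An immediate of all-6s leaves the destination untouched, which is the
built-in predicate-mask behaviour described below.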
Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built in,
in immediate form, to the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needing to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large
(12 bits per vec4 source), that on balance the decision in SV was to
add only mv.swizzle.

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS);  # predication on src
        pd = get_pred_val(FALSE, RT);  # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

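The vector-to-vector case of the loop above can be sketched in
executable Python. This is illustrative only: the register layout,
predicate values and bounds checks (omitted in the pseudocode) are
invented for the example.

```python
def twin_pred_mv(regs, RT, RS, VL, ps, pd):
    # back-to-back VCOMPRESS (src predicate ps) and VEXPAND (dest
    # predicate pd), both operands assumed to be vectors
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)): i += 1  # skip masked src
        while j < VL and not (pd & (1 << j)): j += 1  # skip masked dest
        if i == VL or j == VL:
            break
        regs[RT + j] = regs[RS + i]
        i += 1
        j += 1
    return regs

# ps all 1s, sparse pd: pure VEXPAND of the first two source elements
regs = [10, 20, 30, 40, 0, 0, 0, 0]
print(twin_pred_mv(regs, 4, 0, 4, 0b1111, 0b1010))
```

With `ps` all 1s the operation degenerates to VEXPAND, exactly as the
text describes; an all-1s `pd` with a sparse `ps` gives VCOMPRESS.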
Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally only what VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination, *all* the sequentially ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

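As one illustration of those patterns: a scalar (non-vector) source
combined with a vector destination gives VSPLAT. The sketch below is
Python for illustration only; register numbers and the predicate are
invented, and the function name is not from the SV spec.

```python
def splat_mv(regs, RT, RS, VL, pd):
    # scalar source, vector destination: the same scalar value is
    # copied to every unmasked destination element (VSPLAT)
    j = 0
    for _ in range(VL):
        while j < VL and not (pd & (1 << j)): j += 1
        if j == VL:
            break
        regs[RT + j] = regs[RS]  # src index never increments: scalar
        j += 1
    return regs

regs = [7, 0, 0, 0, 0]
print(splat_mv(regs, 1, 0, 4, 0b1111))  # [7, 7, 7, 7, 7]
```

Setting only a single bit in `pd` turns the same loop into VINSERT:
the scalar lands in exactly one destination element.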
The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did it would be called [[sv/mv.x]]. Once Vectorised, it's a
VGATHER/VSCATTER.

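The `regs[rd] = regs[regs[rs]]` loop mentioned above looks like this in
executable Python. The register numbers and index values are invented
for the example; this is a VGATHER sketch, not the mv.x definition.

```python
# VGATHER as a Vectorised mv.x sketch: a vector of indices starting at
# rs selects which registers are copied into the destination at rd
regs = [100, 101, 102, 103, 3, 0, 2, 1]  # regs[4:8] hold the indices
rd, rs, VL = 8, 4, 4
regs += [0] * VL                          # space for the destination
for i in range(VL):
    regs[rd + i] = regs[regs[rs + i]]
print(regs[8:12])  # [103, 100, 102, 101]
```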
# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than, or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue  # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result)  # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue  # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

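An executable Python sketch of the same loop follows. It simplifies the
BO encoding to a (bit-index, expected-value) pair over a (lt, gt, eq)
CR-field tuple; the operand values, predicate and function names are all
invented for illustration.

```python
def pred_result(RA_vals, RB_vals, pred, BO, RC1, op):
    # returns (results written to RT, vector of CR fields)
    results, crs = {}, {}
    for i in range(len(RA_vals)):
        if not (pred & (1 << i)):
            continue                 # predicate-masked-out: skip
        result = op(RA_vals[i], RB_vals[i])
        # CR field analysis against zero: (lt, gt, eq)
        cr = (result < 0, result > 0, result == 0)
        crs[i] = cr                  # CR vector is always stored
        if RC1:
            continue                 # RC1: CR stored, result never
        if cr[BO[0]] == BO[1]:       # branch-style BO test
            results[i] = result
    return results, crs

# store only the non-zero sums: test "eq bit must be False"
res, crs = pred_result([1, -2, 0, 3], [0, 0, 0, 0], 0b1111,
                       (2, False), False, lambda a, b: a + b)
print(res)  # {0: 1, 1: -2, 3: 3}
```

Element 2 fails the test (its sum is zero) so its result is never
written, yet its CR field is still recorded in the vector of CRs.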
Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.

Here, for example, if FP overflow occurred and CR testing was carried
out for it, all valid results would be stored but invalid ones would
not; in addition, the Vector of CRs would contain indicators of which
ones failed. With the invalid results simply not being written, this
could save resources (register file writes).

Also expected, given that the predicate mask is effectively ANDed with
the post-result analysis as a secondary type of predication, is that
there would be savings to be had in some types of operations where the
post-result analysis, if not included in SV, would need a second
predicate calculation followed by a predicate-mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (crand,
cror) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD that is cancelled
or unable to roll back is either a detriment to performance or a cause
of data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector were instead truncated
to the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated from RVV:

    strncpy:
        c.mv a3, a0             # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        c.bgez a4, exit         # Done
        csrr t1, vl             # Get number of bytes fetched
        c.add a1, a1, t1        # Bump src pointer
        c.sub a2, a2, t1        # Decrement count.
        c.add a3, a3, t1        # Bump dst pointer
        c.bnez a2, loop         # Anymore?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first page-faulting
byte-level LD. Otherwise, with more powerful hardware the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.

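The VL-truncation behaviour can be modelled in a few lines of Python.
This is a toy memory model with an invented (tiny) page size, purely to
show how a fail-on-first byte load shortens VL instead of faulting:

```python
PAGE = 16  # hypothetical tiny page size, for illustration only

def lbff(mem, addr, VL):
    # fail-on-first byte load: truncate VL at the first byte whose
    # address would page-fault, rather than faulting the whole vector
    out = []
    for i in range(VL):
        if (addr + i) not in mem:  # unmapped address: would fault
            break
        out.append(mem[addr + i])
    return out, len(out)           # loaded data, truncated VL

# only one page mapped: a 10-byte load starting at 12 truncates to 4
mem = {a: a for a in range(PAGE)}
data, vl = lbff(mem, 12, 10)
print(vl)  # 4
```

The caller (as in the strncpy loop above) then simply reads back the
truncated VL and bumps its pointers by that amount.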
SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00  # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue  # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result)  # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i  # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

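The truncation behaviour can be shown as executable Python. This is a
simplified sketch: the CR test is reduced to a pass/fail predicate on
the result, predication is omitted, and the operand values are invented.

```python
def ddffirst(RA_vals, RB_vals, op, ok):
    # data-dependent fail-first: truncate VL at the first result that
    # fails the CR-style test, keeping every earlier success
    results = []
    for i in range(len(RA_vals)):
        result = op(RA_vals[i], RB_vals[i])
        if not ok(result):
            return results, i        # VL truncated to i
        results.append(result)
    return results, len(RA_vals)

# stop at the first zero sum: only the successes before it survive
res, vl = ddffirst([1, 2, 3, 4], [1, -2, 9, 9],
                   lambda a, b: a + b, lambda r: r != 0)
print(res, vl)  # [2] 1
```

Unlike pred-result mode, elements after the failure are never computed
at all, which is what makes the mode inherently sequential.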
This is particularly useful, again, for FP operations that might overflow,
where it is desirable to end the loop early but also desirable to
complete at least those operations that were okay (passed the test),
without having to slow down execution by adding extra instructions that
test in advance for the possibility of that failure, before doing the
actual calculation.

The only minor downside here, though, is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that mode at least is
entirely paralleliseable.

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it's under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.

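The VL-setting semantics described above can be captured in one line of
Python. This is a behavioural sketch of the rule only, not the setvl
encoding or its register side-effects:

```python
def setvl(requested_vl, MAXVL):
    # SV rule: within the limit of MAXVL, VL is set to exactly the
    # requested value (unlike RVV, where hardware may choose any VL)
    assert MAXVL >= 1
    return min(requested_vl, MAXVL)

print(setvl(10, 8), setvl(3, 8))  # 8 3
```

The determinism matters: software can rely on the requested VL being
honoured exactly, whereas RVV code must always read back the VL that
the hardware actually granted.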
The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into an OpenPOWER
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorised, following some simple rules.
