# SV Overview

**SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER
Foundation ISA WG for review.

This document provides an overview and introduction as to why SV (a
[[!wikipedia Cray]]-style Vector augmentation to
[[!wikipedia OpenPOWER]]) exists, and how it works.

**Sponsored by NLnet under the Privacy and Enhanced Trust Programme**

Links:

* This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview)
* [FOSDEM2021 SimpleV for Power ISA](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorization/)
* FOSDEM2021 presentation <https://www.youtube.com/watch?v=FS6tbfyb2VA>
* [[discussion]] and
  [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556):
  feel free to add comments and questions.
* [[SV|sv]]
* [[sv/svp64]]
* [x86 REP instruction](https://c9x.me/x86/html/file_module_x86_id_279.html):
  a useful way to quickly understand that the core of the SV concept
  is not new.
* [Article about register tagging](http://science.lpnu.ua/sites/default/files/journal-paper/2019/jul/17084/volum3number1text-9-16_1.pdf) showing
  that tagging is not a new idea either. Register tags
  are also used in the Mill Architecture.

Table of contents:

[[!toc]]

# Introduction: SIMD and Cray Vectors

SIMD, the primary method for easy parallelism of the
past 30 years in Computer Architectures, is
[known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
SIMD provides a seductive simplicity that is easy to implement in
hardware. With each doubling in width it promises increases in raw
performance without the complexity of either multi-issue or out-of-order
execution.

Unfortunately, even with predication added, SIMD only becomes more and
more problematic with each power-of-two SIMD width increase introduced
through an ISA revision. The opcode proliferation, at O(N^6), inexorably
spirals out of control in the ISA, detrimentally impacting the hardware,
the software, the compilers and the testing and compliance. Here are
the typical dimensions that result in such massive proliferation,
based on mass-volume DSPs and Micro-Processors:

* Operation (add, mul)
* bitwidth (8, 16, 32, 64, 128)
* Conversion between bitwidths (FP16-FP32-64)
* Signed/unsigned
* HI/LO swizzle (Audio L/R channels)
  - HI/LO selection on src 1
  - selection on src 2
  - selection on dest
  - Example: AndesSTAR Audio DSP
* Saturation (Clamping at max range)

These typically are multiplied up to produce explicit opcodes numbering
in the thousands on, for example, the ARC Video/DSP cores.

Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, with exceptionally high data
throughput per instruction (one *or greater* orders of magnitude higher
than SIMD), and none of the alarmingly high setup and cleanup code. At
the hardware level the microarchitecture may execute from one element
right the way through to tens of thousands at a time, yet the executable
remains exactly the same and the ISA remains clear, true to the RISC
paradigm, and clean. Unlike in SIMD, powers-of-two limitations are not
involved in the ISA or in the assembly code.

SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the same way that the x86 "REP" instruction
does. In the process, "context" is applied, allowing amongst other things
a register file size increase using "tagging" (similar to how x86
originally extended registers from 32 to 64 bit).

![Single-Issue concept](/openpower/svp64-primer/img/power_pipelines.svg){ width=40% height=20% }

## SV

The fundamentals are (just like x86 "REP"):

* The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
* Vectorization pauses the PC and runs a Sub-PC loop from 0 to VL-1
  (where VL is Vector Length)
* The [[Program Order]] of "Sub-PC" instructions must be preserved,
  just as is expected of instructions ordered by the PC.
* Some registers may be "tagged" as Vectors
* During the loop, "Vector"-tagged registers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*
* Once the loop is completed, *only then* is the Program Counter
  allowed to move on to the next instruction.

![Multi-Issue with Predicated SIMD back-end ALUs](/openpower/svp64-primer/img/sv_multi_issue.svg){ width=40% height=40% }

Hardware (and simulator) implementors are free and clear to implement this
as literally a for-loop, sitting in between instruction decode and issue.
Higher performance systems may deploy SIMD backends, multi-issue and
out-of-order execution, although it is strongly recommended to add
predication capability directly into the SIMD backend units.

A typical Cray-style Scalable Vector ISA (where a SIMD one has a fixed
non-negotiable static parameter instead of a runtime-dynamic VL)
performs its arithmetic as:

    for i = 0 to VL-1:
       VPR(RT)[i] = VPR(RA)[i] + VPR(RB)[i]

In Power ISA v3.0B pseudo-code form, an ADD operation in Simple-V,
assuming both source and destination have been "tagged" as Vectors,
is simply:

    for i = 0 to VL-1:
       GPR(RT+i) = GPR(RA+i) + GPR(RB+i)

At its heart, SimpleV really is this simple. On top of this fundamental
basis further refinements can be added which build up towards an extremely
powerful Vector augmentation system, with very little in the way of
additional opcodes required: simply external "context".

x86 was originally only 80 instructions: prior to AVX512 over 1,300
additional instructions were added, almost all of them SIMD.

RISC-V RVV as of version 0.9 is over 188 instructions (more than the
rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
functionality is added to Power v3.0B by SimpleV augmentation,
with around 5 to 8 instructions.

Even in Power ISA v3.0B, the Scalar Integer ISA is around 150
instructions, with IEEE754 FP adding approximately 80 more. VSX, being
based on SIMD design principles, adds somewhere in the region of 600 more.
SimpleV again provides over 95% of VSX functionality, simply by augmenting
the *Scalar* Power ISA, and in the process providing features such
as predication, which VSX is entirely missing.

AVX512, SVE2, VSX, RVV: all of these systems have to provide different
types of register files, Scalar and Vector being the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs) but,
because operations already exist within the Power ISA to cover
interactions between the scalar regfiles (`mfcr`, `fcvt`), there is
very little that needs to be added.

In fairness to both VSX and RVV, there are things that are not provided
by SimpleV:

* 128 bit or above arithmetic and other operations
  (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
* register files above 128 entries
* Vector lengths over 64
* 32-bit instruction lengths. [[sv/svp64]] had to be added as 64 bit.

These limitations, which stem inherently from the adaptation process of
starting from a Scalar ISA, are not insurmountable. Over time, they may
well be addressed in future revisions of SV.

The rest of this document builds on the above simple loop to add:

* Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
  (of all register files: Integer, FP *and CRs*)
* Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
* Predication masks (essential for parallel if/else constructs)
* 8, 16 and 32 bit integer operations, and both FP16 and BF16.
* Compacted operations into registers (normally only provided by SIMD)
* Fail-on-first (introduced in ARM SVE2)
* A new concept: Data-dependent fail-first
* A completely new concept: "Twin Predication"
* vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)

All of this is *without modifying the Power v3.0B ISA*, except to add
"wrapping context", similar to how the v3.1 64-bit Prefixes work.

# Adding Scalar / Vector

The first augmentation to the simple loop is to add the option for all
sources and destinations to each be either scalar or vector. As an FSM
this is where our "simple" loop gets its first complexity.

    function op_add(RT, RA, RB) # add not VADD!
      int id=0, irs1=0, irs2=0;
      for i = 0 to VL-1:
        ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
        if (!RT.isvec) break;
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

This could have been written out as eight separate cases: one each for
when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
when optimally combined, result in the pseudocode above.

With some walkthroughs it is clear that the loop exits immediately
after the first scalar destination result is written, and that when the
destination is a Vector the loop proceeds to fill up the register file,
sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
registers will, independently, either remain pointing at `RB` or `RA`
respectively, or, if marked as Vectors, will march incrementally in
lockstep, producing element results along the way, as the destination
also progresses through elements.

In this way all eight permutations of Scalar and Vector behaviour
are covered, although without predication the scalar-destination ones are
reduced in usefulness. It does however clearly illustrate the principle.
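
As an illustrative aside, the loop above can be modelled directly in
software. The following Python sketch is not project code: the register
"tags", which in SV come from the prefix encoding, are passed in here as
explicit booleans.

```python
# Minimal model of the scalar/vector loop: one instruction, one register
# file; the tags select scalar or vector stepping per operand.
def op_add(iregs, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec):
    id = irs1 = irs2 = 0
    for _ in range(VL):
        iregs[RT + id] = iregs[RA + irs1] + iregs[RB + irs2]
        if not RT_vec:
            break              # scalar destination: first write ends the loop
        id += 1
        if RA_vec: irs1 += 1   # vector sources march in lockstep
        if RB_vec: irs2 += 1

regs = list(range(32))         # r0=0, r1=1, ... r31=31
# vector dest r16..r19 = vector r4..r7 + scalar r10
op_add(regs, 16, 4, 10, 4, RT_vec=True, RA_vec=True, RB_vec=False)
assert regs[16:20] == [14, 15, 16, 17]
```

With `RT_vec=False` the same function degenerates to a plain scalar add,
illustrating that Scalar is simply the case where the loop ends after one
element.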

Note in particular: there is no separate Scalar add instruction and
separate Vector instruction and separate Scalar-Vector instruction, *and
there is no separate Vector register file*: it's all the same instruction,
on the standard register file, just with a loop. Scalar happens to set
that loop size to one.

The important insight from the above is that, strictly speaking, Simple-V
is not really a Vectorization scheme at all: it is more of a hardware
ISA "Compression scheme", allowing as it does for what would normally
require multiple sequential instructions to be replaced with just one.
This is where the rule that Program Order must be preserved in Sub-PC
execution derives from. However in other ways, which will emerge below,
the "tagging" concept presents an opportunity to include features
definitely not common outside of Vector ISAs, and in that regard it is
definitely a class of Vectorization.

## Register "tagging"

As an aside: in [[sv/svp64]] the encoding which allows SV to both extend
the range beyond r0-r31 and to determine whether a register is scalar or
vector takes two to three bits, depending on the instruction.

The reason for using so few bits is that there are up to *four*
registers to mark in this way (`fma`, `isel`), which starts to be of
concern when there are only 24 available bits to specify the entire SV
Vectorization Context. In fact, for a small subset of instructions it
is just not possible to tag every single register. Under these rare
circumstances a tag has to be shared between two registers.

Below is the pseudocode which expresses the relationship which is usually
applied to *every* register:

    if extra3_mode:
        spec = EXTRA3 # bit 2 s/v, 0-1 extends range
    else:
        spec = EXTRA2 << 1 # same as EXTRA3, shifted
    if spec[2]: # vector
        RA.isvec = True
        return (RA << 2) | spec[0:1]
    else: # scalar
        RA.isvec = False
        return (spec[0:1] << 5) | RA

Here we can see that the scalar registers are extended in the top bits,
whilst vectors are shifted up by 2 bits and then extended in the LSBs.
Condition Registers have a slightly different scheme, along the same
principle, which takes into account the fact that each CR may be bit-level
addressed by Condition Register operations.
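
As an illustrative sketch only (the bit numbering here is an assumption:
`spec[2]` is taken as the scalar/vector bit and `spec[0:1]` as the
two-bit range extension, matching the pseudocode above), the decode rule
can be modelled in Python:

```python
def decode_reg(RA, spec):
    """Model of the EXTRA decode: spec is a 3-bit field (an EXTRA2 value
    is pre-shifted left by one).  One bit selects scalar/vector; the
    remaining two bits extend the register range."""
    if spec & 0b100:                            # vector tag
        return True, (RA << 2) | (spec & 0b11)  # shifted up, extended in LSBs
    else:                                       # scalar tag
        return False, ((spec & 0b11) << 5) | RA # extended in the top bits

assert decode_reg(1, 0b100) == (True, 4)    # vector r1 -> element register 4
assert decode_reg(1, 0b000) == (False, 1)   # plain scalar r1 unchanged
assert decode_reg(1, 0b001) == (False, 33)  # scalar range extended above r31
```

Note how the scalar case with a zero extension reproduces the unmodified
v3.0B register number, which is what keeps unprefixed semantics intact.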

Readers familiar with the Power ISA will know of Rc=1 operations that create
an associated post-result "test", placing this test into an implicit
Condition Register. The original researchers who created the POWER ISA
chose CR0 for Integer, and CR1 for Floating Point. These *also become
Vectorized* - implicitly - if the associated destination register is
also Vectorized. This allows for some very interesting savings on
instruction count due to the very same CR Vectors being usable as
predication masks.

# Adding single predication

The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If no predicate is given, the mask is implicitly set to all
1s: every element operation proceeds.

    function op_add(RT, RA, RB) # add not VADD!
      int id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, RT); # dest mask
      for i = 0 to VL-1:
        if (predval & 1<<i) # predication bit test
           ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
           if (!RT.isvec) break;
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

The key modification is to skip the creation and storage of the result
if the relevant predicate mask bit is clear, but *not the progression
through the registers*.

A particularly interesting case is if the destination is scalar, and the
first few bits of the predicate are zero. The loop proceeds to increment
the Vector *source* registers until the first nonzero predicate bit is
found, whereupon a single *Scalar* result is computed, and *then* the
loop exits. This in effect uses the predicate to perform *Vector source
indexing*. This case was not possible without the predicate mask. Also,
interestingly, the predicate mode `1<<r3` is specifically provided as a
way to select one single entry from a Vector.
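
This "Vector source indexing" effect can be seen in a small Python model
of the predicated loop (illustrative only: tags and the predicate are
explicit parameters, not taken from any encoding):

```python
# Predicated model: masked-out elements are skipped but register
# progression continues, giving Vector source indexing for free.
def op_add_pred(iregs, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec, predval):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            iregs[RT + id] = iregs[RA + irs1] + iregs[RB + irs2]
            if not RT_vec:
                break              # scalar dest: done after one real write
        if RT_vec: id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

regs = list(range(32))
# scalar dest r3, vector srcs from r8 and r16, predicate 0b100: the
# sources step past two masked-out elements, then r3 = r10 + r18
op_add_pred(regs, 3, 8, 16, 4, False, True, True, 0b100)
assert regs[3] == 28
```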

If all three registers are marked as Vector then the "traditional"
predicated Vector behaviour is provided. Yet, just as before, all other
options are still provided, right the way back to the pure-scalar case,
as if this were a straight Power ISA v3.0B non-augmented instruction.

Single Predication therefore provides several modes traditionally seen
in Vector ISAs:

* VINSERT: the predicate may be set as a single bit, the sources are
  scalar and the destination a vector.
* VSPLAT (result broadcasting) is provided by making the sources scalar
  and the destination a vector, and having no predicate set or having
  multiple bits set.
* VSELECT is provided by setting up (at least one of) the sources as a
  vector, using a single bit in the predicate, and the destination as
  a scalar.

All of this capability and coverage without even adding one single actual
Vector opcode, let alone 180, 600 or 1,300!

# Predicate "zeroing" mode

Sometimes with predication it is ok to leave the masked-out elements
alone (not modify the result); sometimes, however, it is better to zero
the masked-out elements. Zeroing can be combined with bit-wise ORing to
build up vectors from multiple predicate patterns: achieving the same
result with non-zeroing requires additional mv operations and predicate
mask operations. Our pseudocode therefore ends up as follows, to take
the enhancement into account:

    function op_add(RT, RA, RB) # add not VADD!
      int id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, RT); # dest pred
      for i = 0 to VL-1:
        if (predval & 1<<i) # predication bit test
           ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
           if (!RT.isvec) break;
        else if zeroing: # predicate failed
           ireg[RT+id] = 0 # set element to zero
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Many Vector systems either have zeroing or they have non-zeroing; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of standard register files and
consequently there are advantages to both, so both are provided.

# Element Width overrides <a name="elwidths"></a>

All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
bit integer operations, and IEEE754 FP32 and 64. Often also included
is FP16 and more recently BF16. The *really* good Vector ISAs have
variable-width vectors right down to bit level, and as high as 1024 bit
arithmetic per element, as well as IEEE754 FP128.

SV has an "override" system that *changes* the bitwidth of operations
that were intended by the original scalar ISA designers to have (for
example) 64 bit operations (only). The override widths are 8, 16 and
32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
the future).

This presents a particularly intriguing conundrum given that the Power
Scalar ISA was never designed with, for example, 8 bit operations in
mind, let alone Vectors of 8 bit.

The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. In C this would be:

    typedef uint64_t reg_t;
    reg_t int_regfile[32]; // standard scalar 32x 64bit

Conceptually, to get our variable element width vectors,
we may think of the regfile as instead being the following C-based data
structure, where all types uint16_t etc. are in little-endian order:

    #pragma pack(push, 1)
    typedef union {
        uint8_t  actual_bytes[8];
        uint8_t  b[0]; // zero-length overlay (GNU C): array of uint8_t
        uint16_t s[0]; // array of LE-ordered uint16_t
        uint32_t i[0];
        uint64_t l[0]; // default Power ISA uses this
    } reg_t;
    #pragma pack(pop)

    reg_t int_regfile[128]; // SV extends to 128 regs

This means that Vector elements start from locations specified by a 64 bit
"register" but that from that location onwards the elements *overlap
subsequent registers*.

![image](/openpower/svp64-primer/img/svp64_regs.svg){ width=40% height=40% }

Here is another way to view the same concept, bearing in mind that
LE memory order is assumed:

    uint8_t reg_sram[8*128];
    uint8_t *actual_bytes = &reg_sram[RA*8];
    if elwidth == 8:
        uint8_t *b = (uint8_t*)actual_bytes;
        b[idx] = result;
    if elwidth == 16:
        uint16_t *s = (uint16_t*)actual_bytes;
        s[idx] = result;
    if elwidth == 32:
        uint32_t *i = (uint32_t*)actual_bytes;
        i[idx] = result;
    if elwidth == default:
        uint64_t *l = (uint64_t*)actual_bytes;
        l[idx] = result;

Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
to 0x01 would mean that:

* b[0..1] = 0x00, b[2] = 0x01 and b[3..7] = 0x00
* s[0] = 0x0000 and s[1] = 0x0001
* i[0] = 0x00010000
* l[0] = 0x0000000000010000

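These overlapping views are easy to verify with Python's `struct` module
(LE format codes), modelling a single 64-bit register as 8 bytes. Note
that in LE byte order it is byte 2 of the register that yields
s[1] = 0x0001, i[0] = 0x00010000 and l[0] = 0x0000000000010000:

```python
import struct

reg = bytearray(8)                       # one 64-bit register, all zeros
reg[2] = 0x01                            # actual_bytes[2] = 0x01

b = list(reg)                            # uint8_t view
s = list(struct.unpack("<4H", reg))      # uint16_t LE view
i = list(struct.unpack("<2I", reg))      # uint32_t LE view
l = struct.unpack("<Q", reg)[0]          # uint64_t LE view

assert b[2] == 0x01 and b[0] == b[1] == b[3] == 0x00
assert s == [0x0000, 0x0001, 0x0000, 0x0000]
assert i == [0x00010000, 0x00000000]
assert l == 0x0000000000010000
```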
In tabular form, starting an elwidth=8 loop from r0 and extending for
16 elements would begin at r0 and extend over the entirety of r1:

|    | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
| -- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| r0 | b[0]  | b[1]  | b[2]  | b[3]  | b[4]  | b[5]  | b[6]  | b[7]  |
| r1 | b[8]  | b[9]  | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |

Starting an elwidth=16 loop from r0 and extending for
7 elements would begin at r0 and extend partly over r1. Note that
b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
b1 represents the top byte:

|    | byte0   | byte1   | byte2   | byte3   | byte4   | byte5   | byte6      | byte7      |
| -- | ------- | ------- | ------- | ------- | ------- | ------- | ---------- | ---------- |
| r0 | s[0].b0 | s[0].b1 | s[1].b0 | s[1].b1 | s[2].b0 | s[2].b1 | s[3].b0    | s[3].b1    |
| r1 | s[4].b0 | s[4].b1 | s[5].b0 | s[5].b1 | s[6].b0 | s[6].b1 | unmodified | unmodified |

Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
each element word:

|    | byte0   | byte1   | byte2   | byte3   | byte4      | byte5      | byte6      | byte7      |
| -- | ------- | ------- | ------- | ------- | ---------- | ---------- | ---------- | ---------- |
| r0 | w[0].b0 | w[0].b1 | w[0].b2 | w[0].b3 | w[1].b0    | w[1].b1    | w[1].b2    | w[1].b3    |
| r1 | w[2].b0 | w[2].b1 | w[2].b2 | w[2].b3 | unmodified | unmodified | unmodified | unmodified |

64-bit (default) elements access the full registers. In each case the
register number (`RT`, `RA`) indicates the *starting* point for the storage
and retrieval of the elements.

Our simple loop, instead of accessing the array of regfile entries
with a computed index `iregs[RT+i]`, would access the appropriate element
of the appropriate width, such as `iregs[RT].s[i]`, in order to access
16 bit elements starting from RT. Thus we have a series of overlapping
conceptual arrays that each start at what is traditionally thought of as
"a register". It then helps if we have a couple of routines:

    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val

These basically provide a convenient parameterised way to access the
register file, at an arbitrary vector element offset and an arbitrary
element width. Our first simple loop thus becomes:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, i, result)

With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
register will contain three 16 bit addition results, and the upper 16
bits will be *unaltered*.
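
As a sketch (Python, with `struct` standing in for the C union; the
helper names mirror the pseudocode but the implementation is purely
illustrative), the byte-level regfile model makes the "unaltered upper
bits" property directly checkable:

```python
import struct

regfile = bytearray(8 * 128)             # 128 x 64-bit registers, LE bytes
FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}

def set_elem(reg, elwidth, idx, val):
    # element idx of width elwidth, starting at register reg
    struct.pack_into(FMT[elwidth], regfile,
                     reg * 8 + idx * (elwidth // 8), val)

def get_elem(reg, elwidth, idx):
    return struct.unpack_from(FMT[elwidth], regfile,
                              reg * 8 + idx * (elwidth // 8))[0]

set_elem(16, 16, 3, 0xBEEF)              # pre-set the top 16 bits of r16
for i in range(3):                       # sources: r4 and r8, elwidth=16
    set_elem(4, 16, i, 0x0100 + i)
    set_elem(8, 16, i, 0x0001)
for i in range(3):                       # elwidth=16, VL=3 add into r16
    set_elem(16, 16, i, get_elem(4, 16, i) + get_elem(8, 16, i))

assert [get_elem(16, 16, i) for i in range(3)] == [0x0101, 0x0102, 0x0103]
assert get_elem(16, 16, 3) == 0xBEEF     # upper 16 bits left unaltered
```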

Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it turns
out to be important to perform the operation internally at effectively an
*infinite* bitwidth, such that any truncation, rounding errors or
other artefacts may all be ironed out. This turns out to be important
when applying Saturation for Audio DSP workloads, particularly for
multiply and IEEE754 FP rounding. "Infinite" is conceptual only: in
reality, the application of the different truncations and width-extensions
sets a fixed, deterministic, practical limit on the internal precision
needed, on a per-operation basis.

Other than that, element width overrides, which can be applied to *either*
source or destination or both, are pretty straightforward, conceptually.
The details, for hardware engineers, involve byte-level write-enable
lines, which is exactly what is used on SRAMs anyway. Compiler writers
have to alter Register Allocation Tables to byte-level granularity.

One critical thing to note: upper parts of the underlying 64 bit
register are *not zeroed out* by a write involving a non-aligned Vector
Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is on "default" (64-bit) behaviour. It is therefore extremely important
to consider the register file as a byte-level store, not a 64-bit-level
store.

## Why a LE regfile?

The concept of a regfile where the byte ordering of the underlying
SRAM matters seems utter nonsense. Surely, a hardware implementation
gets to choose the order, right? It's only memory where LE/BE matters,
right? The bytes come in, all registers are 64 bit and it's just wiring,
right?

Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
style Vector one. The assumption in that last question was, however, "all
registers are 64 bit". SV allows SIMD-style packing of vectors into the
64 bit registers, where one instruction and the next may interpret that
very same register as containing elements of completely different widths.

Consequently it becomes critically important to decide on a byte order.
That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
at all: given how much hell it was to implement BE interpretations of CRs
and LD/ST in LibreSOC, based on a terse spec that provides insufficient
clarity and assumes significant working knowledge of the Power ISA, with
arbitrary insertions of 7-index here and 3-bitindex there, the decision
to pick LE was extremely easy.

Without such a decision, if two words are packed as elements into a 64
bit register, what does this mean? Should they be inverted so that the
lower indexed element goes into the HI or the LO word? Should the 8
bytes of each register be inverted? Should the bytes in each element
be inverted? Should the element indexing loop order be broken into
discontiguous chunks such as 32107654 rather than 01234567, and if so
at what granularity of discontinuity? These are all equally valid and
legitimate interpretations of what constitutes "BE" and they all cause
merry mayhem.

The decision was therefore made: the C typedef union is the canonical
definition, and its members are defined as being in LE order. From there,
implementations may choose whatever internal HDL wire order they like
as long as the results produced conform to the elwidth pseudocode.

*Note: it turns out that both x86 SIMD and NEON SIMD follow this
convention, namely that both are implicitly LE, even though their ISA
Manuals may not explicitly spell this out*

* <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
* <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
* <https://llvm.org/docs/BigEndianNEON.html>

## Source and Destination overrides

A minor fly in the ointment: what happens if the source and destination
are over-ridden to different widths? For example, if arithmetic is
carried out at FP16 and only then up-converted to FP32 output, rounding
errors are introduced. The rule is therefore set:

    The operation MUST take place effectively at infinite precision:
    actual precision determined by the operation and the operand widths

In pseudocode this is:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid) # usually
        result = op_add(src1, src2, opwidth) # at max width
        set_polymorphed_reg(RT, destwid, i, result)

In reality the source and destination widths determine the actual
required precision in a given ALU. The reason for specifying
"effectively" infinite precision is illustrated by Saturated-multiply:
if the internal precision were insufficient, it would not be possible to
correctly determine whether the maximum clip range had been exceeded.

Thus it will turn out that under some conditions the combination of the
extension of the source registers followed by truncation of the result
gets rid of bits that didn't matter, and the operation might as well have
taken place at the narrower width, saving resources that way.
Examples include Logical OR, where the source extension places
zeros in the upper bits and the result truncation simply throws those
zeros away.
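
The Logical OR case can be sanity-checked in a few lines of Python (a
sketch of the rule only, not the hardware):

```python
# For OR, zero-extend to the wider opwidth, operate, then truncate:
# the answer matches simply operating at the destination width.
def op_or_wide(src1, src2, srcwid, destwid):
    mask_src = (1 << srcwid) - 1
    a, b = src1 & mask_src, src2 & mask_src   # zero-extended sources
    result = a | b                            # OR at the (wider) opwidth
    return result & ((1 << destwid) - 1)      # truncate to destination

assert op_or_wide(0xF0, 0x0F, 8, 16) == 0xFF
assert op_or_wide(0xABCD, 0x0000, 16, 8) == 0xCD  # truncation discards zeros
```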

Counterexamples include the previously mentioned FP16 arithmetic,
where for operations such as division of large numbers by very small
ones it should be clear that internal accuracy will play a major role
in influencing the result. Hence the rule that the calculation takes
place at the maximum bitwidth, and truncation follows afterwards.

## Signed arithmetic

What happens when the operation involves signed arithmetic? Here the
implementor has to use common sense, and make sure behaviour is accurately
documented. If the result of the unmodified operation is sign-extended
because one of the inputs is signed, then the input source operands must
be first read at their overridden bitwidth and *then* sign-extended:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # srces known to be less than result width
        src1 = sign_extend(src1, srcwid, opwidth)
        src2 = sign_extend(src2, srcwid, opwidth)
        result = op_signed(src1, src2, opwidth) # at max width
        set_polymorphed_reg(RT, destwid, i, result)

The key here is that the cues are taken from the underlying operation.
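
A sketch of the `sign_extend` helper used above (Python, operating on the
low `srcwid` bits as two's-complement):

```python
def sign_extend(val, srcwid, opwidth):
    # interpret the low srcwid bits as two's-complement, widen to opwidth
    val &= (1 << srcwid) - 1
    if val & (1 << (srcwid - 1)):            # sign bit set: negative
        val -= (1 << srcwid)
    return val & ((1 << opwidth) - 1)

assert sign_extend(0xFF, 8, 16) == 0xFFFF    # 8-bit -1 widens to 16-bit -1
assert sign_extend(0x7F, 8, 16) == 0x007F    # positive values unchanged
```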

## Saturation

Audio DSPs need to be able to clip sound when the "volume" is adjusted:
if the signal is too loud and wraps around, distortion occurs. The
solution is to clip (saturate) the audio and to allow that clipping to
be detected. In practical terms this is a post-result analysis; however
it needs to take place at the largest bitwidth, i.e. *before* the result
is truncated down to the element width. Only then can the arithmetic
saturation condition be detected:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth) # at max width
        # now saturate (unsigned)
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(RT, destwid, i, sat)
        # set sat overflow
        if Rc=1:
            CR[i].ov = (sat != result)

So the actual computation took place at the larger width, but was
post-analysed as an unsigned operation. If however "signed" saturation
is requested then the actual arithmetic operation has to be carefully
analysed to see what that actually means.

In terms of FP arithmetic, which by definition has a sign bit (so
always takes place as a signed operation anyway), the request to saturate
to signed min/max is pretty clear. However for integer arithmetic such
as shift (plain shift, not arithmetic shift), or logical operations
such as XOR, which were never designed with the assumption that their
inputs be considered as signed numbers, common sense has to kick in,
and follow what CR0 does.

CR0 for Logical operations still applies: the test is still applied to
produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
do the same thing: although the inputs of an OR or XOR can
in no way be thought of as "signed", we may at least consider the result
to be signed, and thus apply min/max range detection of -128 to +127 when
truncating down to 8 bit, for example.

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed)
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(rd, destwid, i, sat)

Overall here the rule is: apply common sense, then document the behaviour
really clearly, for each and every operation.
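
The two clamping rules can be written as a minimal executable sketch
(Python; the `get_polymorphed_reg` register accessors from the pseudocode
above are omitted, and the helper names here are illustrative only):

```python
def saturate_unsigned(result, destwid):
    # clamp to the unsigned maximum; an unsigned add cannot go below 0
    return min(result, (1 << destwid) - 1)

def saturate_signed(result, destwid):
    # clamp to [-2^(destwid-1), 2^(destwid-1) - 1]
    maxval = (1 << (destwid - 1)) - 1
    minval = -(1 << (destwid - 1))
    return max(min(result, maxval), minval)

def sat_overflow(result, sat):
    # mirrors "CR[i].ov = (sat != result)" in the pseudocode above
    return sat != result
```

Truncating 300 to 8-bit unsigned gives 255 with ov set; truncating -200
to 8-bit signed gives -128.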

# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which is approximately 20% of all GPU
workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizzy when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability,
as are VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; yet to be covered are fail-on-first, CR-based result
predication, Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index; however, it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
      int id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, RT);
      for i = 0 to VL-1:
        if (predval & 1<<i) # predication uses intregs
          for (s = 0; s < SUBVL; s++)
            sd = id*SUBVL + s
            srs1 = irs1*SUBVL + s
            srs2 = irs2*SUBVL + s
            ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
          if (!RT.isvec) break;
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

The primary reason for this is that Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.
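
A directly executable Python version of the pseudocode above may help
(names such as `sv_add_subvl` are invented for illustration; `ireg` is a
plain list standing in for the integer register file):

```python
def sv_add_subvl(ireg, RT, RA, RB, VL, SUBVL, predval,
                 RT_isvec=True, RA_isvec=True, RB_isvec=True):
    # inner for-loop over SUBVL: the whole subvector shares one
    # predicate bit, taken from the VL index i
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):  # predication uses intregs
            for s in range(SUBVL):
                sd = id * SUBVL + s
                srs1 = irs1 * SUBVL + s
                srs2 = irs2 * SUBVL + s
                ireg[RT + sd] = ireg[RA + srs1] + ireg[RB + srs2]
            if not RT_isvec:
                break
        if RT_isvec: id += 1
        if RA_isvec: irs1 += 1
        if RB_isvec: irs2 += 1
```

With VL=2 and SUBVL=2 this adds two vec2s element-by-element, exactly as
a Shader Compiler would expect of a "single unit".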

# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection
and multiple copying of vec2/3/4 elements. Take XXXZ as the source
operand: three copies of the vec4 first element (vec4[0]) are placed at
positions vec4[0], vec4[1] and vec4[2], whilst the "Z" element (vec4[2])
is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle this is a huge saving and reduces pressure on register files
due to having to use significant numbers of mv operations to get vector
elements to "line up".

In SV, given the percentage of operations that also involve initialisation
to 0.0 or 1.0 of subvector elements, the decision was made to include
those:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap == 0b000: continue # skip
        if remap == 0b001: break # end marker
        if remap == 0b010: ireg[rd+s] <= 0.0 # constant 0
        elif remap == 0b011: ireg[rd+s] <= 1.0 # constant 1
        else: # XYZW
            sm = id*SUBVL + (remap-4)
            ireg[rd+s] <= ireg[RA+sm]

Note that a value of 0b000 will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built in,
in immediate form, to the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large
(12 bits per vec4 source), that the tradeoff balance in SV was decided
in favour of adding only mv.swizzle.
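
To make the decode above concrete, here is a single-subvector Python
sketch (the outer `id` loop is dropped, so `sm` reduces to `remap-4`;
the names are illustrative only):

```python
def mv_swizzle(reg, rd, RA, swizzle, SUBVL=4):
    for s in range(SUBVL):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap == 0b000:          # skip: leave element untouched
            continue
        if remap == 0b001:          # end marker
            break
        if remap == 0b010:
            reg[rd + s] = 0.0       # constant 0
        elif remap == 0b011:
            reg[rd + s] = 1.0       # constant 1
        else:                       # XYZW select: 0b100..0b111
            reg[rd + s] = reg[RA + (remap - 4)]
```

An XXXZ swizzle is then the 12-bit immediate 0b110_100_100_100, read
per-destination-element from the low bits upward.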

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
      ps = get_pred_val(FALSE, RS); # predication on src
      pd = get_pred_val(FALSE, RT); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (RS.isvec) while (!(ps & 1<<i)) i++;
        if (RT.isvec) while (!(pd & 1<<j)) j++;
        reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
        if (RS.isvec) i++;
        if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally only what VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s) such as `addi`. All of these
are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination predicate, *all* the sequential ordered standard patterns of
Vector ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did it would be called [[sv/mv.x]]. Once Vectorized, it's a
VGATHER/VSCATTER.
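
The degenerate cases are easy to check with a Python model of the
twin-predication loop (illustrative names; explicit bounds checks are
added so that sparse masks cannot scan past VL):

```python
def twin_pred_mv(reg, RT, RS, VL, ps, pd, RS_isvec=True, RT_isvec=True):
    # back-to-back VCOMPRESS (ps) and VEXPAND (pd); the scalar
    # operation is reduced to a plain copy (MV)
    i = j = 0
    while i < VL and j < VL:
        if RS_isvec:
            while i < VL and not (ps & (1 << i)):
                i += 1          # compress: skip masked-out sources
        if RT_isvec:
            while j < VL and not (pd & (1 << j)):
                j += 1          # expand: skip masked-out destinations
        if i >= VL or j >= VL:
            break
        reg[RT + j] = reg[RS + i]
        if RS_isvec:
            i += 1
        if RT_isvec:
            j += 1
        else:
            break
```

With pd all-ones this is VCOMPRESS; with ps all-ones it is VEXPAND.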

# Exception-based Fail-on-first

One of the major issues with Vectorized LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD that is cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector is instead truncated to
the last successful element?
This is called "fail-on-first". Here is strncpy, illustrated from RVV:

    strncpy:
        c.mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8      # Vectors of bytes.
        vlbff.v v1, (a1)          # Get src bytes
        vseq.vi v0, v1, 0         # Flag zero bytes
        vmfirst a4, v0            # Zero found?
        vmsif.v v0, v0            # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t      # Write out bytes
        c.bgez a4, exit           # Done
        csrr t1, vl               # Get number of bytes fetched
        c.add a1, a1, t1          # Bump src pointer
        c.sub a2, a2, t1          # Decrement count.
        c.add a3, a3, t1          # Bump dst pointer
        c.bnez a2, loop           # Any more?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first page-faulting
byte-level LD. Otherwise, with more powerful hardware the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.

SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>
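
The truncation rule can be modelled in a few lines of Python (a toy
memory dict stands in for the MMU's mapped pages; all names here are
invented for illustration):

```python
class PageFault(Exception):
    pass

def load_byte(mem, addr):
    # toy MMU: an address missing from the dict "page faults"
    if addr not in mem:
        raise PageFault(addr)
    return mem[addr]

def ffirst_load(mem, base, VL):
    # fail-on-first: a fault on element 0 is raised as usual; a fault
    # on any later element truncates VL to the successful elements
    data = []
    for i in range(VL):
        try:
            data.append(load_byte(mem, base + i))
        except PageFault:
            if i == 0:
                raise
            VL = i
            break
    return data, VL
```

A request for 8 bytes with only 5 mapped returns 5 bytes and VL=5,
exactly the behaviour the strncpy loop above relies on.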

# Data-dependent fail-first

Data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i+VLi # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

This is particularly useful, again, for FP operations that might overflow,
where it is desirable to end the loop early, but also desirable to
complete at least those operations that were okay (passed the test),
without having to slow down execution by adding extra instructions
that test for the possibility of failure in advance of doing the
actual calculation.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls.
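
Stripped of predication and CR-field details, the truncation behaviour
can be sketched as follows (Python; the CR test is reduced to a simple
predicate function, and VLi is taken as 0, i.e. the failing element is
excluded):

```python
def ddffirst(op, srcA, srcB, VL, test):
    # data-dependent fail-first: stop at the first element whose
    # result fails the test, truncating VL to the successes
    results = []
    for i in range(VL):
        result = op(srcA[i], srcB[i])
        if not test(result):       # CR test failed
            VL = i                 # truncate: only successes allowed
            break
        results.append(result)     # test passed: store result
    return results, VL
```

For example, subtracting two vectors and stopping at the first zero
result yields only the leading non-zero differences.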

# Vertical-First Mode

![image](/openpower/sv/sv_horizontal_vs_vertical.svg){ width=40% height=40% }

This is a relatively new addition to SVP64, under development as of
July 2021. Where Horizontal-First is the standard Cray-style for-loop,
Vertical-First typically executes just the **one** scalar element
in each Vectorized operation. That element is selected by srcstep
and dststep, *neither of which are changed as a side-effect of execution*.
This is illustrated in pseudocode below, with a branch/loop.
To create loops, a new instruction `svstep` must be called,
explicitly, with Rc=1:

```
loop:
    sv.addi r0.v, r8.v, 5 # GPR(0+dststep) = GPR(8+srcstep) + 5
    sv.addi r0.v, r8, 5   # GPR(0+dststep) = GPR(8        ) + 5
    sv.addi r0, r8.v, 5   # GPR(0        ) = GPR(8+srcstep) + 5
    svstep.               # srcstep++, dststep++, CR0.eq = srcstep==VL
    beq loop
```

Three examples are illustrated of different types of Scalar-Vector
operations. Note that in its simplest form **only one** element is
executed per instruction, **not** multiple elements per instruction.
(The more advanced version of Vertical-First mode may execute multiple
elements per instruction; however the number executed **must** remain
a fixed quantity.)

Now that such explicit loops can increment inexorably towards VL,
we need a way to test whether srcstep or dststep have reached
VL. This is achieved in one of two ways: [[sv/svstep]] has an Rc=1 mode
where CR0 will be updated if VL is reached, and a standard v3.0B Branch
Conditional may rely on that. Alternatively, the number of elements
may be transferred into CTR, as is standard practice in Power ISA:
SVP64 [[sv/branches]] have a mode which allows CTR to be decremented
by the number of vertical elements executed.
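
The loop above can be modelled in Python to show that exactly one
element moves per "instruction" (the svstep semantics are simplified to
the essentials here; the names are illustrative):

```python
def vertical_first_addi(gpr, RT, RA, imm, VL):
    # each pass executes ONE element of: sv.addi r(RT).v, r(RA).v, imm
    srcstep = dststep = 0
    while True:
        gpr[RT + dststep] = gpr[RA + srcstep] + imm  # one element only
        # svstep.: srcstep++, dststep++, CR0.eq = (srcstep == VL)
        srcstep += 1
        dststep += 1
        if srcstep == VL:   # loop branch falls through once VL reached
            break
```

Four loop iterations with VL=4 produce the same result as one
Horizontal-First `sv.addi`, just one element at a time.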

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it's under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.

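The contrast with RVV can be captured in one line of Python (a sketch
only: whether an over-MAXVL request is capped or illegal is a detail of
the [[sv/setvl]] draft page; capping is assumed here):

```python
def setvl(requested_vl, MAXVL):
    # within MAXVL the request is honoured exactly (no hardware
    # discretion, unlike RVV); beyond it, VL is capped at MAXVL
    return min(requested_vl, MAXVL)
```
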
The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into a Power
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit Power instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - Power v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
parity with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectorizeable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorized, following some simple rules.
