# SV Overview

This document provides an overview and introduction as to why SV (a
Cray-style Vector augmentation to OpenPOWER) exists, and how it works.

Links:

* [[discussion]] and
  [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556) -
  feel free to add comments and questions.
* [[SV|sv]]
* [[sv/svp64]]

Contents:

[[!toc]]

# Introduction: SIMD and Cray Vectors

SIMD, the primary method for easy parallelism of the
past 30 years in Computer Architectures, is [known to be
harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
SIMD provides a seductive simplicity that is easy to implement in
hardware. With each doubling in width it promises increases in raw
performance without the complexity of either multi-issue or out-of-order
execution.

Unfortunately, even with predication added, SIMD only becomes more and
more problematic with each power-of-two SIMD width increase introduced
through an ISA revision. The opcode proliferation, at O(N^6), inexorably
spirals out of control in the ISA, detrimentally impacting the hardware,
the software, the compilers, and testing and compliance.

Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, exceptionally high data throughput
per instruction (one *or more* orders of magnitude greater than SIMD),
and none of the alarmingly large setup and cleanup code. At the
hardware level the microarchitecture may execute from one element
right the way through to tens of thousands at a time, yet the executable
remains exactly the same and the ISA remains clear, true to the RISC
paradigm, and clean. Unlike SIMD, powers-of-two limitations are not
involved in the ISA or in the assembly code.

SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the process allowing register file size
increases using "tagging" (similar to how x86 originally extended
registers from 32 to 64 bit).

## SV

The fundamentals are:

* The Program Counter (PC) gains a "Sub Counter" context (Sub-PC).
* Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
  (where VL is Vector Length).
* The [[Program Order]] of "Sub-PC" instructions must be preserved,
  just as is expected of instructions ordered by the PC.
* Some registers may be "tagged" as Vectors.
* During the loop, "Vector"-tagged register numbers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*.
* Only once the loop is completed is the Program Counter
  allowed to move on to the next instruction.

Hardware (and simulator) implementors are free and clear to implement this
as literally a for-loop, sitting in between instruction decode and issue.
Higher-performance systems may deploy SIMD backends, multi-issue and
out-of-order execution, although it is strongly recommended to add
predication capability directly into SIMD backend units.

In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
source and destination have been "tagged" as Vectors, is simply:

    for i = 0 to VL-1:
        GPR(RT+i) = GPR(RA+i) + GPR(RB+i)

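The loop above can be modelled directly as runnable Python. This is an
illustrative sketch only: the register numbers and values are arbitrary
choices, and the 128-entry flat regfile is the SV-extended size described
later in this document.

```python
VL = 4            # Vector Length
GPR = [0] * 128   # flat model of the integer register file

RT, RA, RB = 0, 8, 16               # arbitrary register numbers
GPR[RA:RA+VL] = [1, 2, 3, 4]        # source vector A
GPR[RB:RB+VL] = [10, 20, 30, 40]    # source vector B

# the Sub-PC loop: the *same instruction*, with incrementing registers
for i in range(VL):
    GPR[RT + i] = GPR[RA + i] + GPR[RB + i]

print(GPR[RT:RT+VL])   # [11, 22, 33, 44]
```
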
At its heart, SimpleV really is this simple. On top of this fundamental
basis further refinements can be added, building up towards an extremely
powerful Vector augmentation system, with very little in the way of
additional opcodes required: simply external "context".

RISC-V RVV as of version 0.9 is over 180 instructions (more than the
rest of RV64G combined). Over 95% of that functionality is added to
OpenPOWER v3.0B, by SimpleV augmentation, with around 5 to 8 instructions.

Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
instructions, with IEEE754 FP adding approximately 80 more. VSX, being
based on SIMD design principles, adds somewhere in the region of 600 more.
SimpleV again provides over 95% of VSX functionality, simply by augmenting
the *Scalar* OpenPOWER ISA, and in the process providing features such
as predication, which VSX is entirely missing.

AVX512, SVE2, VSX and RVV all have to provide different
types of register files: Scalar and Vector is the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs),
but because operations already exist within OpenPOWER to cover interactions
between the scalar regfiles (`mfcr`, `fcvt`), there is very little that
needs to be added.

In fairness to both VSX and RVV, there are things that are not provided
by SimpleV:

* 128-bit or above arithmetic and other operations
  (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
* register files above 128 entries
* Vector lengths over 64
* Unit-strided LD/ST and other comprehensive memory operations
  (struct-based LD/ST from RVV, for example)
* 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.

These limitations, which stem inherently from the adaptation process of
starting from a Scalar ISA, are not insurmountable. Over time, they may
well be addressed in future revisions of SV.

The rest of this document builds on the above simple loop to add:

* Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
  (on all register files: Integer, FP *and CRs*)
* Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc.)
* Predication masks (essential for parallel if/else constructs)
* 8, 16 and 32 bit integer operations, and both FP16 and BF16
* Compacted operations into registers (normally only provided by SIMD)
* Fail-on-first (introduced in ARM SVE2)
* A new concept: Data-dependent fail-first
* Condition-Register-based *post-result* predication (also new)
* A completely new concept: "Twin Predication"
* vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)

All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
"wrapping context", similar to how v3.1B 64-bit Prefixes work.

# Adding Scalar / Vector

The first augmentation to the simple loop is to add the option for all
sources and destinations to each be either scalar or vector. As an FSM
this is where our "simple" loop gets its first complexity.

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        for i = 0 to VL-1:
            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
            if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

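As a runnable illustration of the loop above, here is a Python sketch.
The `Reg` tuple (register number plus scalar/vector tag), the register
numbers and the values are all invented for the example:

```python
from collections import namedtuple

# invented helper: a register number plus its scalar/vector tag
Reg = namedtuple("Reg", "num isvec")

def op_add(regs, VL, RT, RA, RB):
    id = irs1 = irs2 = 0
    for i in range(VL):
        regs[RT.num + id] = regs[RA.num + irs1] + regs[RB.num + irs2]
        if not RT.isvec:
            break                 # scalar destination: loop exits at once
        id += 1
        if RA.isvec: irs1 += 1
        if RB.isvec: irs2 += 1

regs = [0] * 32
regs[8:12] = [1, 2, 3, 4]    # vector source at r8
regs[16] = 100               # scalar source at r16

# vector dest, vector + scalar sources (VSPLAT-like source)
op_add(regs, 4, Reg(0, True), Reg(8, True), Reg(16, False))
print(regs[0:4])   # [101, 102, 103, 104]

# scalar dest: exits after the first element
op_add(regs, 4, Reg(24, False), Reg(8, True), Reg(16, False))
print(regs[24])    # 101
```
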
With some walkthroughs it is clear that the loop exits immediately
after the first scalar destination result is written, and that when the
destination is a Vector the loop proceeds to fill up the register file,
sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
registers will, independently, either remain pointing at `RA` or `RB`
respectively, or, if marked as Vectors, will march incrementally in
lockstep, producing element results along the way, as the destination
also progresses through elements.

In this way all eight permutations of Scalar and Vector behaviour
are covered, although without predication the scalar-destination ones are
reduced in usefulness. It does however clearly illustrate the principle.

Note in particular: there is no separate Scalar add instruction, no
separate Vector instruction and no separate Scalar-Vector instruction,
*and there is no separate Vector register file*: it's all the same
instruction, on the standard register file, just with a loop. Scalar
happens to set that loop size to one.

## Register "tagging"

As an aside: in [[sv/svp64]] the encoding which allows SV both to extend
the range beyond r0-r31 and to determine whether a register is scalar or
vector takes two to three bits, depending on the instruction.

The reason for using so few bits is that there are up to *four* registers
to mark in this way (`fma`, `isel`), which starts to be of concern when
there are only 24 available bits in which to specify the entire SV
Vectorisation Context. In fact, for a small subset of instructions it
is simply not possible to tag every single register. Under these rare
circumstances a tag has to be shared between two registers.

Below is the pseudocode which expresses the relationship that is usually
applied to *every* register:

    if extra3_mode:
        spec = EXTRA3 # bit 2 s/v, 0-1 extends range
    else:
        spec = EXTRA2 << 1 # same as EXTRA3, shifted
    if spec[2]: # vector
        RA.isvec = True
        return (RA << 2) | spec[0:1]
    else: # scalar
        RA.isvec = False
        return (spec[0:1] << 5) | RA
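Transliterating that pseudocode into executable Python gives the
following. The function name is invented, LSB0 bit numbering is assumed
for `spec` (bit 2 is the s/v bit, bits 0-1 extend the range), and the
function returns the decoded register number together with the
scalar/vector flag rather than mutating `RA`:

```python
def decode_reg(RA, extra, extra3_mode):
    # returns (regnum, isvec) from a 5-bit field plus an EXTRA2/EXTRA3 tag
    if extra3_mode:
        spec = extra          # 3 bits: bit 2 = s/v, bits 0-1 extend range
    else:
        spec = extra << 1     # EXTRA2: same as EXTRA3, shifted
    if spec & 0b100:          # vector: regnum shifted up, extended in LSBs
        return ((RA << 2) | (spec & 0b11), True)
    else:                     # scalar: extended in the top bits
        return (((spec & 0b11) << 5) | RA, False)

print(decode_reg(5, 0b000, True))   # (5, False)  : plain scalar r5
print(decode_reg(5, 0b100, True))   # (20, True)  : vector starting at r20
print(decode_reg(5, 0b01,  False))  # (69, False) : scalar, extended range
```
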

Here we can see that the scalar registers are extended in the top bits,
whilst vectors are shifted up by 2 bits and then extended in the LSBs.
Condition Registers have a slightly different scheme, along the same
principle, which takes into account the fact that each CR may be
bit-level addressed by Condition Register operations.

Readers familiar with OpenPOWER will know of Rc=1 operations that create
an associated post-result "test", placing this test into an implicit
Condition Register. The original researchers who created the POWER ISA
chose CR0 for Integer and CR1 for Floating Point. These *also become
Vectorised* - implicitly - if the associated destination register is
also Vectorised. This allows for some very interesting savings on
instruction count, due to the very same CR Vectors being usable as
predication masks.

# Adding single predication

The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If there is no predicate, it is set to all 1s, which is
effectively the same as "no predicate".

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

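A Python sketch of the predicated loop follows. The `(regnum, isvec)`
pair representation, register numbers and predicate value are invented
for the example:

```python
def op_add(regs, VL, RT, RA, RB, predval):
    # RT, RA, RB are (regnum, isvec) pairs
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):              # predication bit test
            regs[RT[0] + id] = regs[RA[0] + irs1] + regs[RB[0] + irs2]
            if not RT[1]:
                break                       # scalar destination: done
        # register progression happens whether masked out or not
        if RT[1]: id += 1
        if RA[1]: irs1 += 1
        if RB[1]: irs2 += 1

regs = [0] * 32
regs[8:12]  = [1, 2, 3, 4]
regs[16:20] = [10, 20, 30, 40]
op_add(regs, 4, (0, True), (8, True), (16, True), 0b1010)
print(regs[0:4])   # [0, 22, 0, 44] : elements 0 and 2 masked out
```
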
The key modification is to skip the creation and storage of the result
if the relevant predicate mask bit is clear, but *not the progression
through the registers*.

A particularly interesting case is if the destination is scalar, and the
first few bits of the predicate are zero. The loop proceeds to increment
the Vector *source* register offsets until the first nonzero predicate
bit is found, whereupon a single result is computed, and *then* the loop
exits. This therefore uses the predicate to perform Vector source
indexing. This case was not possible without the predicate mask.

If all three registers are marked as Vector then the "traditional"
predicated Vector behaviour is provided. Yet, just as before, all other
options are still provided, right the way back to the pure-scalar case,
as if this were a straight OpenPOWER v3.0B non-augmented instruction.

Single Predication therefore provides several modes traditionally seen
in Vector ISAs:

* VINSERT: the predicate may be set as a single bit, the sources are
  scalar and the destination a vector.
* VSPLAT (result broadcasting) is provided by making the sources scalar
  and the destination a vector, with either no predicate set or multiple
  bits set.
* VSELECT is provided by setting up (at least one of) the sources as a
  vector, using a single bit in the predicate, and the destination as
  a scalar.

# Predicate "zeroing" mode

Sometimes with predication it is OK to leave the masked-out element
alone (not modify the result); sometimes it is better to zero the
masked-out elements. Zeroing can be combined with bit-wise ORing to build
up vectors from multiple predicate patterns: the same combining with
nonzeroing involves more mv operations and predicate mask operations.
Our pseudocode therefore ends up as follows, to take the enhancement
into account:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing: # predicate failed
                ireg[RT+id] = 0 # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

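The difference between the two modes can be shown with a short Python
sketch. All three registers are assumed to be vectors here, for brevity,
and the register numbers and values are invented:

```python
def op_add_zeroing(regs, VL, RT, RA, RB, predval, zeroing):
    # all three operands assumed vector, for brevity
    for i in range(VL):
        if predval & (1 << i):          # predication bit test
            regs[RT + i] = regs[RA + i] + regs[RB + i]
        elif zeroing:                   # predicate failed: zero the element
            regs[RT + i] = 0

regs = [0] * 32
regs[0:4]   = [99, 99, 99, 99]          # stale destination contents
regs[8:12]  = [1, 2, 3, 4]
regs[16:20] = [10, 20, 30, 40]

op_add_zeroing(regs, 4, 0, 8, 16, 0b0101, zeroing=True)
z = regs[0:4]      # [11, 0, 33, 0] : masked-out elements zeroed

regs[0:4] = [99, 99, 99, 99]
op_add_zeroing(regs, 4, 0, 8, 16, 0b0101, zeroing=False)
nz = regs[0:4]     # [11, 99, 33, 99] : masked-out elements untouched
print(z, nz)
```
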
Many Vector systems either have zeroing or they have nonzeroing; they
do not have both. This is because they usually have separate Vector
register files. However SV sits on top of standard register files and
consequently there are advantages to both, so both are provided.

# Element Width overrides <a name="elwidths"></a>

All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64-bit
integer operations, and IEEE754 FP32 and FP64. Often also included
are FP16 and, more recently, BF16. The *really* good Vector ISAs have
variable-width vectors right down to bit level, and as high as 1024-bit
arithmetic per element, as well as IEEE754 FP128.

SV has an "override" system that *changes* the bitwidth of operations
that were intended by the original scalar ISA designers to have (for
example) 64-bit operations (only). The override widths are 8, 16 and
32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
the future).

This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with, for example, 8-bit operations in mind,
let alone Vectors of 8-bit elements.

The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. Conceptually, to get our variable-element-width vectors,
we may think of the regfile as instead being the following C-based data
structure:

    typedef union {
        uint8_t actual_bytes[8];
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs

Then our simple loop, instead of accessing the array of regfile entries
with a computed index, would access the appropriate element of the
appropriate type. Thus we have a series of overlapping conceptual arrays
that each start at what is traditionally thought of as "a register".
It then helps if we have a couple of routines:

    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val

These basically provide a convenient parameterised way to access the
register file, at an arbitrary vector element offset and an arbitrary
element width. Our first simple loop thus becomes:

    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, i, result)

With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
register will contain three 16-bit addition results, and the upper 16
bits will be *unaltered*.

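The byte-level regfile view can be modelled in Python with the stdlib
`struct` module. Everything here - the flat `bytearray` store, the helper
names, the register numbers - is an illustrative assumption, not the
actual implementation; `isvec` handling is omitted:

```python
import struct

# model the 128-entry regfile as a flat byte store (8 bytes per register)
regfile = bytearray(128 * 8)

FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}   # little-endian views

def set_poly(reg, width, offset, val):
    struct.pack_into(FMT[width], regfile, reg * 8 + offset * (width // 8), val)

def get_poly(reg, width, offset):
    return struct.unpack_from(FMT[width], regfile,
                              reg * 8 + offset * (width // 8))[0]

# elwidth=16, VL=3: three 16-bit adds land in the first 48 bits of RT
set_poly(0, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)   # pre-existing RT contents
for i in range(3):
    set_poly(2, 16, i, 100 + i)             # RA elements: 100, 101, 102
    set_poly(3, 16, i, 1)                   # RB elements: all 1
for i in range(3):
    set_poly(0, 16, i, get_poly(2, 16, i) + get_poly(3, 16, i))

print(hex(get_poly(0, 64, 0)))   # 0xffff006700660065 : top 16 bits unaltered
```
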
Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it turns
out to be important to perform the operation at the maximum bitwidth -
`max(srcwid, destwid)` - such that any truncation, rounding errors or
other artefacts may all be ironed out. This turns out to be important
when applying Saturation for Audio DSP workloads.

Other than that, element width overrides, which can be applied to *either*
source or destination or both, are pretty straightforward, conceptually.
The details, for hardware engineers, involve byte-level write-enable
lines, which is exactly what is used on SRAMs anyway. Compiler writers
have to alter Register Allocation Tables to byte-level granularity.

One critical thing to note: upper parts of the underlying 64-bit
register are *not zero'd out* by a write involving a non-aligned Vector
Length. An 8-bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is on "default" (64-bit) behaviour. It is therefore extremely important
to consider the register file as a byte-level store, not a 64-bit-level
store.

# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which accounts for approximately 20% of
all GPU workload operations. Also covered, with elwidth overriding, are
the smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tiz when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability,
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; yet to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where the register src and dest are still incremented inside
the inner part. Predication is still taken from the VL index, but it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

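The SUBVL inner loop can be sketched in Python as follows. All operands
are assumed to be vectors for brevity, and the register numbers, SUBVL=4
(vec4) and the predicate value are invented for the example:

```python
def op_add_subvl(regs, VL, SUBVL, RT, RA, RB, predval):
    for i in range(VL):
        if predval & (1 << i):     # whole subvector predicated together
            for s in range(SUBVL):
                regs[RT + i*SUBVL + s] = (regs[RA + i*SUBVL + s] +
                                          regs[RB + i*SUBVL + s])

regs = [0] * 64
regs[16:24] = [1, 2, 3, 4, 5, 6, 7, 8]    # two vec4 elements at RA
regs[32:40] = [10] * 8                     # RB
op_add_subvl(regs, 2, 4, 0, 16, 32, 0b01)  # element 1 masked out
print(regs[0:8])   # [11, 12, 13, 14, 0, 0, 0, 0]
```
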
# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets). With somewhere around 10% of
operations in 3D Shaders involving swizzle, this is a huge saving and
reduces pressure on register files.

In SV, given the percentage of operations that also involve
initialisation of subvector elements to 0.0 or 1.0, the decision was
made to include those:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[RT+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[RT+s] <= 0.0
        elif remap == 5:
            ireg[RT+s] <= 1.0

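One swizzle step for a single vec4 element can be sketched as follows
(register numbers and values are invented; 3 bits per subvector lane,
with values 4 and 5 selecting the constants 0.0 and 1.0, and 6 or 7
leaving the target lane untouched, as in the pseudocode above):

```python
def swizzle_mv(regs, RT, RA, SUBVL, swizzle):
    for s in range(SUBVL):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap < 4:
            regs[RT + s] = regs[RA + remap]   # reorder source lanes
        elif remap == 4:
            regs[RT + s] = 0.0                # constant zero
        elif remap == 5:
            regs[RT + s] = 1.0                # constant one
        # 6 or 7: leave regs[RT+s] unchanged

regs = [0.0] * 16
regs[8:12] = [1.5, 2.5, 3.5, 4.5]             # source XYZW

# "WZYX": lane0<-3, lane1<-2, lane2<-1, lane3<-0
swizzle_mv(regs, 0, 8, 4, (3 << 0) | (2 << 3) | (1 << 6) | (0 << 9))
print(regs[0:4])   # [4.5, 3.5, 2.5, 1.5]

# constants and "untouched": lane0<-X, lane1<-0.0, lane2<-1.0, lane3 kept
swizzle_mv(regs, 4, 8, 4, (0 << 0) | (4 << 3) | (5 << 6) | (7 << 9))
print(regs[4:8])   # [1.5, 0.0, 1.0, 0.0]
```
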
Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built in,
in immediate form, to the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions that needed to be added
which are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large,
that the tradeoff balance was decided in SV to only add mv.swizzle.

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

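A runnable Python sketch of Twin Predication applied to a simple move,
with bounds checks added to the inner scans so the loop always
terminates (register numbers, predicate values and the all-vector
assumption are invented for the example):

```python
def twin_pred_mv(regs, VL, RT, RS, ps, pd):
    # back-to-back VCOMPRESS (ps, source) and VEXPAND (pd, destination);
    # both operands assumed vector here
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)): i += 1   # skip masked src
        while j < VL and not (pd & (1 << j)): j += 1   # skip masked dest
        if i >= VL or j >= VL:
            break
        regs[RT + j] = regs[RS + i]
        i += 1
        j += 1

regs = [0] * 32
regs[8:12] = [1, 2, 3, 4]
# ps=0b1011 compresses elements 0,1,3; pd=0b1110 expands into slots 1,2,3
twin_pred_mv(regs, 4, 0, 8, ps=0b1011, pd=0b1110)
print(regs[0:4])   # [0, 1, 2, 4]
```
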
Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s), such as `addi`. All of these
are termed single-source, single-destination (LDST Address-generation,
or AGEN, is a single source).

It also turns out that by using a single bit set in the source or
destination, *all* the sequentially ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential,
is VGATHER: moving registers by specifying a vector of register indices
(`regs[rd] = regs[regs[rs]]` in a loop). This one is tricky because it
typically does not exist in standard scalar ISAs. If it did it would
be called [[sv/mv.x]]. Once Vectorised, it's a VGATHER.

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

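The gating effect can be sketched in Python. The CR bit positions,
`analyse` helper and register values here are simplified inventions for
illustration, not the actual Power ISA CR field layout:

```python
EQ, LT, GT = 0, 1, 2     # simplified CR bit positions (illustrative only)

def analyse(result):     # calculates eq/lt/gt, as in the pseudocode
    return [int(result == 0), int(result < 0), int(result > 0)]

def predresult_add(iregs, crregs, VL, RT, RA, RB, bo_bit, bo_val, RC1=False):
    for i in range(VL):
        result = iregs[RA + i] + iregs[RB + i]
        CRnew = analyse(result)
        crregs[i] = CRnew            # the Vector of CRs is always written
        if RC1:
            continue                 # RC1 mode: CR stored, result never is
        if CRnew[bo_bit] == bo_val:  # BO-style test gates the result store
            iregs[RT + i] = result

iregs = [0] * 32
crregs = [None] * 8
iregs[8:12] = [1, -2, 3, -4]
# store only the results that test "greater than zero"
predresult_add(iregs, crregs, 4, 0, 8, 16, GT, 1)
print(iregs[0:4])                      # [1, 0, 3, 0]
print([cr[GT] for cr in crregs[0:4]])  # [1, 0, 1, 0]
```
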
Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.

Here, for example, if FP overflow occurred and the CR testing was carried
out for that, all valid results would be stored but invalid ones would
not; in addition, the Vector of CRs would contain the indicators of
which ones failed. With the invalid results simply not being written,
this could save resources (save on register file writes).

Also expected is, due to the fact that the predicate mask is effectively
ANDed with the post-result analysis as a secondary type of predication,
that there would be savings to be had in some types of operations where
the post-result analysis, if not included in SV, would need a second
predicate calculation followed by a predicate mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (`crand`,
`cror`) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD being cancelled,
or unable to roll back, is either a detriment to performance or can
cause data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector were instead truncated
to the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated from RVV:

    strncpy:
        c.mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8      # Vectors of bytes.
        vlbff.v v1, (a1)          # Get src bytes
        vseq.vi v0, v1, 0         # Flag zero bytes
        vmfirst a4, v0            # Zero found?
        vmsif.v v0, v0            # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t      # Write out bytes
        c.bgez a4, exit           # Done
        csrr t1, vl               # Get number of bytes fetched
        c.add a1, a1, t1          # Bump src pointer
        c.sub a2, a2, t1          # Decrement count.
        c.add a3, a3, t1          # Bump dst pointer
        c.bnez a2, loop           # Anymore?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first byte-level LD that
would page-fault. Otherwise, with more powerful hardware, the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in
the knowledge that every byte loaded in the Vector is valid. Implementors
are even permitted to "adapt" VL, truncating it early so that, for
example, subsequent iterations of loops will have LD/STs on aligned
boundaries.

SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst thing
about them is that they are unable to cope with detection of a page
fault condition.

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

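The truncation behaviour can be sketched in Python. As before, the CR
bit positions, the `analyse` helper and the register values are
simplified inventions for illustration:

```python
GT = 2   # simplified "greater than" CR bit position (illustrative only)

def ddffirst_add(iregs, crregs, VL, RT, RA, RB, bo_bit, bo_val):
    """Data-dependent fail-first: stop, and truncate VL, at the first
    element whose CR test fails.  Returns the new VL."""
    def analyse(r):                  # simplified eq/lt/gt analysis
        return [int(r == 0), int(r < 0), int(r > 0)]
    for i in range(VL):
        result = iregs[RA + i] + iregs[RB + i]
        CRnew = analyse(result)
        if CRnew[bo_bit] != bo_val:  # test failed
            return i                 # truncate: only successes allowed
        iregs[RT + i] = result       # test passed: store result
        crregs[i] = CRnew
    return VL

iregs = [0] * 32
crregs = [None] * 8
iregs[8:12] = [5, 3, -7, 2]
new_VL = ddffirst_add(iregs, crregs, 4, 0, 8, 16, GT, 1)
print(new_VL)        # 2 : element 2 (-7) failed the "greater than" test
print(iregs[0:4])    # [5, 3, 0, 0]
```
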
This is particularly useful, again, for FP operations that might
overflow, where it is desirable to end the loop early, but also desirable
to complete at least those operations that were OK (passed the test),
without having to slow down execution by adding extra instructions that
test in advance for the possibility of that failure.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that, at least, is entirely
paralleliseable.

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it is still under development, and secondly, it needs
to be proposed to the OpenPOWER Foundation ISA WG for consideration and
review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction, the only differences being that, firstly,
MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.

The other key question is of course: what's the actual instruction
format, and what's in it? Bearing in mind that this requires OPF review,
the current draft is at the [[sv/svp64]] page, and includes space for
all the different modes, the predicates, element width overrides, SUBVL
and the register extensions, in 24 bits. This just about fits into an
OpenPOWER v3.1B 64-bit Prefix by borrowing some of the Reserved Encoding
space. The v3.1B suffix - containing as it does a 32-bit OpenPOWER
instruction - aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector
instructions and how to get them to interact with the Scalar ISA: they
are effectively one and the same. Any new instruction added at the
Scalar level is inherently and automatically Vectorised, following some
simple rules.