[libreriscv.git] / openpower / sv / overview.mdwn
1 # SV Overview
2
Feel free to add comments and questions on the [[discussion]] page.
4
5 [[!toc]]
6
7 This document provides an overview and introduction as to why SV (a
8 Cray-style Vector augmentation to OpenPOWER) exists, and how it works.
9
10 # Introduction: SIMD and Cray Vectors
11
SIMD, the primary method for achieving easy parallelism in Computer
Architectures over the past 30 years, is [known to be
harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
SIMD provides a seductive simplicity that is easy to implement in
hardware: with each doubling in width it promises increases in raw
performance without the complexity of either multi-issue or
out-of-order execution.
17
18 Unfortunately, even with predication added, SIMD only becomes more and
19 more problematic with each power of two SIMD width increase introduced
20 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
21 spirals out of control in the ISA, detrimentally impacting the hardware,
22 the software, the compilers and the testing and compliance.
23
Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops and exceptionally high data
throughput per instruction (one *or more* orders of magnitude greater
than SIMD), with none of SIMD's alarmingly high setup and cleanup code.
At the hardware level the microarchitecture may execute anything from
one element right the way through to tens of thousands at a time, yet
the executable remains exactly the same and the ISA remains clear, true
to the RISC paradigm, and clean. Unlike SIMD, powers-of-two limitations
are not involved in either the hardware or the assembly code.
32
33 SimpleV takes the Cray style Vector principle and applies it in the
34 abstract to a Scalar ISA, in the process allowing register file size
35 increases using "tagging" (similar to how x86 originally extended
36 registers from 32 to 64 bit).
37
38 ## SV
39
40 The fundamentals are:
41
* The Program Counter gains a "Sub Counter" context.
* Vectorisation pauses the PC and runs a loop from 0 to VL-1
  (where VL is Vector Length). This may be thought of as a
  "Sub-PC".
* Some registers may be "tagged" as Vectors.
* During the loop, "Vector"-tagged register numbers are incremented by
  one with each iteration, executing the *same instruction*
  but with *different registers*.
* Once the loop is completed, *only then* is the Program Counter
  allowed to move to the next instruction.
52
53 Hardware (and simulator) implementors are free and clear to implement this
54 as literally a for-loop, sitting in between instruction decode and issue.
55 Higher performance systems may deploy SIMD backends, multi-issue and
56 out-of-order execution, although it is strongly recommended to add
57 predication capability into all SIMD backend units.
58
59 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
60 source and destination have been "tagged" as Vectors, is simply:
61
    for i = 0 to VL-1:
        GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
64
65 At its heart, SimpleV really is this simple. On top of this fundamental
66 basis further refinements can be added which build up towards an extremely
67 powerful Vector augmentation system, with very little in the way of
68 additional opcodes required: simply external "context".
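The loop above can be written directly in executable form. Below is a
toy Python model (the register numbers and contents are illustrative,
not from the specification) showing that Vectorisation is nothing more
than replaying the scalar add at incrementing register numbers:

```python
# Toy model of the SV fundamental loop: a flat integer register file,
# with one vectorised add replayed VL times at incrementing registers.
GPR = list(range(32))    # illustrative contents: GPR[i] = i
VL = 4                   # Vector Length
RT, RA, RB = 8, 16, 24   # hypothetical registers, all tagged as Vectors

for i in range(VL):
    GPR[RT + i] = GPR[RA + i] + GPR[RB + i]

print(GPR[8:12])   # [40, 42, 44, 46]
```

Registers beyond `RT+VL-1` are untouched: the loop writes exactly VL
elements and nothing more.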
69
RISC-V RVV as of version 0.9 is over 180 instructions (more than the
rest of RV64G combined). Over 95% of that functionality is added to
OpenPOWER v3.0B by SimpleV augmentation, with around 5 to 8 instructions.
73
74 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
75 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
76 based on SIMD design principles, adds somewhere in the region of 600 more.
77 SimpleV again provides over 95% of VSX functionality, simply by augmenting
78 the *Scalar* OpenPOWER ISA, and in the process providing features such
79 as predication, which VSX is entirely missing.
80
AVX512, SVE2, VSX, RVV: all of these systems have to provide different
types of register files, Scalar and Vector being the minimum. AVX512
even provides a mini mask regfile, followed by explicit instructions
that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs),
but because operations already exist within OpenPOWER to cover
interactions between the scalar regfiles (`mfcr`, `fcvt`) there is
very little that needs to be added.
89
90 In fairness to both VSX and RVV, there are things that are not provided
91 by SimpleV:
92
93 * 128 bit or above arithmetic and other operations
94 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
95 * register files above 128 entries
96 * Vector lengths over 64
97 * Unit-strided LD/ST and other comprehensive memory operations
98 (struct-based LD/ST from RVV for example)
99 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
100
These limitations are not insurmountable and may well be addressed in
future revisions of SV.
103
104 The rest of this document builds on the above simple loop to add:
105
106 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
107 (of all register files: Integer, FP *and CRs*)
108 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
109 * Predication masks (essential for parallel if/else constructs)
110 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
111 * Compacted operations into registers (normally only provided by SIMD)
112 * Fail-on-first (introduced in ARM SVE2)
113 * A new concept: Data-dependent fail-first
114 * Condition-Register based *post-result* predication (also new)
115 * A completely new concept: "Twin Predication"
116 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
117
118 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
119 "wrapping context", similar to how v3.1B 64 Prefixes work.
120
121 # Adding Scalar / Vector
122
The first augmentation to the simple loop is to add the option for each
source and destination to be either scalar or vector. As an FSM this
is where our "simple" loop gains its first complexity.
126
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        for i = 0 to VL-1:
            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
            if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
135
With some walkthroughs it is clear that the loop exits immediately
after the first scalar destination result is written, and that when the
destination is a Vector the loop proceeds to fill up the register file,
sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
registers will, independently, either remain pointing at `RB` or `RA`
respectively, or, if marked as Vectors, will march incrementally in
lockstep, producing element results along the way, as the destination
also progresses through elements.
144
145 In this way all the eight permutations of Scalar and Vector behaviour
146 are covered, although without predication the scalar-destination ones are
147 reduced in usefulness. It does however clearly illustrate the principle.
148
149 Note in particular: there is no separate Scalar add instruction and
150 separate Vector instruction and separate Scalar-Vector instruction, *and
151 there is no separate Vector register file*: it's all the same instruction,
152 on the standard register file, just with a loop. Scalar happens to set
153 that loop size to one.
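The FSM above can be sketched as a runnable Python model (`sv_op_add`
and the `*_vec` flags are illustrative names, not part of the spec).
With both sources scalar and the destination a Vector, the loop
naturally produces a VSPLAT:

```python
def sv_op_add(regs, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec):
    """Python sketch of the scalar/vector FSM loop from the pseudocode."""
    id = irs1 = irs2 = 0
    for i in range(VL):
        regs[RT + id] = regs[RA + irs1] + regs[RB + irs2]
        if not RT_vec:        # scalar destination: exit after first result
            break
        id += 1               # vector destination marches forward
        if RA_vec: irs1 += 1  # vector sources march in lockstep
        if RB_vec: irs2 += 1

regs = [0] * 16
regs[0] = 5    # RA: scalar source
regs[4] = 7    # RB: scalar source
sv_op_add(regs, RT=8, RA=0, RB=4, VL=4,
          RT_vec=True, RA_vec=False, RB_vec=False)
print(regs[8:12])   # [12, 12, 12, 12] - a VSPLAT of 5 + 7
```

Flipping the flags exercises the other permutations: with a scalar
destination the loop writes one result and exits immediately.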
154
155 # Adding single predication
156
The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed to proceed. If no predicate is given, the mask defaults to
all 1s: every element operation proceeds.
162
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
173
174 The key modification is to skip the creation and storage of the result
175 if the relevant predicate mask bit is clear, but *not the progression
176 through the registers*.
177
A particularly interesting case is when the destination is scalar and
the first few bits of the predicate are zero. The loop proceeds to
increment the Vector *source* register offsets until the first nonzero
predicate bit is found, whereupon a single result is computed and
*then* the loop exits. This therefore uses the predicate to perform
Vector source indexing - a case that was not possible without the
predicate mask.
184
185 If all three registers are marked as Vector then the "traditional"
186 predicated Vector behaviour is provided. Yet, just as before, all other
187 options are still provided, right the way back to the pure-scalar case,
188 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
189
190 Single Predication therefore provides several modes traditionally seen
191 in Vector ISAs:
192
* VINSERT: the predicate is set as a single bit, the sources are scalar
  and the destination a vector.
* VSPLAT (result broadcasting): the sources are scalar and the
  destination a vector, with either no predicate set or multiple bits set.
* VSELECT: (at least one of) the sources is a vector, a single bit is
  set in the predicate, and the destination is a scalar.
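These modes can be checked with a small Python sketch (function and
flag names are illustrative; the predicate is modelled as an integer
bitmask). Here a single predicate bit together with a Vector source
and a scalar destination gives VSELECT:

```python
def sv_add_pred(regs, RT, RA, RB, VL, RT_vec, RA_vec, RB_vec, predval):
    """Sketch of the single-predicated loop: masked-out elements are
    skipped, but register progression continues regardless."""
    id = irs1 = irs2 = 0
    for i in range(VL):
        if (predval >> i) & 1:            # predication bit test
            regs[RT + id] = regs[RA + irs1] + regs[RB + irs2]
            if not RT_vec:                # scalar dest: one result, done
                break
        if RT_vec: id += 1
        if RA_vec: irs1 += 1
        if RB_vec: irs2 += 1

regs = [0] * 16
regs[0:4] = [10, 20, 30, 40]   # RA: vector source
regs[4] = 0                    # RB: scalar zero (so add acts as mv)
sv_add_pred(regs, RT=8, RA=0, RB=4, VL=4,
            RT_vec=False, RA_vec=True, RB_vec=False, predval=0b0100)
print(regs[8])   # 30 - predicate bit 2 selected source element 2 (VSELECT)
```

The key point visible in the trace: the source offset marches forward
even while the predicate bits are clear, which is exactly the Vector
source indexing described above.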
196
197 # Predicate "zeroing" mode
198
Sometimes with predication it is acceptable to leave the masked-out
elements alone (not modify the result); sometimes it is better to zero
them. Zeroing can be combined with bit-wise ORing to build up vectors
from multiple predicate patterns: the same combining with nonzeroing
requires more mv operations and predicate mask operations. Our
pseudocode therefore ends up as follows, to take the enhancement
into account:
206
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing: # predicate failed
                ireg[RT+id] = 0 # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
219
Many Vector systems have either zeroing or nonzeroing, not both. This
is because they usually have separate Vector register files. However
SV sits on top of standard register files and consequently there are
advantages to both, so both are provided.
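The difference can be sketched in Python (illustrative names; all three
registers assumed to be tagged as Vectors): with zeroing the masked-out
destination elements are cleared, without it they retain their previous
contents:

```python
def sv_add_zmode(regs, RT, RA, RB, VL, predval, zeroing):
    # sketch: all three registers assumed to be Vectors
    for i in range(VL):
        if (predval >> i) & 1:
            regs[RT + i] = regs[RA + i] + regs[RB + i]
        elif zeroing:
            regs[RT + i] = 0      # masked-out element zeroed
        # nonzeroing: masked-out element left alone

nz = [1, 2, 3, 4] + [10, 20, 30, 40] + [9, 9, 9, 9]
sv_add_zmode(nz, RT=8, RA=0, RB=4, VL=4, predval=0b0101, zeroing=False)
print(nz[8:12])   # [11, 9, 33, 9] - masked elements untouched

z = [1, 2, 3, 4] + [10, 20, 30, 40] + [9, 9, 9, 9]
sv_add_zmode(z, RT=8, RA=0, RB=4, VL=4, predval=0b0101, zeroing=True)
print(z[8:12])    # [11, 0, 33, 0] - masked elements zeroed
```

Running the zeroing variant twice with ORed-together predicate patterns
then builds up a merged vector without extra mv operations.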
224
225 # Element Width overrides
226
227 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
228 bit integer operations, and IEEE754 FP32 and 64. Often also included
229 is FP16 and more recently BF16. The *really* good Vector ISAs have
230 variable-width vectors right down to bitlevel, and as high as 1024 bit
231 arithmetic per element, as well as IEEE754 FP128.
232
233 SV has an "override" system that *changes* the bitwidth of operations
234 that were intended by the original scalar ISA designers to have (for
235 example) 64 bit operations (only). The override widths are 8, 16 and
236 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
237 the future).
238
This presents a particularly intriguing conundrum given that the OpenPOWER
Scalar ISA was never designed with (for example) 8-bit operations in mind,
let alone Vectors of 8-bit elements.
242
The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. Conceptually, to get our variable-element-width vectors,
we may think of the regfile as instead being the following C-based data
structure:
249
    typedef union {
        uint8_t actual_bytes[8];
        uint8_t b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs
259
260 Then, our simple loop, instead of accessing the array of regfile entries
261 with a computed index, would access the appropriate element of the
262 appropriate type. Thus we have a series of overlapping conceptual arrays
263 that each start at what is traditionally thought of as "a register".
264 It then helps if we have a couple of routines:
265
    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == default: # 64
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec): # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default: # 64
            int_regfile[reg].l[offset] = val
291
292 These basically provide a convenient parameterised way to access the
293 register file, at an arbitrary vector element offset and an arbitrary
294 element width. Our first simple loop thus becomes:
295
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, i, result)
301
302 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
303 register will contain three 16 bit addition results, and the upper 16
304 bits will be *unaltered*.
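The byte-level regfile model can be demonstrated with a short Python
sketch (a little-endian layout and the helper names are assumptions for
illustration, using the `struct` module). Pre-filling the destination
with all-ones shows that the elwidth=16, VL=3 add leaves the top 16
bits unaltered:

```python
import struct

regfile = bytearray(128 * 8)   # 128 64-bit registers as a flat byte store
FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}

def get_poly(reg, bitwidth, offset):
    return struct.unpack_from(FMT[bitwidth], regfile,
                              reg * 8 + offset * bitwidth // 8)[0]

def set_poly(reg, bitwidth, offset, val):
    struct.pack_into(FMT[bitwidth], regfile,
                     reg * 8 + offset * bitwidth // 8,
                     val & ((1 << bitwidth) - 1))

set_poly(8, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)    # pre-fill destination r8
for i in range(3):                           # set up 16-bit source elements
    set_poly(16, 16, i, i + 1)               # r16 elements: 1, 2, 3
    set_poly(24, 16, i, 10 * (i + 1))        # r24 elements: 10, 20, 30

for i in range(3):                           # elwidth=16, VL=3 add
    set_poly(8, 16, i, get_poly(16, 16, i) + get_poly(24, 16, i))

print(hex(get_poly(8, 64, 0)))   # 0xffff00210016000b - top 16 bits unaltered
```

The three 16-bit results (11, 22, 33) occupy the low 48 bits of r8;
the pre-existing 0xFFFF in the fourth element survives untouched.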
305
Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that it turns
out to be important to perform the operation at the maximum bitwidth -
`max(srcwid, destwid)` - such that any truncation, rounding errors or
other artefacts may all be ironed out. This matters in particular
when applying Saturation for Audio DSP workloads.
312
313 Other than that, element width overrides, which can be applied to *either*
314 source or destination or both, are pretty straightforward, conceptually.
315 The details, for hardware engineers, involve byte-level write-enable
316 lines, which is exactly what is used on SRAMs anyway. Compiler writers
317 have to alter Register Allocation Tables to byte-level granularity.
318
One critical thing to note: upper parts of the underlying 64-bit
register are *not zeroed out* by a write involving a non-aligned Vector
Length. An 8-bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation where a full overwrite occurs
is with "default" (64-bit) behaviour. It is therefore extremely
important to consider the register file as a byte-level store, not a
64-bit-level store.
325
326 # Quick recap so far
327
The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which constitutes approximately 20% of
all GPU workload operations. Also covered, with elwidth overriding, are
the smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tizzy when adding "Multimedia"
acceleration aka "SIMD" instructions.
337
Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as is Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; yet to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.
343
344 ## SUBVL <a name="subvl"></a>
345
346 Adding in support for SUBVL is a matter of adding in an extra inner
347 for-loop, where register src and dest are still incremented inside the
348 inner part. Predication is still taken from the VL index, however it
349 is applied to the whole subvector:
350
    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
365
# Swizzle <a name="swizzle"></a>
367
Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets). With somewhere around 10% of
operations in 3D Shaders involving swizzle, this is a huge saving and
reduces pressure on register files.
374
In SV, given the percentage of operations that also involve
initialisation of subvector elements to 0.0 or 1.0, the decision was
made to include those as well:
378
    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[RT+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[RT+s] <= 0.0
        elif remap == 5:
            ireg[RT+s] <= 1.0
389
Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built in,
in immediate form, to the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.
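The remap logic can be sketched in Python (the function and the
`encode` helper are illustrative names; the 3-bits-per-lane encoding
follows the pseudocode above, with 4 and 5 selecting the 0.0/1.0
constants and 6/7 leaving the element untouched):

```python
def apply_swizzle(src, dst, swizzle, SUBVL=4):
    """Return dst with each subvector lane remapped per the 12-bit immediate."""
    out = list(dst)
    for s in range(SUBVL):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap < 4:
            out[s] = src[remap]     # select source lane X/Y/Z/W
        elif remap == 4:
            out[s] = 0.0            # constant zero
        elif remap == 5:
            out[s] = 1.0            # constant one
        # remap 6 or 7: destination element left untouched
    return out

def encode(*lanes):                 # helper: one 3-bit code per lane, X-first
    return sum(code << (3 * s) for s, code in enumerate(lanes))

v = [1.5, 2.5, 3.5, 4.5]            # an XYZW subvector
print(apply_swizzle(v, [0.0] * 4, encode(3, 2, 1, 0)))  # [4.5, 3.5, 2.5, 1.5]
print(apply_swizzle(v, [9.0] * 4, encode(0, 4, 5, 6)))  # [1.5, 0.0, 1.0, 9.0]
```

The first call is the classic `.WZYX` reversal; the second shows the
constant-insertion and leave-untouched codes in action.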
397
398 Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
399 sufficiently unusual, and the immediate opcode space required so large,
400 that the tradeoff balance was decided in SV to only add mv.swizzle.
401
402 # Twin Predication
403
404 Twin Predication is cool. Essentially it is a back-to-back
405 VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
406 part is covered by the source predicate and the expand part by the
407 destination predicate. Of course, if either of those is all 1s then
408 the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.
409
    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break
419
Here's the interesting part: given that SV is a "context" extension,
the above pattern can be applied to a lot more than just MV, which is
normally all that VCOMPRESS and VEXPAND do in traditional Vector ISAs:
move registers. Twin Predication can be applied to `extsw` or `fcvt`,
to LD/ST operations and even to `rlwinmi` and other operations taking
a single source and immediate(s) such as `addi`. All of these are
termed single-source, single-destination (LD/ST Address-generation,
or AGEN, is a single source).
428
429 It also turns out that by using a single bit set in the source or
430 destination, *all* the sequential ordered standard patterns of Vector
431 ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.
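Twin Predication can be sketched in Python (illustrative names; bounds
checks are added to the inner skip-loops, which the pseudocode above
elides). With the destination predicate all 1s the result is a
VCOMPRESS; with the source predicate all 1s it is a VEXPAND:

```python
def twin_pred_mv(regs, RS, RT, VL, ps, pd):
    # sketch: both RS and RT assumed Vectors; ps/pd are predicate bitmasks
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps >> i) & 1: i += 1   # skip masked-out sources
        while j < VL and not (pd >> j) & 1: j += 1   # skip masked-out dests
        if i == VL or j == VL:
            break
        regs[RT + j] = regs[RS + i]
        i += 1
        j += 1

regs = [10, 20, 30, 40] + [0] * 12
twin_pred_mv(regs, RS=0, RT=8, VL=4, ps=0b1010, pd=0b1111)
print(regs[8:12])    # [20, 40, 0, 0] - VCOMPRESS of source elements 1 and 3

twin_pred_mv(regs, RS=0, RT=12, VL=4, ps=0b1111, pd=0b0110)
print(regs[12:16])   # [0, 10, 20, 0] - VEXPAND into destination slots 1 and 2
```

Substituting any single-source scalar operation for the plain copy
gives the same compress/expand routing applied to that operation.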
432
433 The only one missing from the list here, because it is non-sequential,
434 is VGATHER: moving registers by specifying a vector of register indices
435 (`regs[rd] = regs[regs[rs]]` in a loop). This one is tricky because it
436 typically does not exist in standard scalar ISAs. If it did it would
437 be called [[sv/mv.x]]. Once Vectorised, it's a VGATHER.
438
439 # CR predicate result analysis
440
OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing whether it is greater than, less than, or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?
445
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result
458
Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is
always stored, but the result never is: this effectively turns every
arithmetic operation into a type of `cmp` instruction.
462
Here, for example, if FP overflow occurred and the CR testing was
carried out for that, all valid results would be stored but invalid
ones would not; in addition, the Vector of CRs would indicate which
elements failed. With the invalid results simply not being written,
this could save resources (register file writes).
468
Also expected, given that the predicate mask is effectively ANDed with
the post-result analysis as a secondary type of predication, is that
there would be savings in some types of operations where the
post-result analysis, were it not included in SV, would need a second
predicate calculation followed by a predicate-mask AND operation.
474
475 Note, hilariously, that Vectorised Condition Register Operations (crand, cror) may
476 also have post-result analysis applied to them. With Vectors of CRs being
477 utilised *for* predication, possibilities for compact and elegant code
478 begin to emerge from this innocuous-looking addition to SV.
479
480 # Exception-based Fail-on-first
481
One of the major issues with Vectorised LD/ST operations is when a batch of LDs crosses a page-fault boundary. With considerable resources taken up by in-flight data, a large Vector LD that must be cancelled, or that cannot roll back, is either a detriment to performance or a potential source of data corruption.
483
484 What if, then, rather than cancel an entire Vector LD because the last operation would cause a page fault, instead truncate the Vector to the last successful element?
485
486 This is called "fail-on-first". Here is strncpy, illustrated from RVV:
487
    strncpy:
        c.mv a3, a0             # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        c.bgez a4, exit         # Done
        csrr t1, vl             # Get number of bytes fetched
        c.add a1, a1, t1        # Bump src pointer
        c.sub a2, a2, t1        # Decrement count.
        c.add a3, a3, t1        # Bump dst pointer
        c.bnez a2, loop         # Anymore?
    exit:
        c.ret
505
506 Vector Length VL is truncated inherently at the first page faulting byte-level LD. Otherwise, with more powerful hardware the number of elements LOADed from memory could be dozens to hundreds or greater (memory bandwidth permitting).
507
508 With VL truncated the analysis looking for the zero byte and the subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the knowledge that every byte loaded in the Vector is valid. Implementors are even permitted to "adapt" VL, truncating it early so that, for example, subsequent iterations of loops will have LD/STs on aligned boundaries.
509
510 SIMD strncpy hand-written assembly routines are, to be blunt about it, a total nightmare. 240 instructions is not uncommon, and the worst thing about them is that they are unable to cope with detection of a page fault condition.
511
512 # Data-dependent fail-first
513
This is a minor variant on the CR-based predicate-result mode. Where predicate-result continues with independent element testing, data-dependent fail-first *stops* at the first failure:
515
    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew
529
This is particularly useful, again, for FP operations that might overflow, where it is desirable to end the loop early but also to complete at least those operations that passed the test, without slowing down execution by adding extra instructions that test in advance for the possibility of failure.
531
532 The only minor downside here though is the change to VL, which in some implementations may cause pipeline stalls. This was one of the reasons why CR-based pred-result analysis was added, because that at least is entirely paralleliseable.
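A minimal Python sketch of the truncation behaviour (the function name
and the test condition are illustrative assumptions): the loop stops at
the first element failing the post-result test, and VL is cut back so
that only the successes remain:

```python
def ddfail_first(a, b, VL, test):
    """Data-dependent fail-first sketch: returns (results, truncated VL)."""
    results = []
    for i in range(VL):
        result = a[i] + b[i]
        if not test(result):    # post-result CR-style test failed
            return results, i   # VL truncated: only successes stored
        results.append(result)
    return results, VL

res, vl = ddfail_first([1, 2, 3, 4], [1, 2, 60, 4], 4, lambda r: r < 50)
print(res, vl)   # [2, 4] 2 - element 2 failed the test, so VL becomes 2
```

Software reading back the truncated VL then knows exactly how many
elements completed successfully, just as with fail-on-first LD/ST.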
533
534 # Instruction format
535
536 Whilst this overview shows the internals, it does not go into detail on the actual instruction format itself. There are a couple of reasons for this: firstly, it's under development, and secondly, it needs to be proposed to the OpenPOWER Foundation ISA WG for consideration and review.
537
538 That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up. The `setvl` instruction is pretty much as would be expected from a Cray style VL instruction: the only differences being that, firstly, the MAXVL (Maximum Vector Length) has to be specified, because that determines - precisely - how many of the *scalar* registers are to be used for a given Vector. Secondly: within the limit of MAXVL, VL is required to be set to the requested value. By contrast, RVV systems permit the hardware to set arbitrary values of VL.
539
540 The other key question is of course: what's the actual instruction format, and what's in it? Bearing in mind that this requires OPF review, the current draft is at the [[sv/svp64]] page, and includes space for all the different modes, the predicates, element width overrides, SUBVL and the register extensions, in 24 bits. This just about fits into an OpenPOWER v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space. The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction - aligns perfectly with SV.
541
542 Further reading is at the main [[SV|sv]] page.
543
544 # Conclusion
545
Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that, with conceptual sub-loops, a Scalar ISA can be turned into a Vector one, by embedding Scalar instructions - unmodified - into a Vector "context" using "Prefixing". With careful thought, this technique reaches 90% parity with good Vector ISAs, increasing to 95% with the addition of a mere handful of additional context-vectoriseable scalar instructions ([[sv/mv.x]] amongst them).
547
548 What is particularly cool about the SV concept is that custom extensions and research need not be concerned about inventing new Vector instructions and how to get them to interact with the Scalar ISA: they are effectively one and the same. Any new instruction added at the Scalar level is inherently and automatically Vectorised, following some simple rules.
549