[libreriscv.git] / simple_v_extension / specification.mdwn
1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV without
51 investing unnecessary time and resources in silicon, whilst still
52 conforming strictly to the API. The polymorphic element width capability,
53 for example, would be a good candidate to punt to software.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
65 * A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed", either with a 48-bit format (single-instruction option)
79 or with a variable-length VLIW-like prefix (multi or "grouped" option), that indicates
80 which registers are "tagged" as "vectorised". Predicates can also be added.
81 * A "Vector Length" CSR is set, indicating the span of any future
82 "parallel" operations.
83 * If any operation (a **scalar** standard RV opcode)
84 uses a register that has been so "marked",
85 a hardware "macro-unrolling loop" is activated, of length
86 VL, that effectively issues **multiple** identical instructions
87 using contiguous sequentially-incrementing register numbers.
88 * **Whether they be executed sequentially or in parallel or a
89 mixture of both or punted to software-emulation in a trap handler
90 is entirely up to the implementor**.
91
92 In this way an entire scalar algorithm may be vectorised with
93 the minimum of modification to the hardware and to compiler toolchains.
94
95 To reiterate: **There are *no* new opcodes**. The scheme works *entirely* on hidden context that augments *scalar* RISC-V instructions.
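
As a rough illustration, the implicit macro-unrolling loop can be modelled in Python. This is a sketch for exposition only, not part of the specification: `issue`, `tagged` and the fixed `VL` value are illustrative stand-ins for the pipeline, the set of vectorised register numbers, and the Vector Length CSR.

```python
# Hypothetical model of SV's implicit macro-unrolling loop (assumptions:
# "issue", "tagged" and VL are stand-ins, not part of the specification).
VL = 4                        # Vector Length CSR
tagged = {10}                 # register x10 has been tagged as a vector
issued = []                   # record of what reaches the pipeline

def issue(op, rd, rs1, rs2):
    issued.append((op, rd, rs1, rs2))

def execute(op, rd, rs1, rs2):
    if not ({rd, rs1, rs2} & tagged):
        issue(op, rd, rs1, rs2)   # plain scalar: PC increments as usual
        return
    # PC is held: VL element operations issue, with tagged register
    # numbers incrementing contiguously from the tagged register.
    for i in range(VL):
        issue(op,
              rd  + i if rd  in tagged else rd,
              rs1 + i if rs1 in tagged else rs1,
              rs2 + i if rs2 in tagged else rs2)

execute("add", 10, 20, 30)    # rd=x10 is tagged: expands to 4 scalar adds
```

Note how the untagged source registers stay fixed while the tagged destination increments, exactly as the hardware loop described above.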
96
97 # CSRs <a name="csrs"></a>
98
99 * An optional "reshaping" CSR key-value table which remaps from a 1D
100 linear shape to 2D or 3D, including full transposition.
101
102 There are also five additional User mode CSRs:
103
104 * uMVL (the Maximum Vector Length)
105 * uVL (which has different characteristics from standard CSRs)
106 * uSUBVL (effectively a kind of SIMD)
107 * uEPCVLIW (a copy of the sub-execution Program Counter, relative to the start of the current VLIW Group, set on a trap).
108 * uSTATE (useful for saving and restoring during context switch,
109 and for providing fast transitions)
110
111 There are also five additional CSRs for Supervisor-Mode:
112
113 * SMVL
114 * SVL
115 * SSUBVL
116 * SEPCVLIW
117 * SSTATE
118
119 And likewise for M-Mode:
120
121 * MMVL
122 * MVL
123 * MSUBVL
124 * MEPCVLIW
125 * MSTATE
126
127 Both Supervisor and M-Mode have their own CSR registers, independent of the other privilege levels, in order to make it easier to use Vectorisation in each level without affecting other privilege levels.
128
129 The access pattern for these groups of CSRs in each mode follows the
130 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
131
132 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
133 * In S-Mode, accessing and changing of the M-Mode CSRs is identical
134 to changing the S-Mode CSRs. Accessing and changing the U-Mode
135 CSRs is permitted.
136 * In U-Mode, accessing and changing of the M-Mode and S-Mode CSRs
137 is prohibited.
138
139 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
140 M-Mode MVL, the M-Mode STATE and so on that influences the processor
141 behaviour. Likewise for S-Mode, and likewise for U-Mode.
142
143 This has the interesting benefit of allowing M-Mode (or S-Mode)
144 to be set up, for context-switching to take place, and, on return
145 back to the higher privileged mode, the CSRs of that mode will be
146 exactly as they were. Thus, it becomes possible for example to
147 set up CSRs suited best to aiding and assisting low-latency fast
148 context-switching *once and only once*, without the need for
149 re-initialising the CSRs needed to do so.
150
151 The xEPCVLIW CSRs must be treated exactly like their corresponding xepc equivalents. See VLIW section for details.
152
153 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
154
155 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
156 is variable length and may be dynamically set. MVL is
157 however limited to the regfile bitwidth XLEN (1-32 for RV32,
158 1-64 for RV64 and so on).
159
160 The reason for setting this limit is so that predication registers, when
161 marked as such, may fit into a single register as opposed to fanning out
162 over several registers. This keeps the implementation a little simpler.
163
164 The other important factor to note is that the actual MVL is **offset
165 by one**, so that it can fit into only 6 bits (for RV64) and still cover
166 a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
167 means that MVL==1. When setting the MVL CSR to 3, this actually means
168 that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
169 section, where there are subtle differences between CSRRW and CSRRWI.
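
The offset-by-one encoding can be sketched as follows (illustrative helper functions, not part of the specification):

```python
# Illustrative encode/decode of the offset-by-one MVL field (RV64: 6 bits).
def mvl_to_field(mvl):
    assert 1 <= mvl <= 64     # MVL may never be zero
    return mvl - 1            # field 0 encodes MVL==1, field 3 encodes MVL==4

def field_to_mvl(field):
    return field + 1          # 6-bit field 0..63 covers MVL 1..64
```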
170
171 ## Vector Length (VL) <a name="vl" />
172
173 VSETVL is slightly different from RVV. Like RVV, VL is set to be within
174 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
175
176 VL = rd = MIN(vlen, MVL)
177
178 where 1 <= MVL <= XLEN
179
180 However just like MVL it is important to note that the range for VL has
181 subtle design implications, covered in the "CSR pseudocode" section
182
183 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
184 to switch the entire bank of registers using a single instruction (see
185 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
186 is down to the fact that predication bits fit into a single register of
187 length XLEN bits.
188
189 The second change is that when VSETVL is requested to be stored
190 into x0, it is *ignored* silently (VSETVL x0, x5).
191
192 The third and most important change is that, within the limits set by
193 MVL, the value passed in **must** be set in VL (and in the
194 destination register).
195
196 This has implication for the microarchitecture, as VL is required to be
197 set (limits from MVL notwithstanding) to the actual value
198 requested. RVV has the option to set VL to an arbitrary value that suits
199 the conditions and the micro-architecture: SV does *not* permit this.
200
201 The reason is so that if SV is to be used for a context-switch or as a
202 substitute for LOAD/STORE-Multiple, the operation can be done with only
203 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
204 single LD/ST operation). If VL does *not* get set to the register file
205 length when VSETVL is called, then a software-loop would be needed.
206 To avoid this need, VL *must* be set to exactly what is requested
207 (limits notwithstanding).
208
209 Therefore, in turn, unlike RVV, implementors *must* provide
210 pseudo-parallelism (using sequential loops in hardware) if actual
211 hardware-parallelism in the ALUs is not deployed. A hybrid is also
212 permitted (as used in Broadcom's VideoCore-IV) however this must be
213 *entirely* transparent to the ISA.
214
215 The fourth change is that VSETVL is implemented as a CSR, where the
216 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
217 the *new* value in the destination register, **not** the old value.
218 Where context-load/save is to be implemented in the usual fashion
219 by using a single CSRRW instruction to obtain the old value, the
220 *secondary* CSR must be used (SVSTATE). This CSR behaves
221 exactly as standard CSRs, and contains more than just VL.
222
223 One interesting side-effect of using CSRRWI to set VL is that this
224 may be done with a single instruction, useful particularly for a
225 context-load/save. There are however limitations: CSRRWI's immediate
226 is limited to 0-31 (representing VL=1-32).
227
228 Note that when VL is set to 1, all parallel operations cease: the
229 hardware loop is reduced to a single element: scalar operations.
230
231 ## SUBVL - Sub Vector Length
232
233 This is a "group by quantity" that effectively divides VL into groups of elements of length SUBVL. VL itself must therefore be set in advance to a multiple of SUBVL.
234
235 Legal values are 1, 2, 3 and 4, and the STATE CSR must hold the 2 bit values 0b00 thru 0b11.
236
237 Setting this CSR to 0 must raise an exception. Setting it to a value greater than 4 likewise.
238
239 The main effect of SUBVL is that predication bits are applied per **group**,
240 rather than by individual element.
241
242 This saves a not insignificant number of instructions when handling 3D vectors, as otherwise a much longer predicate mask would have to be set up with regularly-repeated bit patterns.
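
The group-wise application of predicate bits can be sketched as follows (illustrative, not normative):

```python
# Illustrative: one predicate bit covers a whole SUBVL group of elements.
VL, SUBVL = 6, 3          # VL must be a multiple of SUBVL: two 3-element groups
pred = 0b10               # one bit per group: group 0 masked out, group 1 active

# element i belongs to group i // SUBVL, so its predicate bit is bit i // SUBVL
active = [bool(pred & (1 << (i // SUBVL))) for i in range(VL)]
```

For a stream of 3D vectors, a single predicate bit thus masks a whole vector rather than requiring the bit pattern to be repeated three times.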
243
244 ## STATE
245
246 This is a standard CSR that contains sufficient information for a
247 full context save/restore. It contains (and permits setting of)
248 MVL, VL, SUBVL,
249 the destination element offset of the current parallel
250 instruction being executed, and, for twin-predication, the source
251 element offset as well. Interestingly it may hypothetically
252 also be used to make the immediately-following instruction to skip a
253 certain number of elements, however the recommended method to do
254 this is predication or using the offset mode of the REMAP CSRs.
255
256 Setting destoffs and srcoffs is realistically intended for saving state
257 so that exceptions (page faults in particular) may be serviced and the
258 hardware-loop that was being executed at the time of the trap, from
259 user-mode (or Supervisor-mode), may be returned to and continued from
260 where it left off. The reason why this works is because setting
261 User-Mode STATE will not change (not be used) in M-Mode or S-Mode
262 (and is entirely why M-Mode and S-Mode have their own STATE CSRs).
263
264 The format of the STATE CSR is as follows:
265
266 | (28..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5..0) |
267 | -------- | -------- | -------- | -------- | ------- | ------- |
268 | rsvd | subvl | destoffs | srcoffs | vl | maxvl |
269
270 When setting this CSR, the following characteristics will be enforced:
271
272 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
273 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
274 * **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values (1-4); if VL is not a multiple of SUBVL, an exception will be raised.
275 * **srcoffs** will be truncated to be within the range 0 to VL-1
276 * **destoffs** will be truncated to be within the range 0 to VL-1
277
278 ## MVL and VL Pseudocode
279
280 The pseudo-code for get and set of VL and MVL are as follows:
281
282 set_mvl_csr(value, rd):
283 regs[rd] = MVL
284 MVL = MIN(value, MVL)
285
286 get_mvl_csr(rd):
287 regs[rd] = MVL
288
289 set_vl_csr(value, rd):
290 VL = MIN(value, MVL)
291 regs[rd] = VL # yes returning the new value NOT the old CSR
292 return VL
293
294 get_vl_csr(rd):
295 regs[rd] = VL
296 return VL
297
298 Note that whereas setting MVL behaves as a normal CSR would, setting VL,
299 unlike standard CSR behaviour, will return the **new** value of VL, **not**
300 the old one.
301
302 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
303 maximise the effectiveness, an immediate of 0 is used to set VL=1,
304 an immediate of 1 is used to set VL=2 and so on:
305
306 CSRRWI_Set_MVL(value):
307 set_mvl_csr(value+1, x0)
308
309 CSRRWI_Set_VL(value):
310 set_vl_csr(value+1, x0)
311
312 However for CSRRW the following pseudocode is used for MVL and VL,
313 where setting the value to zero will cause an exception to be raised.
314 The reason is that if VL or MVL are set to zero, the STATE CSR is
315 not capable of returning that value.
316
317 CSRRW_Set_MVL(rs1, rd):
318 value = regs[rs1]
319 if value == 0:
320 raise Exception
321 set_mvl_csr(value, rd)
322
323 CSRRW_Set_VL(rs1, rd):
324 value = regs[rs1]
325 if value == 0:
326 raise Exception
327 set_vl_csr(value, rd)
328
329 In this way, when CSRRW is utilised with a loop variable, the value
330 that goes into VL (and into the destination register) may be used
331 in an instruction-minimal fashion:
332
333 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
334 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
335 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
336 j zerotest # in case loop counter a0 already 0
337 loop:
338 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
339 ld a3, a1 # load 4 registers a3-6 from x
340 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
341 ld a7, a2 # load 4 registers a7-10 from y
342 add a1, a1, t1 # increment pointer to x by vl*8
343 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
344 sub a0, a0, t0 # n -= vl (t0)
345 st a7, a2 # store 4 registers a7-10 to y
346 add a2, a2, t1 # increment pointer to y by vl*8
347 zerotest:
348 bnez a0, loop # repeat if n != 0
349
350 With the STATE CSR, just like with CSRRWI, in order to maximise the
351 utilisation of the limited bitspace, "000000" in binary represents
352 VL==1, "000001" represents VL==2 and so on (likewise for MVL):
353
354 CSRRW_Set_SV_STATE(rs1, rd):
355 value = regs[rs1]
356 get_state_csr(rd)
357 MVL = set_mvl_csr(value[5:0]+1)
358 VL = set_vl_csr(value[11:6]+1)
359 destoffs = value[23:18]
360 srcoffs = value[17:12]
361
362 get_state_csr(rd):
363 regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
364 (destoffs)<<18
365 return regs[rd]
366
367 In both cases, whilst CSR read of VL and MVL return the exact values
368 of VL and MVL respectively, reading and writing the STATE CSR returns
369 those values **minus one**. This is absolutely critical to implement
370 if the STATE CSR is to be used for fast context-switching.
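
As a sketch, packing and unpacking of the VL, MVL and offset fields of STATE (bit positions from the table above, minus-one encoding for VL and MVL; function names are illustrative) might look like:

```python
# Illustrative pack/unpack of the STATE CSR fields. Bit positions follow
# the STATE table; VL/MVL are stored minus one, offsets are stored directly.
def pack_state(mvl, vl, srcoffs, destoffs):
    return ((mvl - 1)
            | (vl - 1) << 6
            | srcoffs  << 12
            | destoffs << 18)

def unpack_state(state):
    return ((state       & 0x3f) + 1,   # MVL
            (state >> 6  & 0x3f) + 1,   # VL
            (state >> 12 & 0x3f),       # srcoffs
            (state >> 18 & 0x3f))       # destoffs
```

The round-trip property (what is saved is exactly what is restored) is the requirement that makes STATE usable for fast context-switching.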
371
372 ## Register key-value (CAM) table <a name="regcsrtable" />
373
374 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
375
376 The purpose of the Register table is three-fold:
377
378 * To mark integer and floating-point registers as requiring "redirection"
379 if it is ever used as a source or destination in any given operation.
380 This involves a level of indirection through a 5-to-7-bit lookup table,
381 such that **unmodified** 5-bit operands (3-bit for Compressed) may
382 access up to **128** registers.
383 * To indicate whether, after redirection through the lookup table, the
384 register is a vector (or remains a scalar).
385 * To over-ride the implicit or explicit bitwidth that the operation would
386 normally give the register.
387
388 16 bit format:
389
390 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
391 | ------ | | - | - | - | ------ | ------- |
392 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
393 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
394 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
395 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
396
397 8 bit format:
398
399 | RegCAM | | 7 | (6..5) | (4..0) |
400 | ------ | | - | ------ | ------- |
401 | 0 | | i/f | vew0 | regnum |
402
403 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
404 to integer registers; 0 indicates that it is relevant to floating-point
405 registers.
406
407 The 8 bit format is used for a much more compact expression. "isvec" is implicit and, similar to [[sv-prefix-proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.
408
409 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc., and thus the "vector" mode need only shift the (6 bit) regnum by 1 to get the actual (7 bit) register number to use, there is not enough space in the 8 bit format so "regnum<<2" is required.
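
A decoding sketch of the 8 bit format (field layout taken from the table above; the function name is illustrative):

```python
# Illustrative decoder for the 8-bit register-tagging format.
def decode_regtag_8bit(entry):
    regnum = entry & 0x1f        # bits 4..0: the scalar register being tagged
    vew    = (entry >> 5) & 0x3  # bits 6..5: element-width override
    is_int = (entry >> 7) & 1    # bit 7: 1 = integer regs, 0 = FP regs
    regidx = regnum << 2         # implicit redirection target: regnum*4
    return is_int, vew, regnum, regidx
```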
410
411 vew has the following meanings, indicating that the instruction's
412 operand size is "over-ridden" in a polymorphic fashion:
413
414 | vew | bitwidth |
415 | --- | ------------------- |
416 | 00 | default (XLEN/FLEN) |
417 | 01 | 8 bit |
418 | 10 | 16 bit |
419 | 11 | 32 bit |
420
421 As the above table is a CAM (key-value store) it may be appropriate
422 (faster, implementation-wise) to expand it as follows:
423
424 struct vectorised fp_vec[32], int_vec[32];
425
426 for (i = 0; i < 16; i++) // 16 CSRs?
427 tb = int_vec if CSRvec[i].type == 0 else fp_vec
428 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
429 tb[idx].elwidth = CSRvec[i].elwidth
430 tb[idx].regidx = CSRvec[i].regidx // indirection
431 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
432 tb[idx].packed = CSRvec[i].packed // SIMD or not
433
434
435
436 ## Predication Table <a name="predication_csr_table"></a>
437
438 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
439
440 The Predication Table is a key-value store indicating whether, if a given
441 destination register (integer or floating-point) is referred to in an
442 instruction, it is to be predicated. Like the Register table, it is an indirect lookup that allows the RV opcodes to not need modification.
443
444 It is particularly important to note
445 that the *actual* register used can be *different* from the one that is
446 in the instruction, due to the redirection through the lookup table.
447
448 * regidx is the register that in combination with the
449 i/f flag, if that integer or floating-point register is referred to
450 in a (standard RV) instruction
451 results in the lookup table being referenced to find the predication
452 mask to use for this operation.
453 * predidx is the
454 *actual* (full, 7 bit) register to be used for the predication mask.
455 * inv indicates that the predication mask bits are to be inverted
456 prior to use *without* actually modifying the contents of the
457 register from which those bits originated.
458 * zeroing is either 1 or 0, and if set to 1, the operation must
459 place zeros in any element position where the predication mask is
460 set to zero. If zeroing is set to 0, unpredicated elements *must*
461 be left alone. Some microarchitectures may choose to interpret
462 this as skipping the operation entirely. Others which wish to
463 stick more closely to a SIMD architecture may choose instead to
464 interpret unpredicated elements as an internal "copy element"
465 operation (which would be necessary in SIMD microarchitectures
466 that perform register-renaming)
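
The zeroing distinction can be sketched as follows (illustrative only; a real implementation may skip or copy elements as described above, as long as the architectural result matches):

```python
# Illustrative: zeroing vs non-zeroing predication applied to a destination.
def apply_predicated_op(dest, result, pred, zeroing):
    for i in range(len(dest)):
        if pred & (1 << i):
            dest[i] = result[i]      # predicated-in: operation result written
        elif zeroing:
            dest[i] = 0              # zeroing: masked-out element forced to 0
        # non-zeroing: masked-out element left alone
    return dest

a = apply_predicated_op([9, 9, 9, 9], [1, 2, 3, 4], 0b0101, zeroing=False)
b = apply_predicated_op([9, 9, 9, 9], [1, 2, 3, 4], 0b0101, zeroing=True)
# a == [1, 9, 3, 9]; b == [1, 0, 3, 0]
```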
467
468 16 bit format:
469
470 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
471 | ----- | - | - | - | - | ------- | ------- |
472 | 0 | predkey | zero0 | inv0 | i/f | regidx | rsvd |
473 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
474 | ... | predkey | ..... | .... | i/f | ....... | ....... |
475 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
476
477
478 8 bit format:
479
480 | PrCSR | 7 | 6 | 5 | (4..0) |
481 | ----- | - | - | - | ------- |
482 | 0 | zero0 | inv0 | i/f | regnum |
483
484 The 8 bit format is a compact and less expressive variant of the full 16 bit format. Usage of the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication, in the same fashion as described above.
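
A decoding sketch of the 8 bit predication format (field layout from the table above; the exact rule for the implicit predicate register is an interpretation, assumed here to number upwards from x9 by table-entry index):

```python
# Illustrative decoder for the 8-bit predication format. The predicate
# register is implicit: assumed here to number upwards from x9 per entry.
def decode_pred_8bit(entry, table_index):
    regnum = entry & 0x1f       # bits 4..0: register that activates predication
    is_int = (entry >> 5) & 1   # bit 5: integer (1) vs floating-point (0)
    inv    = (entry >> 6) & 1   # bit 6: invert the predicate mask
    zero   = (entry >> 7) & 1   # bit 7: zeroing mode
    predidx = 9 + table_index   # assumption: implicit predicate reg x9, x10...
    return regnum, is_int, inv, zero, predidx
```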
485
486 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
487 it will be faster to turn the table around (maintain topologically
488 equivalent state):
489
490 struct pred {
491 bool zero;
492 bool inv;
493 bool enabled;
494 int predidx; // redirection: actual int register to use
495 }
496
497 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
498 struct pred int_pred_reg[32]; // 64 in future (bank=1)
499
500 for (i = 0; i < 16; i++)
501 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
502 idx = CSRpred[i].regidx
503 tb[idx].zero = CSRpred[i].zero
504 tb[idx].inv = CSRpred[i].inv
505 tb[idx].predidx = CSRpred[i].predidx
506 tb[idx].enabled = true
507
508 So when an operation is to be predicated, it is the internal state that
509 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
510 pseudo-code for operations is given, where p is the explicit (direct)
511 reference to the predication register to be used:
512
513 for (int i=0; i<vl; ++i)
514 if ([!]preg[p][i])
515 (d ? vreg[rd][i] : sreg[rd]) =
516 iop(s1 ? vreg[rs1][i] : sreg[rs1],
517 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
518
519 This instead becomes an *indirect* reference using the *internal* state
520 table generated from the Predication CSR key-value store, which is used
521 as follows.
522
523 if type(iop) == INT:
524 preg = int_pred_reg[rd]
525 else:
526 preg = fp_pred_reg[rd]
527
528 for (int i=0; i<vl; ++i)
529 predicate, zeroing = get_pred_val(type(iop) == INT, rd)
530 if (predicate & (1<<i))
531 (d ? regfile[rd+i] : regfile[rd]) =
532 iop(s1 ? regfile[rs1+i] : regfile[rs1],
533 s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
534 else if (zeroing)
535 (d ? regfile[rd+i] : regfile[rd]) = 0
536
537 Note:
538
539 * d, s1 and s2 are booleans indicating whether destination,
540 source1 and source2 are vector or scalar
541 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
542 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
543 register-level redirection (from the Register table) if they are
544 vectors.
545
546 If written as a function, obtaining the predication mask (and whether
547 zeroing takes place) may be done as follows:
548
549 def get_pred_val(bool is_int_op, int reg):
550 tb = int_reg if is_int_op else fp_reg
551 if (!tb[reg].enabled):
552 return ~0x0, False // all enabled; no zeroing
553 tb = int_pred if is_int_op else fp_pred
554 if (!tb[reg].enabled):
555 return ~0x0, False // all enabled; no zeroing
556 predidx = tb[reg].predidx // redirection occurs HERE
557 predicate = intreg[predidx] // actual predicate HERE
558 if (tb[reg].inv):
559 predicate = ~predicate // invert ALL bits
560 return predicate, tb[reg].zero
561
562 Note here, critically, that **only** if the register is marked
563 in its **register** table entry as being "active" does the testing
564 proceed further to check if the **predicate** table entry is
565 also active.
566
567 Note also that this is in direct contrast to branch operations
568 for the storage of comparisons: in these specific circumstances
569 the requirement for there to be an active *register* entry
570 is removed.
571
572 ## REMAP CSR <a name="remap" />
573
574 (Note: both the REMAP and SHAPE sections are best read after the
575 rest of the document has been read)
576
577 There is one 32-bit CSR which may be used to indicate which registers,
578 if used in any operation, must be "reshaped" (re-mapped) from a linear
579 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
580 access to elements within a register.
581
582 The 32-bit REMAP CSR may reshape up to 3 registers:
583
584 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
585 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
586 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
587
588 regidx0-2 refer not to the Register CSR CAM entry but to the underlying
589 *real* register (see regidx, the value) and consequently is 7-bits wide.
590 Since reshaping x0 would be pointless, a value of zero (referring to x0)
591 is used to indicate "disabled".
592 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
593 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
594
595 It is anticipated that these specialist CSRs will not be used very often.
596 Unlike the CSR Register and Predication tables, the REMAP CSRs use
597 the full 7-bit regidx so that they can be set once and left alone,
598 whilst the CSR Register entries pointing to them are disabled, instead.
599
600 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
601
602 (Note: both the REMAP and SHAPE sections are best read after the
603 rest of the document has been read)
604
605 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
606 which have the same format. When each SHAPE CSR is set entirely to zeros,
607 remapping is disabled: the register's elements are a linear (1D) vector.
608
609 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
610 | ------- | -- | ------- | -- | ------- | -- | ------- |
611 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
612
613 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
614 is added to the element index during the loop calculation.
615
616 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
617 that the array dimensionality for that dimension is 1. A value of xdimsz=2
618 would indicate that in the first dimension there are 3 elements in the
619 array. The format of the array is therefore as follows:
620
621 array[xdim+1][ydim+1][zdim+1]
622
623 However whilst illustrative of the dimensionality, that does not take the
624 "permute" setting into account. "permute" may be any one of six values
625 (0-5, with values of 6 and 7 being reserved, and not legal). The table
626 below shows how the permutation dimensionality order works:
627
628 | permute | order | array format |
629 | ------- | ----- | ------------------------ |
630 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
631 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
632 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
633 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
634 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
635 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
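
For instance (illustrative), with a 2D shape the permute option simply changes which index varies fastest, yielding a transposed access pattern:

```python
# Illustrative: linear element indices of a 2x3 array walked with x varying
# fastest (permute 000, order 0,1,2) versus y varying fastest (permute 010,
# order 1,0,2), which produces a transposed access pattern.
xdim, ydim = 2, 3
normal     = [y * xdim + x for y in range(ydim) for x in range(xdim)]
transposed = [y * xdim + x for x in range(xdim) for y in range(ydim)]
```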
636
637 In other words, the "permute" option changes the order in which
638 nested for-loops over the array would be done. The algorithm below
639 shows this more clearly, and may be executed as a python program:
640
641 # mapidx = REMAP.shape2
642 xdim = 3 # SHAPE[mapidx].xdim_sz+1
643 ydim = 4 # SHAPE[mapidx].ydim_sz+1
644 zdim = 5 # SHAPE[mapidx].zdim_sz+1
645
646 lims = [xdim, ydim, zdim]
647 idxs = [0,0,0] # starting indices
648 order = [1,0,2] # experiment with different permutations, here
649 offs = 0 # experiment with different offsets, here
650
651 for idx in range(xdim * ydim * zdim):
652 new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
653 print(new_idx, end=" ")
654 for i in range(3):
655 idxs[order[i]] = idxs[order[i]] + 1
656 if (idxs[order[i]] != lims[order[i]]):
657 break
658 print()
659 idxs[order[i]] = 0
660
661 Here, it is assumed that this algorithm be run within all pseudo-code
662 throughout this document where a (parallelism) for-loop would normally
663 run from 0 to VL-1 to refer to contiguous register
664 elements; instead, where REMAP indicates to do so, the element index
665 is run through the above algorithm to work out the **actual** element
666 index, instead. Given that there are three possible SHAPE entries, up to
667 three separate registers in any given operation may be simultaneously
668 remapped:
669
670 function op_add(rd, rs1, rs2) # add not VADD!
671 ...
672 ...
673 for (i = 0; i < VL; i++)
674 if (predval & 1<<i) # predication uses intregs
675 ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
676 ireg[rs2+remap(irs2)];
677 if (!int_vec[rd ].isvector) break;
678 if (int_vec[rd ].isvector) { id += 1; }
679 if (int_vec[rs1].isvector) { irs1 += 1; }
680 if (int_vec[rs2].isvector) { irs2 += 1; }
681
682 By changing remappings, 2D matrices may be transposed "in-place" for one
683 operation, followed by setting a different permutation order without
684 having to move the values in the registers to or from memory. Also,
685 the reason for having REMAP separate from the three SHAPE CSRs is so
686 that in a chain of matrix multiplications and additions, for example,
687 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
688 changed to target different registers.
689
690 Note that:
691
692 * Over-running the register file clearly has to be detected and
693 an illegal instruction exception thrown
694 * When non-default elwidths are set, the exact same algorithm still
695 applies (i.e. it offsets elements *within* registers rather than
696 entire registers).
697 * If permute option 000 is utilised, the actual order of the
698 reindexing does not change!
699 * If two or more dimensions are set to zero, the actual order does not change!
700 * The above algorithm is pseudo-code **only**. Actual implementations
701 will need to take into account the fact that the element for-looping
702 must be **re-entrant**, due to the possibility of exceptions occurring.
703 See MSTATE CSR, which records the current element index.
704 * Twin-predicated operations require **two** separate and distinct
705 element offsets. The above pseudo-code algorithm will be applied
706 separately and independently to each, should each of the two
707 operands be remapped. *This even includes C.LDSP* and other operations
708 in that category, where in that case it will be the **offset** that is
709 remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
within the middle of a register. Without offsets, it is necessary
either to use a predicated MV, skipping the first elements, or
to perform a LOAD/STORE cycle to memory.
With offsets, the data does not have to be moved.
715 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
716 less than MVL is **perfectly legal**, albeit very obscure. It permits
717 entries to be regularly presented to operands **more than once**, thus
718 allowing the same underlying registers to act as an accumulator of
719 multiple vector or matrix operations, for example.
720
721 Clearly here some considerable care needs to be taken as the remapping
722 could hypothetically create arithmetic operations that target the
723 exact same underlying registers, resulting in data corruption due to
724 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
725 register-renaming will have an easier time dealing with this than
726 DSP-style SIMD micro-architectures.
727
728 # Instruction Execution Order
729
730 Simple-V behaves as if it is a hardware-level "macro expansion system",
731 substituting and expanding a single instruction into multiple sequential
732 instructions with contiguous and sequentially-incrementing registers.
733 As such, it does **not** modify - or specify - the behaviour and semantics of
734 the execution order: that may be deduced from the **existing** RV
735 specification in each and every case.
736
737 So for example if a particular micro-architecture permits out-of-order
738 execution, and it is augmented with Simple-V, then wherever instructions
739 may be out-of-order then so may the "post-expansion" SV ones.
740
741 If on the other hand there are memory guarantees which specifically
742 prevent and prohibit certain instructions from being re-ordered
743 (such as the Atomicity Axiom, or FENCE constraints), then clearly
744 those constraints **MUST** also be obeyed "post-expansion".
745
746 It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
748 design, or about changing the RISC-V Specification.
749 It is **purely** about compacting what would otherwise be contiguous
750 instructions that use sequentially-increasing register numbers down
751 to the **one** instruction.
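
A minimal software sketch of this "macro expansion" view (illustrative
names, not part of the specification): one tagged instruction with VL set
to 4 expands into four sequential scalar instructions, with vector-tagged
register numbers incrementing and scalar ones repeating:

```python
VL = 4

def expand(opcode, rd, rs1, rs2, isvec):
    # expand one SV-tagged instruction into VL sequential scalar ops;
    # vector-tagged register numbers increment, scalar ones repeat
    out = []
    for i in range(VL):
        out.append((opcode,
                    rd  + (i if isvec[rd]  else 0),
                    rs1 + (i if isvec[rs1] else 0),
                    rs2 + (i if isvec[rs2] else 0)))
    return out

# x3 and x10 tagged as vectors, x20 left scalar: a vector-scalar add
isvec = {3: True, 10: True, 20: False}
ops = expand("add", 3, 10, 20, isvec)
```

Note that this deliberately omits predication and the scalar-destination
early-exit shown in the pseudo-code elsewhere in this document.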
752
753 # Instructions <a name="instructions" />
754
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
765
766 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
767 equivalents, so are left out of Simple-V. VSELECT could be included if
768 there existed a MV.X instruction in RV (MV.X is a hypothetical
769 non-immediate variant of MV that would allow another register to
770 specify which register was to be copied). Note that if any of these three
771 instructions are added to any given RV extension, their functionality
772 will be inherently parallelised.
773
774 With some exceptions, where it does not make sense or is simply too
775 challenging, all RV-Base instructions are parallelised:
776
* CSR instructions, whilst a case could be made for fast-polling of
a CSR into multiple registers, or for being able to copy multiple
contiguously-addressed CSRs into contiguous registers, and so on,
are the fundamental core basis of SV. If parallelised, extreme
care would need to be taken. Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
783 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
784 left as scalar.
785 * LR/SC could hypothetically be parallelised however their purpose is
786 single (complex) atomic memory operations where the LR must be followed
787 up by a matching SC. A sequence of parallel LR instructions followed
788 by a sequence of parallel SC instructions therefore is guaranteed to
789 not be useful. Not least: the guarantees of a Multi-LR/SC
790 would be impossible to provide if emulated in a trap.
791 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
792 paralleliseable anyway.
793
794 All other operations using registers are automatically parallelised.
795 This includes AMOMAX, AMOSWAP and so on, where particular care and
796 attention must be paid.
797
Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.
800
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
814
815 Note that for simplicity there is quite a lot missing from the above
816 pseudo-code: element widths, zeroing on predication, dimensional
817 reshaping and offsets and so on. However it demonstrates the basic
818 principle. Augmentations that produce the full pseudo-code are covered in
819 other sections.
820
821 ## Instruction Format
822
823 It is critical to appreciate that there are
824 **no operations added to SV, at all**.
825
826 Instead, by using CSRs to tag registers as an indication of "changed behaviour",
827 SV *overloads* pre-existing branch operations into predicated
828 variants, and implicitly overloads arithmetic operations, MV,
829 FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
830 and predication. **Everything** becomes parallelised. *This includes
831 Compressed instructions* as well as any future instructions and Custom
832 Extensions.
833
Note: using CSR tags to change the behaviour of instructions is nothing new,
including in RISC-V. UXL, SXL and MXL change the behaviour so that XLEN=32/64/128.
836 FRM changes the behaviour of the floating-point unit, to alter the rounding
837 mode. Other architectures change the LOAD/STORE byte-order from big-endian
838 to little-endian on a per-instruction basis. SV is just a little more...
839 comprehensive in its effect on instructions.
840
841 ## Branch Instructions
842
843 ### Standard Branch <a name="standard_branch"></a>
844
845 Branch operations use standard RV opcodes that are reinterpreted to
846 be "predicate variants" in the instance where either of the two src
847 registers are marked as vectors (active=1, vector=1).
848
849 Note that the predication register to use (if one is enabled) is taken from
850 the *first* src register, and that this is used, just as with predicated
851 arithmetic operations, to mask whether the comparison operations take
852 place or not. The target (destination) predication register
853 to use (if one is enabled) is taken from the *second* src register.
854
855 If either of src1 or src2 are scalars (whether by there being no
856 CSR register entry or whether by the CSR entry specifically marking
857 the register as "scalar") the comparison goes ahead as vector-scalar
858 or scalar-vector.
859
860 In instances where no vectorisation is detected on either src registers
861 the operation is treated as an absolutely standard scalar branch operation.
862 Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
864 those tests that are predicated out).
865
866 Note that when zero-predication is enabled (from source rs1),
867 a cleared bit in the predicate indicates that the result
868 of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
870 when zeroing is not set: bits in the destination predicate are
871 only *set*; they are **not** cleared. This is important to appreciate,
872 as there may be an expectation that, going into the hardware-loop,
873 the destination predicate is always expected to be set to zero:
874 this is **not** the case. The destination predicate is only set
875 to zero if **zeroing** is enabled.
876
877 Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by reversing
src1 and src2.
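
The operand-reversal synthesis can be sketched as follows (plain Python
comparison functions standing in for the branch comparisons; the 32-bit
unsigned wrap is an assumption for illustration only):

```python
def blt(a, b):  return a < b                      # signed compare
def bltu(a, b): return (a % 2**32) < (b % 2**32)  # unsigned, assuming XLEN=32

# BGT rs1, rs2 is BLT with the operands reversed; BLE is BGE reversed, etc.
def bgt(a, b):  return blt(b, a)
def ble(a, b):  return not blt(b, a)
def bgtu(a, b): return bltu(b, a)
def bleu(a, b): return not bltu(b, a)
```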
880
881 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
882 for predicated compare operations of function "cmp":
883
    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);
888
889 With associated predication, vector-length adjustments and so on,
890 and temporarily ignoring bitwidth (which makes the comparisons more
891 complex), this becomes:
892
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
930
931 Notes:
932
933 * Predicated SIMD comparisons would break src1 and src2 further down
934 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
935 Reordering") setting Vector-Length times (number of SIMD elements) bits
936 in Predicate Register rd, as opposed to just Vector-Length bits.
937 * The execution of "parallelised" instructions **must** be implemented
938 as "re-entrant" (to use a term from software). If an exception (trap)
939 occurs during the middle of a vectorised
940 Branch (now a SV predicated compare) operation, the partial results
941 of any comparisons must be written out to the destination
942 register before the trap is permitted to begin. If however there
943 is no predicate, the **entire** set of comparisons must be **restarted**,
944 with the offset loop indices set back to zero. This is because
945 there is no place to store the temporary result during the handling
946 of traps.
947
948 TODO: predication now taken from src2. also branch goes ahead
949 if all compares are successful.
950
Note also that, whereas normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case: src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.
957
958 Also note: SV Branch operations are **not** twin-predicated
959 (see Twin Predication section). This would require three
960 element offsets: one to track src1, one to track src2 and a third
961 to track where to store the accumulation of the results. Given
962 that the element offsets need to be exposed via CSRs so that
963 the parallel hardware looping may be made re-entrant on traps
964 and exceptions, the decision was made not to make SV Branches
965 twin-predicated.
966
967 ### Floating-point Comparisons
968
Floating-point branch operations do not exist; there are only compares.
Interestingly no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it is not actually a Branch at all: it is a compare.
973
974 In RV (scalar) Base, a branch on a floating-point compare is
975 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
976 This does extend to SV, as long as x1 (in the example sequence given)
977 is vectorised. When that is the case, x1..x(1+VL-1) will also be
978 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
979 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
980 so on. Consequently, unlike integer-branch, FP Compare needs no
981 modification in its behaviour.
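
As an illustrative software model of that sequence (names invented here;
f0 and f5 are assumed to hold VL-element vectors, x1 the vectorised
compare destination):

```python
def fp_compare_branch(f_src1, f_src2, VL):
    x = [0] * 32
    # vectorised FEQ x1, f0, f5: writes 1 or 0 into x1..x(1+VL-1)
    for i in range(VL):
        x[1 + i] = 1 if f_src1[i] == f_src2[i] else 0
    # vectorised BEQ x1, x0: each element is compared against x0 (zero);
    # the branch goes ahead if and only if *all* tests succeed
    return all(x[1 + i] == x[0] for i in range(VL))

taken = fp_compare_branch([1.0, 2.0], [1.5, 2.5], 2)  # no element equal
```

Here the branch is taken because every FEQ result is zero, so every
element-wise BEQ test against x0 succeeds.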
982
983 In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
984 and whilst in ordinary branch code this is fine because the standard
985 RVF compare can always be followed up with an integer BEQ or a BNE (or
986 a compressed comparison to zero or non-zero), in predication terms that
987 becomes more of an impact. To deal with this, SV's predication has
988 had "invert" added to it.
989
990 Also: note that FP Compare may be predicated, using the destination
991 integer register (rd) to determine the predicate. FP Compare is **not**
992 a twin-predication operation, as, again, just as with SV Branches,
993 there are three registers involved: FP src1, FP src2 and INT rd.
994
995 ### Compressed Branch Instruction
996
997 Compressed Branch instructions are, just like standard Branch instructions,
998 reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz rs1s is equivalent to beq rs1s,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.
1003
The specific required use of x0 is, with a little thought, quite obvious,
though initially counterintuitive. Clearly it is **not** recommended to redirect
1006 x0 with a CSR register entry, however as a means to opaquely obtain
1007 a predication target it is the only sensible option that does not involve
1008 additional special CSRs (or, worse, additional special opcodes).
1009
1010 Note also that, just as with standard branches, the 2nd source
1011 (in this case x0 rather than src2) does **not** have to have its CSR
1012 register table marked as "active" in order for predication to work.
1013
1014 ## Vectorised Dual-operand instructions
1015
1016 There is a series of 2-operand instructions involving copying (and
1017 sometimes alteration):
1018
1019 * C.MV
1020 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1021 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1022 * LOAD(-FP) and STORE(-FP)
1023
1024 All of these operations follow the same two-operand pattern, so it is
1025 *both* the source *and* destination predication masks that are taken into
1026 account. This is different from
1027 the three-operand arithmetic instructions, where the predication mask
1028 is taken from the *destination* register, and applied uniformly to the
1029 elements of the source register(s), element-for-element.
1030
1031 The pseudo-code pattern for twin-predicated operations is as
1032 follows:
1033
    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1045
1046 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1047 and vector-vector, and predicated variants of all of those.
1048 Zeroing is not presently included (TODO). As such, when compared
1049 to RVV, the twin-predicated variants of C.MV and FMV cover
1050 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1051 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1052
1053 Note that:
1054
1055 * elwidth (SIMD) is not covered in the pseudo-code above
1056 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1057 not covered
1058 * zero predication is also not shown (TODO).
1059
1060 ### C.MV Instruction <a name="c_mv"></a>
1061
There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
1065
1066 If either the source or the destination register are marked as vectors
1067 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1068 move operation. The actual instruction's format does not change:
1069
1070 [[!table data="""
1071 15 12 | 11 7 | 6 2 | 1 0 |
1072 funct4 | rd | rs | op |
1073 4 | 5 | 5 | 2 |
1074 C.MV | dest | src | C0 |
1075 """]]
1076
1077 A simplified version of the pseudocode for this operation is as follows:
1078
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1090
1091 There are several different instructions from RVV that are covered by
1092 this one opcode:
1093
1094 [[!table data="""
1095 src | dest | predication | op |
1096 scalar | vector | none | VSPLAT |
1097 scalar | vector | destination | sparse VSPLAT |
1098 scalar | vector | 1-bit dest | VINSERT |
1099 vector | scalar | 1-bit? src | VEXTRACT |
1100 vector | vector | none | VCOPY |
1101 vector | vector | src | Vector Gather |
1102 vector | vector | dest | Vector Scatter |
1103 vector | vector | src & dest | Gather/Scatter |
1104 vector | vector | src == dest | sparse VCOPY |
1105 """]]
1106
1107 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1108 operations with inversion on the src and dest predication for one of the
1109 two C.MV operations.
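
Several rows of the table above can be demonstrated with a direct Python
translation of the op_mv pseudo-code (a simplified model: register
redirection is assumed to be already applied, and zeroing is omitted):

```python
def op_mv(ireg, rd, rs, rd_isvec, rs_isvec, pd, ps, VL):
    # twin-predicated move: i walks source elements, j walks dest elements
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while not (ps >> i) & 1: i += 1  # skip masked-out src elements
        if rd_isvec:
            while not (pd >> j) & 1: j += 1  # skip masked-out dest elements
        ireg[rd + j] = ireg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        else: break
    return ireg

# VSPLAT: scalar src x5 copied into vector x8..x11 (dest predicate all-1s)
regs = list(range(32))
op_mv(regs, 8, 5, True, False, 0b1111, 0, 4)
```

Here regs[8..11] all end up holding the value from x5; a sparse VSPLAT
is obtained simply by clearing bits in the destination predicate.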
1110
Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
1116
1117 ### FMV, FNEG and FABS Instructions
1118
1119 These are identical in form to C.MV, except covering floating-point
1120 register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
1123 operation of the appropriate size covering the source and destination
1124 register bitwidths.
1125
1126 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1127
### FCVT Instructions
1129
1130 These are again identical in form to C.MV, except that they cover
1131 floating-point to integer and integer to floating-point. When element
1132 width in each vector is set to default, the instructions behave exactly
1133 as they are defined for standard RV (scalar) operations, except vectorised
1134 in exactly the same fashion as outlined in C.MV.
1135
1136 However when the source or destination element width is not set to default,
1137 the opcode's explicit element widths are *over-ridden* to new definitions,
1138 and the opcode's element width is taken as indicative of the SIMD width
1139 (if applicable i.e. if packed SIMD is requested) instead.
1140
1141 For example FCVT.S.L would normally be used to convert a 64-bit
1142 integer in register rs1 to a 64-bit floating-point number in rd.
1143 If however the source rs1 is set to be a vector, where elwidth is set to
1144 default/2 and "packed SIMD" is enabled, then the first 32 bits of
1145 rs1 are converted to a floating-point number to be stored in rd's
1146 first element and the higher 32-bits *also* converted to floating-point
1147 and stored in the second. The 32 bit size comes from the fact that
1148 FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
1149 divide that by two it means that rs1 element width is to be taken as 32.
1150
1151 Similar rules apply to the destination register.
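
The packed-SIMD FCVT.S.L example above can be modelled as follows (a
sketch under the stated assumptions: RV64, rs1 elwidth = default/2,
packed SIMD enabled; the function name is invented for illustration):

```python
import struct

def fcvt_s_l_packed(rs1_bits):
    # treat the 64-bit source register as two 32-bit signed integers
    lo = rs1_bits & 0xFFFFFFFF
    hi = (rs1_bits >> 32) & 0xFFFFFFFF
    lo_s, hi_s = struct.unpack("<ii", struct.pack("<II", lo, hi))
    # convert each to single-precision (round-trip through an f32)
    def to_f32(v):
        return struct.unpack("<f", struct.pack("<f", float(v)))[0]
    return to_f32(lo_s), to_f32(hi_s)

elem0, elem1 = fcvt_s_l_packed((7 << 32) | 0xFFFFFFFB)  # hi = 7, lo = -5
```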
1152
1153 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1154
1155 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1156 the interpretation of the instruction fields). This
1157 actually undermined the fundamental principle of SV, namely that there
1158 be no modifications to the scalar behaviour (except where absolutely
1159 necessary), in order to simplify an implementor's task if considering
1160 converting a pre-existing scalar design to support parallelism.
1161
1162 So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
1163 do not change in SV, however just as with C.MV it is important to note
1164 that dual-predication is possible.
1165
1166 In vectorised architectures there are usually at least two different modes
1167 for LOAD/STORE:
1168
1169 * Read (or write for STORE) from sequential locations, where one
1170 register specifies the address, and the one address is incremented
1171 by a fixed amount. This is usually known as "Unit Stride" mode.
1172 * Read (or write) from multiple indirected addresses, where the
1173 vector elements each specify separate and distinct addresses.
1174
1175 To support these different addressing modes, the CSR Register "isvector"
1176 bit is used. So, for a LOAD, when the src register is set to
1177 scalar, the LOADs are sequentially incremented by the src register
1178 element width, and when the src register is set to "vector", the
1179 elements are treated as indirection addresses. Simplified
1180 pseudo-code would look like this:
1181
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + j * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
1201
1202 Notes:
1203
1204 * For simplicity, zeroing and elwidth is not included in the above:
1205 the key focus here is the decision-making for srcbase; vectorised
1206 rs means use sequentially-numbered registers as the indirection
1207 address, and scalar rs is "offset" mode.
1208 * The test towards the end for whether both source and destination are
1209 scalar is what makes the above pseudo-code provide the "standard" RV
1210 Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
(8 bytes), and also whether the element width is over-ridden
(see special element width section).
1215
1216 ## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>
1217
1218 C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
1219 where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
1220 It is therefore possible to use predicated C.LWSP to efficiently
1221 pop registers off the stack (by predicating x2 as the source), cherry-picking
1222 which registers to store to (by predicating the destination). Likewise
1223 for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
1224
1225 The two modes ("unit stride" and multi-indirection) are still supported,
1226 as with standard LD/ST. Essentially, the only difference is that the
1227 use of x2 is hard-coded into the instruction.
1228
1229 **Note**: it is still possible to redirect x2 to an alternative target
1230 register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
1231 general-purpose LOAD/STORE operations.
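
A sketch of the "predicated pop" idea (an illustrative model only: the
destination vector is assumed to start at x8, each executed element
consumes one stack slot, and the hard-coded x2/sp is the unit-stride
scalar source):

```python
def c_ldsp_pop(mem, regs, sp, pd, VL, nbytes=8):
    i = j = 0
    while i < VL and j < VL:
        # destination predicate cherry-picks which registers receive values
        while j < VL and not (pd >> j) & 1:
            j += 1
        if j >= VL:
            break
        regs[8 + j] = mem[sp + i * nbytes]  # unit stride upward from sp (x2)
        i += 1                              # each pop consumes one stack slot
        j += 1
    return regs

mem = {100: 11, 108: 22, 116: 33, 124: 44}  # stack contents
regs = [0] * 32
c_ldsp_pop(mem, regs, 100, 0b1011, 4)       # skip the third dest register
```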
1232
1233 ## Compressed LOAD / STORE Instructions
1234
Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.
1240
1241 # Element bitwidth polymorphism <a name="elwidth"></a>
1242
1243 Element bitwidth is best covered as its own special section, as it
1244 is quite involved and applies uniformly across-the-board. SV restricts
1245 bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
1246
1247 The effect of setting an element bitwidth is to re-cast each entry
1248 in the register table, and for all memory operations involving
1249 load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:
1252
    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];
1262
1263 where the CSR Register table entry (not the instruction alone) determines
1264 which of those union entries is to be used on each operation, and the
1265 VL element offset in the hardware-loop specifies the index into each array.
1266
However a naive interpretation of the data structure above masks the
fact that, when VL is set greater than 8 with the bitwidth at 8 (for
example), accessing one specific register "spills over" to the following
parts of the register file in a sequential fashion. So a much more
accurate way to reflect this would be:
1272
    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];
1283
1284 where when accessing any individual regfile[n].b entry it is permitted
1285 (in c) to arbitrarily over-run the *declared* length of the array (zero),
1286 and thus "overspill" to consecutive register file entries in a fashion
1287 that is completely transparent to a greatly-simplified software / pseudo-code
1288 representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt is made to access beyond the
"real" register bytes.
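
The "overspill" addressing can be captured in a few lines (an
illustrative helper, not from the specification): given a starting
register, an element width and an element index, it locates the
underlying 64-bit register and the byte offset within it:

```python
def element_location(reg, elwidth_bits, element, xlen=64):
    # elements pack into each XLEN-wide register and spill transparently
    # into the registers that follow
    bytes_per_el = elwidth_bits // 8
    per_reg = (xlen // 8) // bytes_per_el  # elements per register
    return (reg + element // per_reg,
            (element % per_reg) * bytes_per_el)

# element 10 of an 8-bit vector starting at x3 lives in x4, byte offset 2
loc = element_location(3, 8, 10)
```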
1293
Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size; this pseudo-code is otherwise
identical to its non-polymorphic version (above):
1297
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...
1320
So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
registers following sequentially on from each) are "type-cast" to 8-bit;
for 16-bit entries likewise, and so on.
1324
1325 However that only covers the case where the element widths are the same.
1326 Where the element widths are different, the following algorithm applies:
1327
1328 * Analyse the bitwidth of all source operands and work out the
1329 maximum. Record this as "maxsrcbitwidth"
* If any given source operand requires sign-extension or zero-extension
(ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
sign-extension / zero-extension (or whatever is specified in the standard
RV specification), **change** that to sign-extending from the respective
individual source operand's bitwidth (from the CSR table) out to
"maxsrcbitwidth" (previously calculated).
1336 * Following separate and distinct (optional) sign/zero-extension of all
1337 source operands as specifically required for that operation, carry out the
1338 operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
etc.), overload the RV specification with the bitwidth from the
destination register's elwidth entry.
1347 * Finally, store the (optionally) sign/zero-extended value into its
1348 destination: memory for sb/sw etc., or an offset section of the register
1349 file for an arithmetic operation.
1350
1351 In this way, polymorphic bitwidths are achieved without requiring a
1352 massive 64-way permutation of calculations **per opcode**, for example
1353 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1354 rd bitwidths). The pseudo-code is therefore as follows:
1355
    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;
1362
    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1376
    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1389
1390 set_polymorphed_reg(reg, bitwidth, offset, val):
1391 if (!int_csr[reg].isvec):
1392 # sign/zero-extend depending on opcode requirements, from
1393 # the reg's bitwidth out to the full bitwidth of the regfile
1394 val = sign_or_zero_extend(val, bitwidth, xlen)
1395 int_regfile[reg].l[0] = val
1396 elif bitwidth == 8:
1397 int_regfile[reg].b[offset] = val
1398 elif bitwidth == 16:
1399 int_regfile[reg].s[offset] = val
1400 elif bitwidth == 32:
1401 int_regfile[reg].i[offset] = val
1402 elif bitwidth == 64:
1403 int_regfile[reg].l[offset] = val
1404
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth)     # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            if (op_requires_sign_extend_src2)
                src2 = sign_extend(src2, maxsrcwid)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { ird  += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
1424
Whilst the specific sign-extension and zero-extension pseudocode call
details are left out (each operation being different), the above
should make clear that:
1428
1429 * the source operands are extended out to the maximum bitwidth of all
1430 source operands
1431 * the operation takes place at that maximum source bitwidth (the
1432 destination bitwidth is not involved at this point, at all)
1433 * the result is extended (or potentially even, truncated) before being
1434 stored in the destination. i.e. truncation (if required) to the
1435 destination width occurs **after** the operation **not** before.
1436 * when the destination is not marked as "vectorised", the **full**
1437 (standard, scalar) register file entry is taken up, i.e. the
1438 element is either sign-extended or zero-extended to cover the
1439 full register bitwidth (XLEN) if it is not already XLEN bits long.
1440
1441 Implementors are entirely free to optimise the above, particularly
1442 if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
1444 directly equivalent and equal, for all inputs and all outputs,
1445 to those produced by the above algorithm.
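As a cross-check, the above algorithm can be expressed as a short executable
sketch in Python. The function names and the flat-list model of the register
file are illustrative only, not part of the specification:

```python
XLEN = 64

def bw(elwidth, xlen=XLEN):
    # elwidth encoding from the spec: 0=default, 1=default/2, 2=default*2, 3=8-bit
    return [xlen, xlen // 2, xlen * 2, 8][elwidth]

def get_polymorphed(regfile, reg, bits, offset):
    # view the register file, starting at `reg`, as a flat array of
    # `bits`-wide elements
    per_reg = XLEN // bits
    word = regfile[reg + offset // per_reg]
    return (word >> (bits * (offset % per_reg))) & ((1 << bits) - 1)

def set_polymorphed(regfile, reg, bits, offset, val):
    per_reg = XLEN // bits
    idx = reg + offset // per_reg
    shift = bits * (offset % per_reg)
    mask = ((1 << bits) - 1) << shift
    regfile[idx] = (regfile[idx] & ~mask) | ((val << shift) & mask)

def poly_add(regfile, rd, rs1, rs2, ew_d, ew_s1, ew_s2, vl):
    srcwid = max(bw(ew_s1), bw(ew_s2))  # operation at max source width
    destwid = bw(ew_d)
    for i in range(vl):
        s1 = get_polymorphed(regfile, rs1, srcwid, i)
        s2 = get_polymorphed(regfile, rs2, srcwid, i)
        result = (s1 + s2) & ((1 << srcwid) - 1)  # wraps at operation width
        # truncation/zero-extension to the destination happens on store
        set_polymorphed(regfile, rd, destwid, i, result)
```

Note how an 8-bit + 8-bit element sum wraps at the 8-bit operation width
*before* being zero-extended into a 64-bit destination element, exactly as
the bullet points above require.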
1446
1447 ## Polymorphic floating-point operation exceptions and error-handling
1448
For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, just as with scalar
operations, the floating-point flag is raised (FCSR). Again as with
scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.
1458
1459 Note that it is perfectly legitimate for floating-point bitwidths of
1460 only 8 to be specified. However whilst it is possible to apply IEEE 754
1461 principles, no actual standard yet exists. Implementors wishing to
1462 provide hardware-level 8-bit support rather than throw a trap to emulate
1463 in software should contact the author of this specification before
1464 proceeding.
1465
1466 ## Polymorphic shift operators
1467
1468 A special note is needed for changing the element width of left and right
1469 shift operators, particularly right-shift. Even for standard RV base,
1470 in order for correct results to be returned, the second operand RS2 must
1471 be truncated to be within the range of RS1's bitwidth. spike's implementation
1472 of sll for example is as follows:
1473
1474 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1475
1476 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1477 range 0..31 so that RS1 will only be left-shifted by the amount that
1478 is possible to fit into a 32-bit register. Whilst this appears not
1479 to matter for hardware, it matters greatly in software implementations,
1480 and it also matters where an RV64 system is set to "RV32" mode, such
1481 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1482 each.
1483
1484 For SV, where each operand's element bitwidth may be over-ridden, the
1485 rule about determining the operation's bitwidth *still applies*, being
1486 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1487 **also applies to the truncation of RS2**. In other words, *after*
1488 determining the maximum bitwidth, RS2's range must **also be truncated**
1489 to ensure a correct answer. Example:
1490
1491 * RS1 is over-ridden to a 16-bit width
1492 * RS2 is over-ridden to an 8-bit width
1493 * RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
1495 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1496
1497 Pseudocode (in spike) for this example would therefore be:
1498
1499 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1500
1501 This example illustrates that considerable care therefore needs to be
1502 taken to ensure that left and right shift operations are implemented
1503 correctly. The key is that
1504
1505 * The operation bitwidth is determined by the maximum bitwidth
1506 of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
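A minimal Python sketch of the truncation rule, assuming unsigned
(zero-extended) storing; the actual sign-extension behaviour is
operation-dependent:

```python
def poly_sll(rs1_val, rs2_val, rs1_bits, rs2_bits, rd_bits):
    opwid = max(rs1_bits, rs2_bits)       # operation width from the sources
    shamt = rs2_val & (opwid - 1)         # RS2 truncated to the op width range
    result = (rs1_val << shamt) & ((1 << opwid) - 1)
    return result & ((1 << rd_bits) - 1)  # truncated (or extended) on store
```

With RS1 at 16-bit and RS2 at 8-bit, a shift amount of 17 is truncated to
17 & 15 = 1, exactly as in the worked example above.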
1508
1509 ## Polymorphic MULH/MULHU/MULHSU
1510
1511 MULH is designed to take the top half MSBs of a multiply that
1512 does not fit within the range of the source operands, such that
1513 smaller width operations may produce a full double-width multiply
1514 in two cycles. The issue is: SV allows the source operands to
1515 have variable bitwidth.
1516
1517 Here again special attention has to be paid to the rules regarding
1518 bitwidth, which, again, are that the operation is performed at
1519 the maximum bitwidth of the **source** registers. Therefore:
1520
1521 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1522 be shifted down by 8 bits
1523 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1524 be shifted down by 16 bits (top 8 bits being zero)
1525 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1526 be shifted down by 16 bits
1527 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1528 be shifted down by 32 bits
1529 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1530 be shifted down by 32 bits
1531
1532 So again, just as with shift-left and shift-right, the result
1533 is shifted down by the maximum of the two source register bitwidths.
1534 And, exactly again, truncation or sign-extension is performed on the
1535 result. If sign-extension is to be carried out, it is performed
1536 from the same maximum of the two source register bitwidths out
1537 to the result element's bitwidth.
1538
1539 If truncation occurs, i.e. the top MSBs of the result are lost,
1540 this is "Officially Not Our Problem", i.e. it is assumed that the
1541 programmer actually desires the result to be truncated. i.e. if the
1542 programmer wanted all of the bits, they would have set the destination
1543 elwidth to accommodate them.
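A sketch of the unsigned (MULHU) case in Python illustrates the rule; MULH
would additionally sign-extend both operands to the operation width first
(function name illustrative):

```python
def poly_mulhu(rs1_val, rs2_val, rs1_bits, rs2_bits):
    # operation takes place at the maximum of the two source widths,
    # and the full product is shifted down by that same width
    opwid = max(rs1_bits, rs2_bits)
    return (rs1_val * rs2_val) >> opwid
```

For example, an 8-bit x 8-bit multiply of 0xFF by 0xFF produces the 16-bit
product 0xFE01, whose top half 0xFE is the MULHU result.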
1544
1545 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1546
1547 Polymorphic element widths in vectorised form means that the data
1548 being loaded (or stored) across multiple registers needs to be treated
1549 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1550 the source register's element width is **independent** from the destination's.
1551
1552 This makes for a slightly more complex algorithm when using indirection
1553 on the "addressed" register (source for LOAD and destination for STORE),
1554 particularly given that the LOAD/STORE instruction provides important
1555 information about the width of the data to be reinterpreted.
1556
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows (i being the loop index from 0 to VL-1):
1559
1560 srcbase = ireg[rs+i];
1561 return mem[srcbase + imm]; // returns XLEN bits
1562
1563 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1564 chunks are taken from the source memory location addressed by the current
1565 indexed source address register, and only when a full 32-bits-worth
1566 are taken will the index be moved on to the next contiguous source
1567 address register:
1568
1569 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1570 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1571 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1572 offs = i % elsperblock; // modulo
1573 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1574
1575 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1576 and 128 for LQ.
1577
1578 The principle is basically exactly the same as if the srcbase were pointing
1579 at the memory of the *register* file: memory is re-interpreted as containing
1580 groups of elwidth-wide discrete elements.
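A Python sketch of the address calculation (hypothetical helper name; the
byte address is computed explicitly here, where the pseudo-code above
instead re-casts the memory pointer to the element type):

```python
def load_element_byte_addr(ireg, rs, imm, i, opwidth, elwidth):
    # how many elwidth-wide elements fit in one LOAD of opwidth bits
    elsperblock = max(1, opwidth // elwidth)
    srcbase = ireg[rs + i // elsperblock]   # integer divide picks the register
    offs = i % elsperblock                  # element within the loaded block
    return srcbase + imm + offs * (elwidth // 8)
```

For LW (32-bit) with 16-bit elements, elements 0 and 1 come from the address
in the first register, elements 2 and 3 from the address in the next.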
1581
1582 When storing the result from a load, it's important to respect the fact
1583 that the destination register has its *own separate element width*. Thus,
1584 when each element is loaded (at the source element width), any sign-extension
1585 or zero-extension (or truncation) needs to be done to the *destination*
1586 bitwidth. Also, the storing has the exact same analogous algorithm as
1587 above, where in fact it is just the set\_polymorphed\_reg pseudocode
1588 (completely unchanged) used above.
1589
1590 One issue remains: when the source element width is **greater** than
1591 the width of the operation, it is obvious that a single LB for example
1592 cannot possibly obtain 16-bit-wide data. This condition may be detected
1593 where, when using integer divide, elsperblock (the width of the LOAD
1594 divided by the bitwidth of the element) is zero.
1595
The issue is "fixed" by clamping elsperblock to a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1599
1600 The elements, if the element bitwidth is larger than the LD operation's
1601 size, will then be sign/zero-extended to the full LD operation size, as
1602 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1603 being passed on to the second phase.
1604
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except that where, in the
previous pseudo-code (elwidth=default for both source and target),
predication was applied to the *registers*, it is now applied to the
**elements**.
1610
1611 Thus the full pseudocode for all LD operations may be written out
1612 as follows:
1613
1614 function LBU(rd, rs):
1615 load_elwidthed(rd, rs, 8, true)
1616 function LB(rd, rs):
1617 load_elwidthed(rd, rs, 8, false)
1618 function LH(rd, rs):
1619 load_elwidthed(rd, rs, 16, false)
1620 ...
1621 ...
1622 function LQ(rd, rs):
1623 load_elwidthed(rd, rs, 128, false)
1624
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        srcwid  = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, srcwid))
            else:
                val = sign_extend(val, min(opwidth, srcwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;
1651
1652 Note:
1653
1654 * when comparing against for example the twin-predicated c.mv
1655 pseudo-code, the pattern of independent incrementing of rd and rs
1656 is preserved unchanged.
1657 * just as with the c.mv pseudocode, zeroing is not included and must be
1658 taken into account (TODO).
1659 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1660 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1661 VSCATTER characteristics.
1662 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1663 a destination that is not vectorised (marked as scalar) will
1664 result in the element being fully sign-extended or zero-extended
1665 out to the full register file bitwidth (XLEN). When the source
1666 is also marked as scalar, this is how the compatibility with
1667 standard RV LOAD/STORE is preserved by this algorithm.
1668
1669 ### Example Tables showing LOAD elements
1670
1671 This section contains examples of vectorised LOAD operations, showing
1672 how the two stage process works (three if zero/sign-extension is included).
1673
1674
#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1676
1677 This is:
1678
1679 * a 64-bit load, with an offset of zero
1680 * with a source-address elwidth of 16-bit
1681 * into a destination-register with an elwidth of 32-bit
1682 * where VL=7
1683 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1684 * RV64, where XLEN=64 is assumed.
1685
First, the memory table. Because the element width is 16 and the
operation is LD (64-bit), the 64 bits loaded from memory are subdivided
into groups of **four** elements. And, with VL being 7 (deliberately, to
illustrate that this is reasonable and possible), the first four are
sourced from the offset addresses pointed to by x5, and the next three
from the offset addresses pointed to by the next contiguous register, x6:
1693
1694 [[!table data="""
1695 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1696 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1697 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1698 """]]
1699
1700 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1701 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1702
[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]
1714
1715 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1716 byte-addressable "memory". That "memory" happens to cover registers
1717 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1718
1719 [[!table data="""
1720 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1721 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1722 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1723 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1724 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1725 """]]
1726
1727 Thus we have data that is loaded from the **addresses** pointed to by
1728 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1729 x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.
1733
Note that whilst the memory addressing table is shown in left-to-right
byte order, the registers are shown in right-to-left (MSB-first) order.
This does **not**
1736 imply that bit or byte-reversal is carried out: it's just easier to visualise
1737 memory as being contiguous bytes, and emphasises that registers are not
1738 really actually "memory" as such.
1739
1740 ## Why SV bitwidth specification is restricted to 4 entries
1741
The four entries for SV element bitwidths only allow three over-rides:
1743
1744 * default bitwidth for a given operation *divided* by two
1745 * default bitwidth for a given operation *multiplied* by two
1746 * 8-bit
1747
At first glance this seems completely inadequate: for example, RV64
cannot possibly provide 16-bit operations, because 64 divided by
2 is 32. However, the reader may have forgotten that it is possible,
1751 at run-time, to switch a 64-bit application into 32-bit mode, by
1752 setting UXL. Once switched, opcodes that formerly had 64-bit
1753 meanings now have 32-bit meanings, and in this way, "default/2"
1754 now reaches **16-bit** where previously it meant "32-bit".
1755
There is however an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.
1759
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when the elwidth override is set
on a destination (RD), sign-extension and zero-extension, whilst adjusted
to match the override bitwidth, will overwrite the **full** register
entry (64-bit if RV64).
1765
1766 When vectorised is *set*, this indicates that the operation now treats
1767 **elements** as if they were independent registers, so regardless of
1768 the length, any parts of a given actual register that are not involved
1769 in the operation are **NOT** modified, but are **PRESERVED**.
1770
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
1774
1775 Example:
1776
1777 * rs1, rs2 and rd are all set to 8-bit
1778 * VL is set to 3
1779 * RV64 architecture is set (UXL=64)
1780 * add operation is carried out
1781 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1782 concatenated with similar add operations on bits 15..8 and 7..0
1783 * bits 24 through 63 **remain as they originally were**.
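The preservation rule above can be sketched in Python, modelling a 64-bit
register as an integer and the hidden predicate as a bound on the element
index (names are illustrative, not part of the specification):

```python
def simd_masked_add(rd_old, rs1, rs2, elbits, vl, reg_bits=64):
    nelems = reg_bits // elbits          # 8 elements for 8-bit within RV64
    elmask = (1 << elbits) - 1
    result = rd_old
    for e in range(nelems):
        shift = e * elbits
        if e < vl:  # hidden predicate bit set: write the element sum
            s = (((rs1 >> shift) & elmask) +
                 ((rs2 >> shift) & elmask)) & elmask
            result = (result & ~(elmask << shift)) | (s << shift)
        # predicate bit clear: element passes through, rd preserved
    return result
```

With elwidth=8 and VL=3, only the bottom three bytes of rd are replaced;
bits 24 through 63 are returned exactly as they were.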
1784
1785 Example SIMD micro-architectural implementation:
1786
1787 * SIMD architecture works out the nearest round number of elements
1788 that would fit into a full RV64 register (in this case: 8)
1789 * SIMD architecture creates a hidden predicate, binary 0b00000111
1790 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1791 * SIMD architecture goes ahead with the add operation as if it
1792 was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.
1797
1798 This requires a read on rd, however this is required anyway in order
1799 to support non-zeroing mode.
1800
1801 ## Polymorphic floating-point
1802
1803 Standard scalar RV integer operations base the register width on XLEN,
1804 which may be changed (UXL in USTATUS, and the corresponding MXL and
1805 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1806 arithmetic operations are therefore restricted to an active XLEN bits,
1807 with sign or zero extension to pad out the upper bits when XLEN has
1808 been dynamically set to less than the actual register size.
1809
1810 For scalar floating-point, the active (used / changed) bits are
1811 specified exclusively by the operation: ADD.S specifies an active
1812 32-bits, with the upper bits of the source registers needing to
1813 be all 1s ("NaN-boxed"), and the destination upper bits being
1814 *set* to all 1s (including on LOAD/STOREs).
1815
1816 Where elwidth is set to default (on any source or the destination)
1817 it is obvious that this NaN-boxing behaviour can and should be
1818 preserved. When elwidth is non-default things are less obvious,
1819 so need to be thought through. Here is a normal (scalar) sequence,
1820 assuming an RV64 which supports Quad (128-bit) FLEN:
1821
1822 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1823 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1824 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1825 top 64 MSBs ignored.
1826
1827 Therefore it makes sense to mirror this behaviour when, for example,
1828 elwidth is set to 32. Assume elwidth set to 32 on all source and
1829 destination registers:
1830
1831 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1832 floating-point numbers.
1833 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1834 in bits 0-31 and the second in bits 32-63.
1835 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1836
1837 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1838 of the registers either during the FLD **or** the ADD.D. The reason
1839 is that, effectively, the top 64 MSBs actually represent a completely
1840 independent 64-bit register, so overwriting it is not only gratuitous
1841 but may actually be harmful for a future extension to SV which may
1842 have a way to directly access those top 64 bits.
1843
1844 The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
1846 when "isvec" is false in a given register's CSR entry. Only when the
1847 elwidth is set to default **and** isvec is false will the standard
1848 RV behaviour be followed, namely that the upper bits be modified.
1849
1850 Ultimately if elwidth is default and isvec false on *all* source
1851 and destination registers, a SimpleV instruction defaults completely
1852 to standard RV scalar behaviour (this holds true for **all** operations,
1853 right across the board).
1854
1855 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
1856 non-default values are effectively all the same: they all still perform
1857 multiple ADD operations, just at different widths. A future extension
1858 to SimpleV may actually allow ADD.S to access the upper bits of the
1859 register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.
1861
In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar out-of-order
architecture there may be absolutely no difference; however, simpler
SIMD-style microarchitectures may not have the infrastructure in place
to know the difference, such that when VL=8 an ADD.D instruction
completes in two cycles (or more) where an ADD.Q would complete in one.
1872
1873 ## Specific instruction walk-throughs
1874
1875 This section covers walk-throughs of the above-outlined procedure
1876 for converting standard RISC-V scalar arithmetic operations to
1877 polymorphic widths, to ensure that it is correct.
1878
1879 ### add
1880
1881 Standard Scalar RV32/RV64 (xlen):
1882
1883 * RS1 @ xlen bits
1884 * RS2 @ xlen bits
1885 * add @ xlen bits
1886 * RD @ xlen bits
1887
1888 Polymorphic variant:
1889
1890 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
1891 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
1892 * add @ max(rs1, rs2) bits
1893 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
1894
1895 Note here that polymorphic add zero-extends its source operands,
1896 where addw sign-extends.
1897
1898 ### addw
1899
1900 The RV Specification specifically states that "W" variants of arithmetic
1901 operations always produce 32-bit signed values. In a polymorphic
1902 environment it is reasonable to assume that the signed aspect is
1903 preserved, where it is the length of the operands and the result
1904 that may be changed.
1905
1906 Standard Scalar RV64 (xlen):
1907
1908 * RS1 @ xlen bits
1909 * RS2 @ xlen bits
1910 * add @ xlen bits
1911 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
1912
1913 Polymorphic variant:
1914
1915 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
1916 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
1917 * add @ max(rs1, rs2) bits
1918 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
1919
1920 Note here that polymorphic addw sign-extends its source operands,
1921 where add zero-extends.
1922
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.
1927
1928 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
1929 where for add they are both zero-extended. This holds true for all arithmetic
1930 operations ending with "W".
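The difference between the two walk-throughs can be demonstrated with a
small Python sketch (illustrative names; `signed` selects the "W"-style
sign-extension of the narrower operand):

```python
def sign_extend(val, bits):
    val &= (1 << bits) - 1
    return val - (1 << bits) if val & (1 << (bits - 1)) else val

def poly_widen_add(rs1_val, rs1_bits, rs2_val, rs2_bits, signed):
    opwid = max(rs1_bits, rs2_bits)
    opmask = (1 << opwid) - 1
    if signed:  # "W"-form: sign-extend each operand to the operation width
        s1 = sign_extend(rs1_val, rs1_bits) & opmask
        s2 = sign_extend(rs2_val, rs2_bits) & opmask
    else:       # plain form: zero-extend
        s1 = rs1_val & ((1 << rs1_bits) - 1)
        s2 = rs2_val & ((1 << rs2_bits) - 1)
    return (s1 + s2) & opmask
```

With rs1 = 0xFF at 8-bit (i.e. -1 signed) and rs2 = 1 at 16-bit, add
produces 0x0100, whereas addw produces 0x0000: the zero- versus
sign-extension of the lesser-width operand is the entire difference.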
1931
1932 ### addiw
1933
1934 Standard Scalar RV64I:
1935
1936 * RS1 @ xlen bits, truncated to 32-bit
1937 * immed @ 12 bits, sign-extended to 32-bit
1938 * add @ 32 bits
* RD @ xlen bits: sign-extend the 32-bit result to xlen.
1940
1941 Polymorphic variant:
1942
1943 * RS1 @ rs1 bits
1944 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
1945 * add @ max(rs1, 12) bits
1946 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
1947
1948 # Predication Element Zeroing
1949
1950 The introduction of zeroing on traditional vector predication is usually
1951 intended as an optimisation for lane-based microarchitectures with register
1952 renaming to be able to save power by avoiding a register read on elements
1953 that are passed through en-masse through the ALU. Simpler microarchitectures
1954 do not have this issue: they simply do not pass the element through to
1955 the ALU at all, and therefore do not store it back in the destination.
1956 More complex non-lane-based micro-architectures can, when zeroing is
1957 not set, use the predication bits to simply avoid sending element-based
1958 operations to the ALUs, entirely: thus, over the long term, potentially
1959 keeping all ALUs 100% occupied even when elements are predicated out.
1960
1961 SimpleV's design principle is not based on or influenced by
1962 microarchitectural design factors: it is a hardware-level API.
1963 Therefore, looking purely at whether zeroing is *useful* or not,
(whether fewer instructions are needed for certain scenarios),
1965 given that a case can be made for zeroing *and* non-zeroing, the
1966 decision was taken to add support for both.
1967
1968 ## Single-predication (based on destination register)
1969
1970 Zeroing on predication for arithmetic operations is taken from
1971 the destination register's predicate. i.e. the predication *and*
1972 zeroing settings to be applied to the whole operation come from the
1973 CSR Predication table entry for the destination register.
1974 Thus when zeroing is set on predication of a destination element,
1975 if the predication bit is clear, then the destination element is *set*
1976 to zero (twin-predication is slightly different, and will be covered
1977 next).
1978
1979 Thus the pseudo-code loop for a predicated arithmetic operation
1980 is modified to as follows:
1981
    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip masked-out elements entirely
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird  += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                i++
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { ird  += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
2004
2005 The optimisation to skip elements entirely is only possible for certain
2006 micro-architectures when zeroing is not set. However for lane-based
2007 micro-architectures this optimisation may not be practical, as it
2008 implies that elements end up in different "lanes". Under these
2009 circumstances it is perfectly fine to simply have the lanes
2010 "inactive" for predicated elements, even though it results in
2011 less than 100% ALU utilisation.
2012
2013 ## Twin-predication (based on source and destination register)
2014
Twin-predication is not that much different, except that
the source is zero-predicated independently of the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.
2019
When, with twin-predication, zeroing is set on the source and not
the destination, a *clear* source predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2026
2027 When zeroing is set on the destination and not the source, then just
2028 as with single-predicated operations, a zero is stored into the destination
2029 element (or target memory address for a STORE).
2030
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates determining where actual
data is written: wherever either the source predicate bit OR the
destination predicate bit is zero, a zero element will ultimately end up
in the destination register.
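A minimal sketch of this combined-zeroing effect (hypothetical helper name;
element indices are stepped in lockstep for simplicity, since zeroing
disables the predicate skip-ahead in any case):

```python
def twin_zeroing_mv(src, dest_old, ps, pd, vl):
    # zeroing enabled on BOTH the source and destination predicates
    out = list(dest_old)
    for i in range(vl):
        if (pd >> i) & 1:
            out[i] = src[i] if (ps >> i) & 1 else 0  # source zeroing
        else:
            out[i] = 0                               # destination zeroing
    return out
```

Actual source data survives only where both predicate bits are set; every
other destination element is zeroed.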
2035
2036 However: this may not necessarily be the case for all operations;
2037 implementors, particularly of custom instructions, clearly need to
2038 think through the implications in each and every case.
2039
2040 Here is pseudo-code for a twin zero-predicated operation:
2041
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                # scalar rd: loop ends once a value or a zero is written
                if ((pd & 1<<j) || zerodst)
                    break;
2065
Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

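The pseudo-code above can be cross-checked against a small executable model. The following Python sketch is illustrative only: register redirection (int_csr) is omitted, and both operands are assumed to be vectorised, so the skip-ahead and increment logic always applies to both indices.

```python
# Executable model of the twin zero-predicated MV pseudo-code above.
# Simplifications (not in the spec): no register redirection, and both
# source and destination are assumed vectorised.

def twin_pred_mv(ireg, rs, rd, VL, ps, pd, zerosrc, zerodst):
    i = j = 0
    while i < VL and j < VL:
        # without zeroing, predicated-out elements are skipped entirely
        if not zerosrc:
            while i < VL and not (ps >> i) & 1:
                i += 1
        if not zerodst:
            while j < VL and not (pd >> j) & 1:
                j += 1
        if i >= VL or j >= VL:
            break
        if (pd >> j) & 1:
            # source zeroing: a cleared source bit passes a zero through
            ireg[rd + j] = ireg[rs + i] if (ps >> i) & 1 else 0
        elif zerodst:
            # destination zeroing: a cleared dest bit stores a zero
            ireg[rd + j] = 0
        i += 1
        j += 1
```

With zeroing on both sides, the result matches the bitwise AND of the two predicates: only elements where both predicate bits are set receive source data.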
# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

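As an illustration of the save/restore requirement (a hypothetical sketch, with a plain dictionary standing in for the STATE CSR), a trap taken mid-loop must leave behind the element index so that re-execution resumes where it left off:

```python
# Sketch of re-entrant element processing: 'vstart' stands in for the
# element-progress field of the STATE CSR. All names are illustrative.

class ElementTrap(Exception):
    def __init__(self, index):
        self.index = index

def vector_add(state, dest, src1, src2, VL, faulting=()):
    while state["vstart"] < VL:
        i = state["vstart"]
        if i in faulting:
            raise ElementTrap(i)  # e.g. a page fault on element i
        dest[i] = src1[i] + src2[i]
        state["vstart"] += 1      # progress saved *before* the next element
    state["vstart"] = 0           # loop complete: the PC may now advance
```

After the trap handler resolves the fault, re-issuing the same instruction completes the remaining elements without re-executing the earlier ones.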
This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22  | 21:17 | 16  |
| ----- | ----- | ------ | ----- | --- |
| 0     | SubVL | VLdest | VLEN  | vlt |
| 1     | SubVL | VLdest | VLEN        |

If vlt is 0, VLEN is a 5 bit immediate value. If vlt is 1, it specifies
the scalar register from which VL is set by this VLIW instruction
group. VL, whether set from the register or the immediate, is then
modified (truncated) to be MIN(VL, MAXVL), and the result stored in the
scalar register specified in VLdest. If VLdest is zero, no store in the
regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

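The truncation rule can be expressed compactly. In this hypothetical Python sketch (function and parameter names are illustrative, not part of the spec), VL comes either from the 5-bit immediate or from a scalar register, is clamped to MAXVL, and is mirrored into VLdest when that register is nonzero:

```python
# Sketch of the vlt=0/1 VL-setting rule described above (names illustrative).

def set_vl(regs, vlt, vlen_field, vldest, MAXVL):
    # vlt=0: vlen_field is a 5-bit immediate; vlt=1: it names a scalar register
    requested = regs[vlen_field] if vlt else vlen_field
    VL = min(requested, MAXVL)   # truncate to MIN(VL, MAXVL)
    if vldest != 0:
        regs[vldest] = VL        # VLdest=x0: VL still set, no regfile write
    return VL
```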
When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN, which is 6 bits in length, and the same value stored in scalar
register VLdest (if that register is nonzero).

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the prefix block format is the full 16 bit format
  (1) or the compact, less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.

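Tying the notes together, a decoder for the 16-bit prefix might look as follows. This is a non-normative sketch: field names follow the table above, and the x2 scaling in the compact format follows the notes list.

```python
# Hypothetical decoder for the 16-bit VLIW prefix (field layout per the
# table above: vlset | 16xil | pplen | rplen | mode | 1111111).

def decode_vliw_prefix(halfword):
    assert (halfword & 0x7F) == 0x7F, "not a VLIW prefix"
    mode  = (halfword >> 7) & 0x1    # 1: full 16-bit entries, 0: compact 8-bit
    rplen = (halfword >> 8) & 0x3
    pplen = (halfword >> 10) & 0x3
    il    = (halfword >> 12) & 0x7   # nnn of the (80 + 16*nnn)-bit encoding
    vlset = (halfword >> 15) & 0x1
    scale = 1 if mode else 2         # 8-bit compact format doubles the counts
    return {
        "vl_block":   bool(vlset),
        "total_bits": 80 + 16 * il,
        "regcam":     rplen * scale,
        "predcam":    pplen * scale,
        "full_fmt":   bool(mode),
    }
```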
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four, 32-bit
opcodes: the CSRRW itself plus an LI, where the LI (or LUI) is needed to
set up the 32-bit value in the RS register of the CSRRW. To get 64-bit
data into the register in order to put it into the CSR(s), LOAD operations
from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64 bits (even more bits
if VL needs to be set to greater than 32). Bear in mind that in SV, both
MAXVL and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.

The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.

In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the start (and very short) length of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump and a normal function call may only be taken by letting
the VLIW end, returning to "normal" standard RV mode, using RVC, 32 bit
or P48-\*-Type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
strictly an "option", this is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have RV32G.
The critical instructions that are missing from standard RV32
are those for moving data between the double-width floating-point
registers and the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation
is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.

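The integer-register recommendation can be illustrated with a small sketch (a hypothetical helper, not part of the spec): three 16-bit elements packed into a 64-bit register, with the used element closest to the MSB sign-extended on write.

```python
# Sketch of the sign-extension recommendation for unused integer-register
# space: the used element closest to the MSB fills the unused top bits.

def pack_int_elements(elements, elwidth=16, regwidth=64):
    used = len(elements) * elwidth
    assert 0 < used <= regwidth
    value = 0
    for n, el in enumerate(elements):
        value |= (el & ((1 << elwidth) - 1)) << (n * elwidth)
    # if the topmost used element is negative, extend its sign bit upward
    if used < regwidth and (value >> (used - 1)) & 1:
        value |= ((1 << (regwidth - used)) - 1) << used
    return value
```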
---

info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

answer: they're not vectorised, so not a problem

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: update elwidth to be default / 8 / 16 / 32

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove RegCam and PredCam CSRs, just use SVprefix and
VLIW format.