1 # Simple-V (Parallelism Extension Proposal) Specification
* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
17 * The RISC-V Founders, without whom this all would not be possible.
21 # Summary and Background: Rationale
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
into a parallel one, in a step-by-step incremental fashion, without
adding any new opcodes, thus allowing the implementor to focus on
adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size and reduce context-switch latency.
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
where the implementor wishes to focus on certain aspects of SV, without
sinking unnecessary time and resources into the silicon, whilst also conforming
52 strictly with the API. A good area to punt to software would be the
53 polymorphic element width capability for example.
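As a concrete illustration (a sketch only, with invented names; nothing here is normative), the minimum-compliance route can be modelled as a scalar executor that traps to a software loop whenever a tagged register is encountered:

```python
# Hypothetical sketch of the minimum-compliant (trap-and-emulate) route:
# hardware only detects that an operand register is tagged and traps; the
# software handler then performs the element loop in its place.
VL = 4              # Vector Length CSR
tagged = {3, 8, 16} # registers tagged as "vectorised"
regs = [0] * 32     # integer register file

def scalar_add(rd, rs1, rs2):
    regs[rd] = regs[rs1] + regs[rs2]

def trap_handler(rd, rs1, rs2):
    # the loop that fuller implementations would perform in hardware
    for i in range(VL):
        scalar_add(rd + i, rs1 + i, rs2 + i)

def execute_add(rd, rs1, rs2):
    if {rd, rs1, rs2} & tagged:
        trap_handler(rd, rs1, rs2)  # slow, but compliant with the API
    else:
        scalar_add(rd, rs1, rs2)    # ordinary scalar execution
```

The same dispatch point is where a hardware implementation would instead activate its macro-unrolling loop.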
55 Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.
59 To emphasise that clearly: Simple-V (SV) is *not*:
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
76 The principle of SV is as follows:
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
83 * A "Vector Length" CSR is set, indicating the span of any future
84 "parallel" operations.
85 * If any operation (a **scalar** standard RV opcode) uses a register
86 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
87 is activated, of length VL, that effectively issues **multiple**
88 identical instructions using contiguous sequentially-incrementing
89 register numbers, based on the "tags".
90 * **Whether they be executed sequentially or in parallel or a
91 mixture of both or punted to software-emulation in a trap handler
92 is entirely up to the implementor**.
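The principle can be sketched behaviourally (an illustrative model with invented names, using integer add as the stand-in operation; encodings and prefixes are omitted):

```python
# Illustrative model of the bullet points above: the PC does not advance
# while a "tagged" instruction is re-issued VL times with contiguously
# incrementing register numbers; an untagged instruction issues once.
def run(program, tagged, VL, regs):
    pc = 0
    while pc < len(program):
        rd, rs1, rs2 = program[pc]          # a standard scalar "add"
        n = VL if {rd, rs1, rs2} & tagged else 1
        for i in range(n):                  # hardware macro-unrolling loop
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
        pc += 1                             # PC moves on after the last element
    return regs
```

Whether those `n` issues happen sequentially, in parallel, or in a mixture is exactly what the API leaves to the implementor.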
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
97 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISC-V instructions.
100 # CSRs <a name="csrs"></a>
102 * An optional "reshaping" CSR key-value table which remaps from a 1D
103 linear shape to 2D or 3D, including full transposition.
There are also five additional User-mode CSRs:
107 * uMVL (the Maximum Vector Length)
108 * uVL (which has different characteristics from standard CSRs)
109 * uSUBVL (effectively a kind of SIMD)
110 * uEPCVLIW (a copy of the sub-execution Program Counter, that is relative
111 to the start of the current VLIW Group, set on a trap).
112 * uSTATE (useful for saving and restoring during context switch,
113 and for providing fast transitions)
115 There are also five additional CSRs for Supervisor-Mode:
123 And likewise for M-Mode:
131 Both Supervisor and M-Mode have their own CSR registers, independent
132 of the other privilege levels, in order to make it easier to use
133 Vectorisation in each level without affecting other privilege levels.
135 The access pattern for these groups of CSRs in each mode follows the
136 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
138 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently
  identical to changing the S-Mode CSRs. Accessing and changing the U-Mode
  CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
  is transparently identical to changing the U-Mode CSRs.
146 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
147 M-Mode MVL, the M-Mode STATE and so on that influences the processor
148 behaviour. Likewise for S-Mode, and likewise for U-Mode.
150 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
151 up, for context-switching to take place, and, on return back to the higher
152 privileged mode, the CSRs of that mode will be exactly as they were.
153 Thus, it becomes possible for example to set up CSRs suited best to aiding
154 and assisting low-latency fast context-switching *once and only once*
155 (for example at boot time), without the need for re-initialising the
156 CSRs needed to do so.
Another interesting side effect of separate S-Mode CSRs is that Vectorised
159 saving of the entire register file to the stack is a single instruction
160 (accidental provision of LOAD-MULTI semantics). It can even be predicated,
161 which opens up some very interesting possibilities.
163 The xEPCVLIW CSRs must be treated exactly like their corresponding xepc
164 equivalents. See VLIW section for details.
166 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
168 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
169 is variable length and may be dynamically set. MVL is
170 however limited to the regfile bitwidth XLEN (1-32 for RV32,
171 1-64 for RV64 and so on).
173 The reason for setting this limit is so that predication registers, when
174 marked as such, may fit into a single register as opposed to fanning out
175 over several registers. This keeps the implementation a little simpler.
177 The other important factor to note is that the actual MVL is internally
178 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
179 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
180 return an exception. This is expressed more clearly in the "pseudocode"
181 section, where there are subtle differences between CSRRW and CSRRWI.
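A small sketch of the offset-by-one storage rule (helper names are invented, not part of the specification):

```python
# MVL in the range 1..XLEN is stored offset by one, so that it fits in
# 6 bits on RV64 (stored 0..63 represents 1..64); zero cannot be encoded,
# which is why attempts to set MVL to zero must raise an exception.
XLEN = 64  # RV64

def mvl_to_field(mvl):
    if mvl == 0 or mvl > XLEN:
        raise ValueError("illegal MVL")
    return mvl - 1          # 6-bit stored field

def field_to_mvl(field):
    return field + 1        # recover the architectural value
```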
183 ## Vector Length (VL) <a name="vl" />
185 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):
188 VL = rd = MIN(vlen, MVL)
190 where 1 <= MVL <= XLEN
However just like MVL it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
195 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
196 to switch the entire bank of registers using a single instruction (see
197 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN.
201 The second change is that when VSETVL is requested to be stored
into x0, it is *ignored* silently (VSETVL x0, x5).
204 The third and most important change is that, within the limits set by
205 MVL, the value passed in **must** be set in VL (and in the
206 destination register).
This has implications for the microarchitecture, as VL is required to be
209 set (limits from MVL notwithstanding) to the actual value
210 requested. RVV has the option to set VL to an arbitrary value that suits
211 the conditions and the micro-architecture: SV does *not* permit this.
213 The reason is so that if SV is to be used for a context-switch or as a
214 substitute for LOAD/STORE-Multiple, the operation can be done with only
215 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
216 single LD/ST operation). If VL does *not* get set to the register file
217 length when VSETVL is called, then a software-loop would be needed.
218 To avoid this need, VL *must* be set to exactly what is requested
219 (limits notwithstanding).
221 Therefore, in turn, unlike RVV, implementors *must* provide
222 pseudo-parallelism (using sequential loops in hardware) if actual
223 hardware-parallelism in the ALUs is not deployed. A hybrid is also
224 permitted (as used in Broadcom's VideoCore-IV) however this must be
225 *entirely* transparent to the ISA.
227 The fourth change is that VSETVL is implemented as a CSR, where the
228 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
229 the *new* value in the destination register, **not** the old value.
230 Where context-load/save is to be implemented in the usual fashion
231 by using a single CSRRW instruction to obtain the old value, the
232 *secondary* CSR must be used (SVSTATE). This CSR behaves
233 exactly as standard CSRs, and contains more than just VL.
235 One interesting side-effect of using CSRRWI to set VL is that this
236 may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
238 is limited to 0-31 (representing VL=1-32).
240 Note that when VL is set to 1, all parallel operations cease: the
241 hardware loop is reduced to a single element: scalar operations.
243 ## SUBVL - Sub Vector Length
This is a "group by quantity" that effectively asks each iteration of the
hardware loop to load SUBVL elements of width elwidth at a time. Effectively,
SUBVL is like a SIMD multiplier: instead of just 1 operation issued,
SUBVL operations are issued.
247 Another way to view SUBVL is that each element in the VL length vector is now SUBVL times elwidth bits in length.
The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z for example may be loaded, multiplied, then stored, per VL
element iteration, rather than having to set VL to three times larger.
251 Legal values are 1, 2, 3 and 4, and the STATE CSR must hold the 2 bit values 0b00 thru 0b11.
253 Setting this CSR to 0 must raise an exception. Setting it to a value
254 greater than 4 likewise.
256 The main effect of SUBVL is that predication bits are applied per **group**,
257 rather than by individual element.
259 This saves a not insignificant number of instructions when handling 3D
260 vectors, as otherwise a much longer predicate mask would have to be set
261 up with regularly-repeated bit patterns.
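The grouping effect can be sketched as follows (function name invented): predicate bit i gates all SUBVL sub-elements of group i, so a mask for 3D vectors needs only VL bits rather than VL times SUBVL bits:

```python
# Per-group predication: one predicate bit enables or disables an entire
# group of SUBVL sub-elements, avoiding regularly-repeated mask patterns.
def active_subelements(vl, subvl, predicate):
    """Return indices of sub-elements enabled by a per-group predicate."""
    active = []
    for i in range(vl):                 # one predicate bit per group
        if predicate & (1 << i):
            for s in range(subvl):      # the whole group is enabled
                active.append(i * subvl + s)
    return active
```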
263 See SUBVL Pseudocode illustration for details.
## STATE <a name="state" />

This is a standard CSR that contains sufficient information for a
268 full context save/restore. It contains (and permits setting of):
272 * the destination element offset of the current parallel instruction
274 * and, for twin-predication, the source element offset as well.
276 * the subvector destination element offset of the current parallel instruction
278 * and, for twin-predication, the subvector source element offset as well.
Interestingly STATE may hypothetically also be used to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs
(and the subvector offsets as well).
285 Setting destoffs and srcoffs is realistically intended for saving state
286 so that exceptions (page faults in particular) may be serviced and the
287 hardware-loop that was being executed at the time of the trap, from
288 user-mode (or Supervisor-mode), may be returned to and continued from exactly
where it left off. The reason why this works is that the User-Mode
STATE will not be changed (or used) in M-Mode or S-Mode
(and is entirely why M-Mode and S-Mode have their own STATE CSRs).
293 The format of the STATE CSR is as follows:
| (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
296 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
297 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
299 When setting this CSR, the following characteristics will be enforced:
301 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
302 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values, so no truncation is needed
304 * **srcoffs** will be truncated to be within the range 0 to VL-1
305 * **destoffs** will be truncated to be within the range 0 to VL-1
306 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
307 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
309 NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.
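A pack/unpack sketch following the table above. One assumption in this sketch: SUBVL, like VL and MAXVL, is stored offset by one, so that the legal values 1-4 map onto 0b00 thru 0b11; the four offsets are stored directly.

```python
# STATE field packing per the bit layout table (offsets stored directly,
# MAXVL/VL/SUBVL stored offset by one).
def pack_state(maxvl, vl, subvl, srcoffs, destoffs, ssvoffs, dsvoffs):
    return ((maxvl - 1)            # bits 5..0
            | (vl - 1) << 6        # bits 11..6
            | srcoffs << 12        # bits 17..12
            | destoffs << 18       # bits 23..18
            | (subvl - 1) << 24    # bits 26..24
            | ssvoffs << 27        # bits 28..27
            | dsvoffs << 29)       # bits 30..29

def unpack_state(state):
    return ((state & 0x3f) + 1,          # maxvl
            ((state >> 6) & 0x3f) + 1,   # vl
            ((state >> 24) & 0x7) + 1,   # subvl
            (state >> 12) & 0x3f,        # srcoffs
            (state >> 18) & 0x3f,        # destoffs
            (state >> 27) & 0x3,         # ssvoffs
            (state >> 29) & 0x3)         # dsvoffs
```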
311 ## MVL and VL Pseudocode
313 The pseudo-code for get and set of VL and MVL use the following internal
314 functions as follows:
set_mvl_csr(value, rd):
    regs[rd] = MVL
    MVL = MIN(value, MVL)

set_vl_csr(value, rd):
    VL = MIN(value, MVL)
    regs[rd] = VL # yes returning the new value NOT the old CSR
    return VL
332 Note that where setting MVL behaves as a normal CSR (returns the old
333 value), unlike standard CSR behaviour, setting VL will return the **new**
334 value of VL **not** the old one.
336 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
337 maximise the effectiveness, an immediate of 0 is used to set VL=1,
338 an immediate of 1 is used to set VL=2 and so on:
340 CSRRWI_Set_MVL(value):
341 set_mvl_csr(value+1, x0)
343 CSRRWI_Set_VL(value):
344 set_vl_csr(value+1, x0)
346 However for CSRRW the following pseudocode is used for MVL and VL,
347 where setting the value to zero will cause an exception to be raised.
348 The reason is that if VL or MVL are set to zero, the STATE CSR is
349 not capable of returning that value.
CSRRW_Set_MVL(rs1, rd):
    value = regs[rs1]
    if value == 0 or value > XLEN:
        raise Exception # zero cannot be represented in STATE
    set_mvl_csr(value, rd)

CSRRW_Set_VL(rs1, rd):
    value = regs[rs1]
    if value == 0 or value > XLEN:
        raise Exception # zero cannot be represented in STATE
    set_vl_csr(value, rd)
363 In this way, when CSRRW is utilised with a loop variable, the value
364 that goes into VL (and into the destination register) may be used
365 in an instruction-minimal fashion:
367 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
368 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
369 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
370 j zerotest # in case loop counter a0 already 0
loop:
    CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
373 ld a3, a1 # load 4 registers a3-6 from x
374 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
375 ld a7, a2 # load 4 registers a7-10 from y
376 add a1, a1, t1 # increment pointer to x by vl*8
377 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
378 sub a0, a0, t0 # n -= vl (t0)
379 st a7, a2 # store 4 registers a7-10 to y
380 add a2, a2, t1 # increment pointer to y by vl*8
zerotest:
    bnez a0, loop # repeat if n != 0
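The loop above is the classic strip-mined DAXPY; its behaviour can be modelled in a few lines of Python (a behavioural sketch only, with MVL as a parameter):

```python
# Behavioural model of the strip-mined loop: each iteration sets
# VL = min(MVL, n), one "vectorised" fmadd covers VL elements, and the
# element count n is decremented by VL, just as the assembler does.
def daxpy(a, x, y, mvl=4):
    n = len(x)
    i = 0
    while n > 0:                    # bnez a0, loop
        vl = min(mvl, n)            # CSRRW VL, t0, a0
        for j in range(i, i + vl):  # one vectorised fmadd
            y[j] = a * x[j] + y[j]
        i += vl
        n -= vl                     # sub a0, a0, t0
    return y
```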
384 With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
CSRRW_Set_SV_STATE(rs1, rd):
    value = regs[rs1]
    get_state_csr(rd) # standard CSR behaviour: return the *old* STATE
    MVL = set_mvl_csr(value[11:6]+1)
    VL = set_vl_csr(value[5:0]+1)
    destoffs = value[23:18]
    srcoffs = value[17:12]

get_state_csr(rd):
    regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
               (destoffs)<<18
    return regs[rd]
401 In both cases, whilst CSR read of VL and MVL return the exact values
402 of VL and MVL respectively, reading and writing the STATE CSR returns
403 those values **minus one**. This is absolutely critical to implement
404 if the STATE CSR is to be used for fast context-switching.
406 ## Register key-value (CAM) table <a name="regcsrtable" />
408 *NOTE: in prior versions of SV, this table used to be writable and
409 accessible via CSRs. It is now stored in the VLIW instruction format,
410 and entries may be overridden by the SVPrefix format*
412 The purpose of the Register table is four-fold:
414 * To mark integer and floating-point registers as requiring "redirection"
415 if it is ever used as a source or destination in any given operation.
416 This involves a level of indirection through a 5-to-7-bit lookup table,
such that **unmodified** operands with 5 bits (3 for Compressed) may
418 access up to **128** registers.
419 * To indicate whether, after redirection through the lookup table, the
420 register is a vector (or remains a scalar).
421 * To over-ride the implicit or explicit bitwidth that the operation would
422 normally give the register.
| RegCAM | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | ------- | -------- | --- | ------ | ------ |
| 0      | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | isvec15 | regidx15 | i/f | vew15  | regkey |
| RegCAM | 7   | (6..5) | (4..0) |
| ------ | --- | ------ | ------ |
| 0      | i/f | vew0   | regnum |
i/f is set to "1" to indicate that the redirection/tag entry is to be applied
to integer registers; 0 indicates that it is relevant to floating-point
registers.
443 The 8 bit format is used for a much more compact expression. "isvec"
444 is implicit and, similar to [[sv-prefix-proposal]], the target vector
445 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
446 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
447 optionally set "scalar" mode.
Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
450 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
451 get the actual (7 bit) register number to use, there is not enough space
452 in the 8 bit format so "regnum<<2" is required.
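Decoding the two formats can be sketched as follows (field layout taken from the tables above; the function names and dictionary keys are invented for illustration):

```python
# Field extraction for the two register-tag entry formats.
def decode_tag_16bit(entry):
    # 16-bit entry: regkey (4..0), vew (6..5), i/f (7), regidx (14..8),
    # isvec (15): the target register is named explicitly.
    return dict(regkey=entry & 0x1f,
                vew=(entry >> 5) & 0x3,
                is_int=(entry >> 7) & 0x1,
                regidx=(entry >> 8) & 0x7f,
                isvec=bool((entry >> 15) & 0x1))

def decode_tag_8bit(entry):
    # 8-bit entry: regnum (4..0), vew (6..5), i/f (7); "isvec" is implicit
    # and the target vector is implicitly regnum<<2.
    regnum = entry & 0x1f
    return dict(regkey=regnum,
                vew=(entry >> 5) & 0x3,
                is_int=(entry >> 7) & 0x1,
                regidx=regnum << 2,
                isvec=True)
```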
454 vew has the following meanings, indicating that the instruction's
455 operand size is "over-ridden" in a polymorphic fashion:
| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
464 As the above table is a CAM (key-value store) it may be appropriate
465 (faster, implementation-wise) to expand it as follows:
467 struct vectorised fp_vec[32], int_vec[32];
469 for (i = 0; i < 16; i++) // 16 CSRs?
470 tb = int_vec if CSRvec[i].type == 0 else fp_vec
471 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
472 tb[idx].elwidth = CSRvec[i].elwidth
473 tb[idx].regidx = CSRvec[i].regidx // indirection
474 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
475 tb[idx].packed = CSRvec[i].packed // SIMD or not
479 ## Predication Table <a name="predication_csr_table"></a>
481 *NOTE: in prior versions of SV, this table used to be writable and
482 accessible via CSRs. It is now stored in the VLIW instruction format,
483 and entries may be overridden by the SVPrefix format*
485 The Predication Table is a key-value store indicating whether, if a
486 given destination register (integer or floating-point) is referred to
487 in an instruction, it is to be predicated. Like the Register table, it
488 is an indirect lookup that allows the RV opcodes to not need modification.
490 It is particularly important to note
491 that the *actual* register used can be *different* from the one that is
492 in the instruction, due to the redirection through the lookup table.
* regidx is the register that, in combination with the
  i/f flag, if that integer or floating-point register is referred to
  in a (standard RV) instruction,
  results in the lookup table being referenced to find the predication
  mask to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
  predication mask.
501 * inv indicates that the predication mask bits are to be inverted
prior to use *without* actually modifying the contents of the
register from which those bits originated.
504 * zeroing is either 1 or 0, and if set to 1, the operation must
505 place zeros in any element position where the predication mask is
506 set to zero. If zeroing is set to 0, unpredicated elements *must*
507 be left alone. Some microarchitectures may choose to interpret
508 this as skipping the operation entirely. Others which wish to
509 stick more closely to a SIMD architecture may choose instead to
510 interpret unpredicated elements as an internal "copy element"
511 operation (which would be necessary in SIMD microarchitectures
that perform register-renaming).
| PrCSR | (15..11) | 10     | 9     | 8   | (7..1) | 0    |
| ----- | -------- | ------ | ----- | --- | ------ | ---- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx | rsvd |
| 1     | predkey  | zero1  | inv1  | i/f | regidx | rsvd |
| ..    | predkey  | .....  | ....  | i/f | ...... | rsvd |
| 15    | predkey  | zero15 | inv15 | i/f | regidx | rsvd |
| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | ----- | ---- | --- | ------ |
| 0     | zero0 | inv0 | i/f | regnum |
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
the 16 bit format.
The 16 bit Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    struct pred {
        bool zero;    // zeroing
        bool inv;     // invert the predicate bits
        bool enabled; // predication is active for this register
        int predidx;  // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)
550 for (i = 0; i < 16; i++)
551 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
552 idx = CSRpred[i].regidx
553 tb[idx].zero = CSRpred[i].zero
554 tb[idx].inv = CSRpred[i].inv
555 tb[idx].predidx = CSRpred[i].predidx
556 tb[idx].enabled = true
558 So when an operation is to be predicated, it is the internal state that
559 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
560 pseudo-code for operations is given, where p is the explicit (direct)
561 reference to the predication register to be used:
for (int i=0; i<vl; ++i)
    if ([!]preg[p][i])
        (d ? vreg[rd][i] : sreg[rd]) =
        iop(s1 ? vreg[rs1][i] : sreg[rs1],
            s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store, which is used
as follows:

    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
            iop(s1 ? regfile[rs1+i] : regfile[rs1],
                s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
589 * d, s1 and s2 are booleans indicating whether destination,
590 source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must also go through
  register-level redirection (from the Register table) if they are
  marked as vectorised.
596 If written as a function, obtaining the predication mask (and whether
597 zeroing takes place) may be done as follows:
def get_pred_val(bool is_fp_op, int reg):
    tb = fp_reg if is_fp_op else int_reg
    if (!tb[reg].enabled):
        return ~0x0, False // all enabled; no zeroing
    tb = fp_pred if is_fp_op else int_pred
    if (!tb[reg].enabled):
        return ~0x0, False // all enabled; no zeroing
    predidx = tb[reg].predidx // redirection occurs HERE
    predicate = intreg[predidx] // actual predicate HERE
    if (tb[reg].inv):
        predicate = ~predicate // invert ALL bits
    return predicate, tb[reg].zero
Note here, critically, that **only** if the register is marked
in its **register** table entry as being "active" does the testing
proceed further to check if the **predicate** table entry is
also active.
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in these specific circumstances
the requirement for there to be an active *register* entry
is removed.
622 ## REMAP CSR <a name="remap" />
624 (Note: both the REMAP and SHAPE sections are best read after the
625 rest of the document has been read)
627 There is one 32-bit CSR which may be used to indicate which registers,
628 if used in any operation, must be "reshaped" (re-mapped) from a linear
629 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
630 access to elements within a register.
632 The 32-bit REMAP CSR may reshape up to 3 registers:
634 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
635 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
636 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
A value of zero refers to x0; since reshaping x0 is clearly pointless,
zero is used to indicate "disabled".
642 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
643 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
It is anticipated that these specialist CSRs will not be used very often.
646 Unlike the CSR Register and Predication tables, the REMAP CSRs use
647 the full 7-bit regidx so that they can be set once and left alone,
648 whilst the CSR Register entries pointing to them are disabled, instead.
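Field extraction for this layout can be sketched as follows (function name invented; a regidx of zero, i.e. x0, indicates a disabled entry, and shape value 0b11 is reserved):

```python
# Decode the 32-bit REMAP CSR into (regidx, shape) pairs, one per slot.
def decode_remap(csr):
    entries = []
    for i in range(3):
        regidx = (csr >> (8 * i)) & 0x7f     # bits 6..0, 14..8, 22..16
        shape = (csr >> (24 + 2 * i)) & 0x3  # bits 25..24, 27..26, 29..28
        if regidx != 0:                      # x0 means "disabled"
            assert shape != 0b11             # 0b11 is reserved
            entries.append((regidx, shape))
    return entries
```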
650 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
652 (Note: both the REMAP and SHAPE sections are best read after the
653 rest of the document has been read)
655 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
656 which have the same format. When each SHAPE CSR is set entirely to zeros,
657 remapping is disabled: the register's elements are a linear (1D) vector.
659 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
660 | ------- | -- | ------- | -- | ------- | -- | ------- |
661 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
663 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
664 is added to the element index during the loop calculation.
666 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
667 that the array dimensionality for that dimension is 1. A value of xdimsz=2
668 would indicate that in the first dimension there are 3 elements in the
669 array. The format of the array is therefore as follows:
671 array[xdim+1][ydim+1][zdim+1]
673 However whilst illustrative of the dimensionality, that does not take the
674 "permute" setting into account. "permute" may be any one of six values
675 (0-5, with values of 6 and 7 being reserved, and not legal). The table
676 below shows how the permutation dimensionality order works:
678 | permute | order | array format |
679 | ------- | ----- | ------------------------ |
680 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
681 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
682 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
683 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
684 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
685 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
687 In other words, the "permute" option changes the order in which
688 nested for-loops over the array would be done. The algorithm below
689 shows this more clearly, and may be executed as a python program:
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0] # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
711 Here, it is assumed that this algorithm be run within all pseudo-code
712 throughout this document where a (parallelism) for-loop would normally
713 run from 0 to VL-1 to refer to contiguous register
714 elements; instead, where REMAP indicates to do so, the element index
715 is run through the above algorithm to work out the **actual** element
index, instead. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector) { id += 1; }
            if (int_vec[rs1].isvector) { irs1 += 1; }
            if (int_vec[rs2].isvector) { irs2 += 1; }
733 By changing remappings, 2D matrices may be transposed "in-place" for one
734 operation, followed by setting a different permutation order without
735 having to move the values in the registers to or from memory. Also,
736 the reason for having REMAP separate from the three SHAPE CSRs is so
737 that in a chain of matrix multiplications and additions, for example,
738 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
739 changed to target different registers.
743 * Over-running the register file clearly has to be detected and
744 an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
748 * If permute option 000 is utilised, the actual order of the
749 reindexing does not change!
750 * If two or more dimensions are set to zero, the actual order does not change!
751 * The above algorithm is pseudo-code **only**. Actual implementations
752 will need to take into account the fact that the element for-looping
753 must be **re-entrant**, due to the possibility of exceptions occurring.
754 See MSTATE CSR, which records the current element index.
755 * Twin-predicated operations require **two** separate and distinct
756 element offsets. The above pseudo-code algorithm will be applied
757 separately and independently to each, should each of the two
758 operands be remapped. *This even includes C.LDSP* and other operations
759 in that category, where in that case it will be the **offset** that is
760 remapped (see Compressed Stack LOAD/STORE section).
761 * Offset is especially useful, on its own, for accessing elements
762 within the middle of a register. Without offsets, it is necessary
763 to either use a predicated MV, skipping the first elements, or
764 performing a LOAD/STORE cycle to memory.
765 With offsets, the data does not have to be moved.
766 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
767 less than MVL is **perfectly legal**, albeit very obscure. It permits
768 entries to be regularly presented to operands **more than once**, thus
769 allowing the same underlying registers to act as an accumulator of
770 multiple vector or matrix operations, for example.
Clearly here some considerable care needs to be taken, as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.
# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.
# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however
xBitManip becomes a critical dependency for efficient manipulation of
predication masks (as a bit-field). Despite the removal of all operations
(with the exception of CLIP and VSELECT.X),
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained.
Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.
With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions: whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  CSRs are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI and AUIPC are not suitable for parallelising, so
  are left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions is therefore guaranteed
  not to be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  paralleliseable anyway.
All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.
Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered
in the Appendix.
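For experimentation, the pseudo-code above translates almost directly
into executable Python. This is a behavioural sketch only:
register-redirection via regidx is omitted, and `int_vec` / `predval`
stand in for the CSR tables and the predicate lookup:

```python
# Executable rendering of the ADD pseudo-code (simplified exactly as
# the text notes: no elwidth, zeroing or reshaping).
def op_add(ireg, int_vec, predval, VL, rd, rs1, rs2):
    id = irs1 = irs2 = 0
    rd_v  = int_vec[rd]["isvector"]
    rs1_v = int_vec[rs1]["isvector"]
    rs2_v = int_vec[rs2]["isvector"]
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not rd_v:
                break  # scalar destination: a single result only
        if rd_v:  id   += 1
        if rs1_v: irs1 += 1
        if rs2_v: irs2 += 1
    return ireg

ireg = [0] * 16
ireg[4:7]  = [1, 2, 3]      # vector starting at x4
ireg[8:11] = [10, 20, 30]   # vector starting at x8
iv = {r: {"isvector": True} for r in (4, 8, 12)}
op_add(ireg, iv, 0b111, 3, 12, 4, 8)
```

With all three registers tagged as vectors and VL=3, the result vector
lands in x12..x14.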
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are
indexed by i.
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector) { id += 1; }
          if (int_vec[rs1].isvector) { irs1 += 1; }
          if (int_vec[rs2].isvector) { irs2 += 1; }

NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
elwidth handling etc. are all left out.
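The SUBVL indexing rule can be checked with a small Python sketch
(illustrative only): element offsets advance by one per (i, s) pair,
i.e. by (i * SUBVL + s), while the predicate bit depends only on i:

```python
# Enumerate (element offset, predicate bit index) pairs for a SUBVL
# loop: SUBVL consecutive elements share one predicate bit.
def subvl_indices(VL, SUBVL):
    return [(i * SUBVL + s, i)   # (element offset, predicate bit)
            for i in range(VL) for s in range(SUBVL)]

pairs = subvl_indices(2, 3)
```

With VL=2 and SUBVL=3, six elements are processed but only two
predicate bits are consulted.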
## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV, FCVT, and
LOAD/STORE depending on CSR configurations for bitwidth and predication.
**Everything** becomes parallelised. *This includes Compressed
instructions* as well as any future instructions and Custom Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.
## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
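A behavioural Python sketch of the two policies described above
(illustrative only, and following this paragraph's set-only reading of
the non-zeroing case): with zeroing, masked-out lanes are actively
cleared; without it, destination bits are only ever set, so
pre-existing bits survive:

```python
# dest: incoming destination predicate (deliberately NOT pre-cleared);
# ps: source predicate mask; results: per-lane compare outcomes.
def vector_cmp(dest, ps, results, VL, zeroing):
    for i in range(VL):
        if ps & (1 << i):
            if results[i]:
                dest |= 1 << i       # active lane: bits are only set
        elif zeroing:
            dest &= ~(1 << i)        # masked-out lane forced to zero
    return dest

stale   = 0b1010                     # leftover bits from earlier use
no_zero = vector_cmp(stale, 0b0101, [True] * 4, 4, zeroing=False)
zeroed  = vector_cmp(stale, 0b0101, [True] * 4, 4, zeroing=True)
```

Without zeroing the stale bits 1 and 3 survive alongside the new
results; with zeroing the masked-out lanes are cleared.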
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                        s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps # all active compares succeeded
            goto branch
    else
        preg[rd] = result # store in destination
* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of the trap.

TODO: predication now taken from src2. Also, the branch goes ahead
if all compares are successful.
Note also that where normally predication requires that there must
also be a CSR register entry for the register being used, in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see the Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.
### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly, no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target; i.e. it is not actually a Branch at all: it is a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine, because the
standard RVF compare can always be followed up with an integer BEQ or a
BNE (or a compressed comparison to zero or non-zero), in predication
terms that omission has more of an impact. To deal with this, SV's
predication has had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.
### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated, based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0, x0, offset, the optional
target in which to store the results of the comparisons is taken from
the CSR predication table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is initially counterintuitive. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely
obtain a predication target it is the only sensible option that does not
involve additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.
## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.
The pseudo-code pattern for twin-predicated operations is as follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).
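The twin-predication pattern above can be exercised as a Python model
(behavioural sketch only; CSR register-redirection is omitted). With a
scalar source and a vector destination it behaves as VSPLAT:

```python
# i tracks the source element, j the destination element; each skips
# over the zero bits of its own predicate when that operand is a vector.
def op_mv(reg, rd, rs, rd_vec, rs_vec, pd, ps, VL):
    i = j = 0
    while i < VL and j < VL:
        if rs_vec:
            while not (ps & (1 << i)): i += 1
        if rd_vec:
            while not (pd & (1 << j)): j += 1
        reg[rd + j] = reg[rs + i]
        if rs_vec: i += 1
        if rd_vec: j += 1
        else: break                 # scalar destination: stop after one
    return reg

# VSPLAT: scalar x3 copied into x8..x11 (VL=4, all predicate bits set)
reg = list(range(32))
op_mv(reg, 8, 3, True, False, 0b1111, 0b1111, 4)
```

Note that, exactly as in the pseudo-code, the inner predicate-skipping
loops assume enough set bits remain below VL; an implementation must
bound them.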
### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

    15  12 | 11   7 | 6  2 | 1  0 |
    funct4 |   rd   |  rs  |  op  |
       4   |   5    |  5   |  2   |
    C.MV   |  dest  |  src |  C0  |
A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
There are several different instructions from RVV that are covered by
this one instruction:

| src    | dest   | predication | op             |
|--------|--------|-------------|----------------|
| scalar | vector | none        | VSPLAT         |
| scalar | vector | destination | sparse VSPLAT  |
| scalar | vector | 1-bit dest  | VINSERT        |
| vector | scalar | 1-bit? src  | VEXTRACT       |
| vector | vector | none        | VCOPY          |
| vector | vector | src         | Vector Gather  |
| vector | vector | dest        | Vector Scatter |
| vector | vector | src & dest  | Gather/Scatter |
| vector | vector | src == dest | sparse VCOPY   |
Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not
implemented, MV may be used, but that is a pseudo-operation mapping to
addi rd, rs, 0. Note that the behaviour is **different** from C.MV,
because with addi the predication mask to use is taken **only** from rd
and is applied against all elements: rd[i] = rs[i].
### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However, when elwidth is not set to default, the instruction is
implicitly and automatically converted to a (vectorised) floating-point
type-conversion operation of the appropriate size, covering the source
and destination element widths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions.)
### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However, when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable, i.e. if packed SIMD is requested) instead.

For example, FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element, and the higher 32 bits are *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two, rs1's element width is to be taken as 32.

Similar rules apply to the destination register.
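The packed-SIMD FCVT.S.L example can be modelled numerically in Python
(a behavioural sketch only; little-endian register packing is assumed):

```python
import struct

# A 64-bit source register, reinterpreted as two packed 32-bit signed
# integers (elwidth = default/2), each converted to floating-point.
def fcvt_s_l_packed(reg64):
    lo = reg64 & 0xFFFFFFFF
    hi = (reg64 >> 32) & 0xFFFFFFFF
    # reinterpret each 32-bit half as a signed integer, then convert
    lo = struct.unpack("<i", struct.pack("<I", lo))[0]
    hi = struct.unpack("<i", struct.pack("<I", hi))[0]
    return [float(lo), float(hi)]

elements = fcvt_s_l_packed((7 << 32) | 3)
```

A register holding the packed pair (3, 7) yields the two float
elements 3.0 and 7.0.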
## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV; however, just as with C.MV, it is important to note
that dual-predication is possible.
In vectorised architectures there are usually at least two different
modes for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also on whether the element width is over-ridden
  (see the special element width section).
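The srcbase decision above can be isolated into a small Python sketch
(illustrative only) that computes the effective addresses for both
modes:

```python
# A vector src register supplies per-element indirection addresses;
# a scalar src gives sequentially-incrementing ("unit stride")
# addresses stepped by XLEN/8 bytes.
def ld_addresses(ireg, rs, rs_is_vector, VL, imm_offs, xlen=64):
    addrs = []
    for i in range(VL):
        if rs_is_vector:
            srcbase = ireg[rs + i]                # indirect mode
        else:
            srcbase = ireg[rs] + i * (xlen // 8)  # unit stride mode
        addrs.append(srcbase + imm_offs)
    return addrs

unit = ld_addresses({5: 0x1000}, 5, False, 3, 8)
ind  = ld_addresses({5: 0x100, 6: 0x200, 7: 0x300}, 5, True, 3, 0)
```

The scalar case walks 0x1008, 0x1010, 0x1018; the vector case reads
the three addresses held in x5..x7 directly.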
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest
twin-predicated, where it is implicit in C.LWSP/FLWSP etc. that x2 is the
source register. It is therefore possible to use predicated C.LWSP to
efficiently pop registers off the stack (by predicating x2 as the source),
cherry-picking which registers to store to (by predicating the
destination). Likewise for C.SWSP. In this way, LOAD/STORE-Multiple is
efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.
## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD, and on the dest for STORE, switches mode from
"Unit Stride" to "Multi-indirection", respectively.
# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.
However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
causes access to one specific register to "spill over" into the following
parts of the register file in a sequential fashion. So a much more
accurate way to reflect this would be:

    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software /
pseudo-code representation.

It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt to access beyond the "real" register
bytes is ever made.
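The overspill behaviour can be modelled in Python by treating the
register file as one flat byte array (an illustrative model; 32
registers of XLEN=64 are assumed):

```python
XLEN_BYTES = 8                      # RV64
regfile = bytearray(32 * XLEN_BYTES)

# Write a vector of elements of the given byte width, starting at a
# register boundary; elements past the 8th byte land in the NEXT
# register, exactly the "overspill" the union-of-arrays view permits.
def write_elements(start_reg, elwidth_bytes, values):
    base = start_reg * XLEN_BYTES
    for n, v in enumerate(values):
        off = base + n * elwidth_bytes
        regfile[off:off + elwidth_bytes] = v.to_bytes(elwidth_bytes,
                                                      "little")

write_elements(4, 1, list(range(10)))   # 10 x 8-bit elements at x4
```

With VL=10 and 8-bit elements, elements 8 and 9 occupy the first two
bytes of x5 rather than x4.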
Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and the
registers following on sequentially from each, respectively) are
"type-cast" to 8-bit; for 16-bit entries likewise, and so on.
However, that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
  sign-extension / zero-extension (or whatever is specified in the standard
  RV specification), **change** that to sign-extending from the respective
  individual source operand's bitwidth (from the CSR table) out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT the changes
  to the source and destination bitwidths may also effectively turn FCVT
  into a different conversion operation.)
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sw
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }
Whilst the specific sign-extension and zero-extension pseudocode details
are left out, due to each operation being different, the above should
make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.
Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.
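To make the byte-level regfile overlay concrete, here is a non-normative
Python sketch of get\_polymorphed\_reg / set\_polymorphed\_reg, modelling the
integer register file as one contiguous little-endian byte array (the
`RegFile` class and its layout are illustrative assumptions, not part of
the specification):

```python
class RegFile:
    """Model the integer register file as contiguous bytes (RV64: 32 x 8)."""
    def __init__(self, nregs=32, xlen=64):
        self.xlen = xlen
        self.mem = bytearray(nregs * xlen // 8)

    def get_polymorphed_reg(self, reg, bitwidth, offset):
        nbytes = bitwidth // 8
        addr = reg * (self.xlen // 8) + offset * nbytes
        return int.from_bytes(self.mem[addr:addr + nbytes], "little")

    def set_polymorphed_reg(self, reg, bitwidth, offset, val):
        nbytes = bitwidth // 8
        addr = reg * (self.xlen // 8) + offset * nbytes
        masked = val & ((1 << bitwidth) - 1)
        self.mem[addr:addr + nbytes] = masked.to_bytes(nbytes, "little")

rf = RegFile()
# Four 16-bit elements starting at x5 occupy exactly x5's 64 bits...
for i, v in enumerate([0x1111, 0x2222, 0x3333, 0x4444, 0x5555]):
    rf.set_polymorphed_reg(5, 16, i, v)
# ...and the fifth element spills into the bottom 16 bits of x6.
assert rf.get_polymorphed_reg(5, 64, 0) == 0x4444333322221111
assert rf.get_polymorphed_reg(6, 16, 0) == 0x5555
```

Note how element 4 lands in x6 without any explicit register-boundary
logic: the contiguous byte-array view gives the "elements spill over into
successive registers" behaviour for free.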
## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, just as with scalar
operations, the floating-point flag is raised in FCSR. And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.
Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However, whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.
## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by an amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.
For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that:

* the operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* the result is then sign-extended (or truncated) as appropriate.
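The truncation rule can be illustrated with a short, non-normative Python
sketch (the function name and calling convention are invented for
illustration):

```python
def sv_shift_left(rs1_val, rs2_val, rs1_width, rs2_width, rd_width):
    """Sketch of an elwidth-overridden shift-left, following the rule that
    the operation width is max(rs1_width, rs2_width) and that RS2 is
    truncated to that width's shift range *after* the maximum is found."""
    opwidth = max(rs1_width, rs2_width)
    shamt = rs2_val & (opwidth - 1)          # truncate RS2 to 0..opwidth-1
    result = (rs1_val << shamt) & ((1 << opwidth) - 1)
    # truncate (or zero-extend) to the destination element width
    return result & ((1 << rd_width) - 1)

# RS1=16-bit, RS2=8-bit, RD=64-bit: opwidth is 16, so a shift amount
# of 17 is truncated to 1, exactly as in the example above
assert sv_shift_left(0x0001, 17, 16, 8, 64) == 0x0002
```

Without the `& (opwidth - 1)` truncation a software implementation would
shift the value out of range entirely, which is precisely the failure mode
the spike `sll` code guards against.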
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly as before, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.
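A minimal, non-normative Python sketch of the unsigned case (MULHU) under
the same maximum-source-bitwidth rule (the function name and interface are
invented for illustration):

```python
def sv_mulhu(rs1_val, rs2_val, rs1_width, rs2_width):
    """Sketch of an elwidth-overridden MULHU: multiply at the maximum of
    the two source element widths, then shift the full double-width
    product down by that same width to obtain the top half."""
    opwidth = max(rs1_width, rs2_width)
    product = rs1_val * rs2_val               # full double-width product
    return (product >> opwidth) & ((1 << opwidth) - 1)

# 8-bit x 8-bit: 0xFF * 0xFF = 0xFE01, so MULHU returns the top byte 0xFE
# (a MUL at the same widths would return the bottom byte, 0x01)
assert sv_mulhu(0xFF, 0xFF, 8, 8) == 0xFE
```

The pairing of MUL and MULHU at 8-bit elwidth thus reconstructs the full
16-bit product in two operations, mirroring the scalar MULH idiom.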
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth);               // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth           // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs + i / elsperblock]; // integer divide
    offs = i % elsperblock;               // modulo
    return &mem[srcbase + imm + offs];    // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.
The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
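The address-stepping rule can be sketched in non-normative Python (the
function, its interface, and the byte-offset scaling are illustrative
assumptions: the pseudo-code above indexes elements, which is converted
here to byte offsets):

```python
def ld_element_addr(ireg, rs, imm, i, opwidth, elwidth):
    """Sketch: which address does element i of an elwidth-overridden LOAD
    come from?  opwidth is the LOAD's width in bits (32 for LW), elwidth
    the source element width.  Returns (base address, byte offset)."""
    elsperblock = max(1, opwidth // elwidth)   # clamped to at least 1
    srcbase = ireg[rs + i // elsperblock]      # integer divide selects reg
    offs = (i % elsperblock) * (elwidth // 8)  # element offset in bytes
    return srcbase + imm, offs

# LW (32-bit) with 16-bit source elements: two elements per address
# register, so elements 0,1 use ireg[5] and elements 2,3 use ireg[6]
ireg = {5: 0x1000, 6: 0x2000}
assert ld_element_addr(ireg, 5, 0, 0, 32, 16) == (0x1000, 0)
assert ld_element_addr(ireg, 5, 0, 1, 32, 16) == (0x1000, 2)
assert ld_element_addr(ireg, 5, 0, 2, 32, 16) == (0x2000, 0)
```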
When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.
One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by clamping elsperblock to a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except that where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.
Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs + i / elsperblock];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if (unsigned):
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;
Notes:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
  takes on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.
### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table: because the element width is 16 and the operation
is LD (64), the 64 bits loaded from memory are subdivided into groups of
**four** elements.
And, with VL being 7 (deliberately to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

| addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
| ---- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| @x5  | elem 0 || elem 1 || elem 2 || elem 3 ||
| @x6  | elem 4 || elem 5 || elem 6 || not loaded ||
Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32:

| byte 3 | byte 2 | byte 1 | byte 0 |
| ------ | ------ | ------ | ------ |
| 0x0    | 0x0    | elem0 ||
| 0x0    | 0x0    | elem1 ||
| 0x0    | 0x0    | elem2 ||
| 0x0    | 0x0    | elem3 ||
| 0x0    | 0x0    | elem4 ||
| 0x0    | 0x0    | elem5 ||
| 0x0    | 0x0    | elem6 ||
Lastly, the elements are stored in contiguous blocks, as if x8 were also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

| reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
| ---- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| x8   | 0x0    | 0x0    | elem 1 || 0x0    | 0x0    | elem 0 ||
| x9   | 0x0    | 0x0    | elem 3 || 0x0    | 0x0    | elem 2 ||
| x10  | 0x0    | 0x0    | elem 5 || 0x0    | 0x0    | elem 4 ||
| x11  | **UNMODIFIED** |||| 0x0    | 0x0    | elem 6 ||

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the **registers**
x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
lower 32 bits of x11.

Note that whilst the memory addressing table is shown in left-to-right
(LSB-first) byte order, the registers are shown in right-to-left (MSB-first)
order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.
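The three stages of this worked example can be cross-checked with a short,
non-normative Python sketch (the function and its data layout are invented
for illustration; only the destination-register packing is modelled, not
the memory addressing):

```python
def ld_example(mem_elems, vl=7, src_elwidth=16, dst_elwidth=32, xlen=64):
    """Sketch of the worked example: LD with 16-bit source elements and a
    32-bit destination elwidth, packing results into 64-bit registers
    starting at x8.  mem_elems are the 16-bit values found in memory."""
    regs = {}  # destination register file, indexed by register number
    for i in range(vl):
        val = mem_elems[i] & ((1 << src_elwidth) - 1)  # stage 1: 16-bit load
        val &= (1 << dst_elwidth) - 1                  # stage 2: zero-extend
        elsperreg = xlen // dst_elwidth                # two 32-bit elems/reg
        regnum, slot = 8 + i // elsperreg, i % elsperreg
        regs.setdefault(regnum, 0)
        regs[regnum] |= val << (slot * dst_elwidth)    # stage 3: pack
    return regs

regs = ld_example([0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666, 0x7777])
assert regs[8] == 0x0000222200001111   # elem 1 shifted up 32 bits
assert regs[11] == 0x0000000000007777  # elem 6 in the lower half of x11
```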
## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and
RV64's default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the Register's CSR entry.
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will overwrite the **full** register entry.

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

In other words:

* when the vector bit is clear and elwidth set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or another value where
  groups of elwidth-sized elements do not fill an entire XLEN register),
  the "top" bits of the destination register do *NOT* get modified, zero'd
  or otherwise overwritten.
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64-bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth-based operations would
activate only the required subset of those byte-level write lines.
Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
  concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
  that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111,
  i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
  was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 elements back unmodified
  and stores them in rd.

This requires a read on rd, however this is required anyway in order
to support non-zeroing mode.
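The hidden-predicate technique can be sketched as follows (non-normative;
function name and data layout are invented for illustration):

```python
def simd_add_elwidth8(rd_val, rs1_val, rs2_val, vl=3, xlen=64):
    """Sketch of the SIMD hidden-predicate technique: perform a full batch
    of 8-bit adds across the 64-bit register, but only commit the VL
    elements selected by the hidden predicate; the rest of rd is read and
    written back unmodified (the read needed for non-zeroing mode)."""
    nelems = xlen // 8                 # 8 elements fit in an RV64 register
    hidden_pred = (1 << vl) - 1        # e.g. VL=3 -> 0b00000111
    result = rd_val
    for i in range(nelems):
        if not (hidden_pred >> i) & 1:
            continue                   # predicated out: rd byte preserved
        a = (rs1_val >> (8 * i)) & 0xFF
        b = (rs2_val >> (8 * i)) & 0xFF
        s = (a + b) & 0xFF             # 8-bit wrap-around add
        result = (result & ~(0xFF << (8 * i))) | (s << (8 * i))
    return result

# bits 24-63 of rd (here 0xDEADBEEF...) remain exactly as they were
out = simd_add_elwidth8(0xDEADBEEF_00000000, 0x01_02_03, 0x10_20_30, vl=3)
assert out == 0xDEADBEEF_00112233
```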
## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: FADD.S specifies an active
32 bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* FADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
  top 64 MSBs ignored.
Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
  floating-point numbers.
* FADD.D performs **two** 32-bit-wide adds, storing one of the adds
  in bits 0-31 and the second in bits 32-63.
* FSD stores the lowest 64 bits from the 128-bit-wide register to memory.

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the FADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting them is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that FADD.S, FADD.D and FADD.Q with elwidth set
to non-default values are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow FADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether FADD.S, FADD.D or FADD.Q is used,
using FADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in
place to know the difference, such that when VL=8 and an FADD.D instruction
is issued, it completes in 2 cycles (or more) rather than one, where
if an FADD.Q had been issued instead on such simpler microarchitectures
it would complete in one.
## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.
### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits, truncated to 32-bit
* RS2 @ xlen bits, truncated to 32-bit
* add @ 32 bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".
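The contrast between polymorphic add (zero-extension of sources) and addw
(sign-extension of sources) can be sketched in non-normative Python (the
function and its interface are invented for illustration):

```python
def poly_op(rs1_val, rs2_val, rs1_w, rs2_w, rd_w, signed):
    """Sketch of the polymorphic add/addw walk-throughs: extend sources to
    max(rs1_w, rs2_w), add at that width, then extend or truncate to rd_w.
    signed=False models add (zero-extend); signed=True models addw."""
    def ext(v, w):
        v &= (1 << w) - 1
        if signed and v >> (w - 1):
            v -= 1 << w                # reinterpret top bit as sign
        return v
    result = ext(rs1_val, rs1_w) + ext(rs2_val, rs2_w)
    return result & ((1 << rd_w) - 1)  # final truncate/extend to rd width

# 8-bit 0xFF + 8-bit 0x01, with a 16-bit destination:
assert poly_op(0xFF, 0x01, 8, 8, 16, signed=False) == 0x0100  # add: 255 + 1
assert poly_op(0xFF, 0x01, 8, 8, 16, signed=True) == 0x0000   # addw: -1 + 1
```

The same inputs produce different results purely because of the source
extension rule, which is exactly the add/addw distinction noted above.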
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, allowing them to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and is covered
in the next section).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector) { ird += 1; }
                if (int_vec[rs1].isvector) { irs1 += 1; }
                if (int_vec[rs2].isvector) { irs2 += 1; }
                i++
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector) { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }
The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.
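A non-normative Python sketch of single-predication with and without
zeroing (names and data layout invented; variable element widths omitted
for clarity, and all three registers assumed vectorised):

```python
def predicated_add(vl, pred, zeroing, rs1, rs2, rd):
    """Sketch of single-predication with optional zeroing: rs1, rs2, rd
    are lists of element values.  With zeroing set, masked-out destination
    elements are written as zero; without it, they are left untouched."""
    for i in range(vl):
        if (pred >> i) & 1:
            rd[i] = rs1[i] + rs2[i]
        elif zeroing:
            rd[i] = 0                  # predicate bit clear: zero the element
        # else: element skipped entirely, rd[i] preserved
    return rd

# predicate 0b101: element 1 is masked out
assert predicated_add(3, 0b101, True, [1, 2, 3], [10, 20, 30], [9, 9, 9]) \
    == [11, 0, 33]
assert predicated_add(3, 0b101, False, [1, 2, 3], [10, 20, 30], [9, 9, 9]) \
    == [11, 9, 33]
```

The two assertions show the only observable difference between the modes:
what ends up in the masked-out destination element.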
## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, if a predicate bit is clear it indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
NAND of the source and destination predicates: the result is that
where either the source predicate OR the destination predicate is set to 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.
Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if (zerosrc && !(ps & 1<<i))
                sourcedata = 0;
            else
                sourcedata = ireg[rs+i];
            if (zerodst && !(pd & 1<<j))
                ireg[rd+j] <= 0;
            else
                ireg[rd+j] <= sourcedata;
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.
# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.
2198 # VLIW Format <a name="vliw-format"></a>
2200 One issue with SV is the setup and teardown time of the CSRs. The cost
2201 of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
2202 therefore makes sense.
2204 A suitable prefix, which fits the Expanded Instruction-Length encoding
2205 for "(80 + 16 times instruction_length)", as defined in Section 1.5
2206 of the RISC-V ISA, is as follows:
2208 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2209 | - | ----- | ----- | ----- | --- | ------- |
2210 | vlset | 16xil | pplen | rplen | mode | 1111111 |
An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.
The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| ------ -----------------   | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22  | 21:17 - 16 |
| -     | ----- | ------ | ------ - - |
| 0     | SubVL | VLdest | VLEN vlt   |
| 1     | SubVL | VLdest | VLEN       |
Note: this format is very similar to that used in [[sv_prefix_proposal]].

If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1, and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).
This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2, and so on.
This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.
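The VL Block semantics described above can be sketched as follows (an
illustrative, non-normative Python model; the function and the `regs`
and `maxvl` helper names are hypothetical, not part of the spec):

```python
def apply_vl_block(top_bit, vlen_field, vlt, vldest, regs, maxvl):
    """Illustrative model of the VL/MAXVL/SubVL Block semantics."""
    if top_bit == 0:
        # vlt selects: 5-bit immediate (offset by one) or scalar register
        vl = regs[vlen_field] if vlt == 1 else vlen_field + 1
        vl = min(vl, maxvl)        # truncate to MIN(VL, MAXVL)
    else:
        # 6-bit immediate, offset by one, sets both MAXVL and VL
        vl = vlen_field + 1
        maxvl = vl
    if vldest != 0:
        regs[vldest] = vl          # no regfile store if VLdest is x0
    return vl, maxvl
```

Note how the second row of the table behaves like a compact
"SETSUBVL, SETVL" that also establishes MAXVL in one go.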
* Bit 7 specifies if the predication block format is the full 16 bit format
  (1) or the compact, less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed, the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
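As a cross-check of the bit assignments listed above, decoding the 16-bit
prefix might look as follows (a non-normative sketch; the function name
and the returned field names are hypothetical):

```python
def decode_vliw_prefix(prefix16):
    """Sketch decoder for the 16-bit VLIW prefix described above."""
    assert prefix16 & 0x7F == 0x7F, "not a VLIW prefix"
    mode  = (prefix16 >> 7)  & 0x1   # 1 = 16 bit blocks, 0 = 8 bit blocks
    rplen = (prefix16 >> 8)  & 0x3   # RegCam entry count
    pplen = (prefix16 >> 10) & 0x3   # PredCam entry count
    il    = (prefix16 >> 12) & 0x7   # 16xil: instruction length field
    vlset = (prefix16 >> 15) & 0x1   # 1 = VL Block present
    if mode == 0:                    # 8 bit format: counts are doubled
        rplen *= 2
        pplen *= 2
    total_bits = 80 + 16 * il        # Section 1.5 expanded-length rule
    return dict(vlset=vlset, il=il, pplen=pplen,
                rplen=rplen, mode=mode, total_bits=total_bits)
```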
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four, 32-bit
opcodes: the CSR itself, a LI, and the setting up of the value into the RS
register of the CSR, which, again, requires a LI / LUI to get the 32 bit
data into the CSR. To get 64-bit data into the register in order to put
it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!
Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
and VL need to be set.
By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.
In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.
Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are not?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).
## Limitations on instructions
To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except there it is VStart in the STATE
CSR that advances).
This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, xepcvliw CSRs
must be context switched and saved / restored in traps.
The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets
of nested sub-levels of the RISC-V Program Counter (actually, three,
including SUBVL and ssvoffs).
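The nesting described above can be pictured as a pair of loops (a
conceptual, non-normative sketch; all names here are hypothetical, and
SUBVL/ssvoffs nesting is omitted for brevity):

```python
class State:
    """Conceptual model of the relevant SV execution state."""
    def __init__(self, vl):
        self.vl = vl        # vector length (VL)
        self.subpc = 0      # sub-PC, saved in xepcvliw on traps
        self.srcoffs = 0    # element index, held in the STATE CSR

def execute_vliw_group(instructions, state, execute_element):
    # The main PC stays at the start of the VLIW group while the
    # sub-PC walks the instructions inside it...
    while state.subpc < len(instructions):
        insn = instructions[state.subpc]
        # ...and srcoffs walks the elements of each instruction.
        while state.srcoffs < state.vl:
            execute_element(insn, state.srcoffs)
            state.srcoffs += 1
        state.srcoffs = 0
        state.subpc += 1
    state.subpc = 0  # group complete: the main PC may now advance
```

Because both loop counters live in CSRs, a trap taken mid-group can
save and later resume exactly where it left off.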
In addition, as xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now relative to the start of the block and limited to its (very
short) length.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.
A normal jump, normal branch or normal function call may only be taken
by letting the VLIW group end, returning to "normal" standard RV mode,
and then using standard RVC, 32 bit or P48/64-\*-type opcodes.
* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.
## Common options

It is permitted to only implement SVprefix and not the VLIW instruction
format option. UNIX Platforms **MUST** raise illegal instruction on
seeing a VLIW opcode, so that traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX
Platforms *MUST* raise illegal instruction on implementations that do
not support VL or SUBVL.
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing the number of registers below the mandatory limits set in the
RV standard will result in non-compliance with the SV Specification.
## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option", it is worth noting.
## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.
## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries).
## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.
RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.
## RV128

RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.
# Under consideration <a name="issues"></a>
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the
recommendation is:
* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
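The recommended write behaviour can be sketched as follows, assuming
16-bit elements in a 64-bit register (a non-normative illustration;
the function names are hypothetical):

```python
def write_int_group(elements, elwidth=16, regwidth=64):
    """Pack elements into an integer register; the used element
    closest to the MSB is sign-extended into the unused space."""
    mask = (1 << elwidth) - 1
    reg = 0
    for i, e in enumerate(elements):
        reg |= (e & mask) << (i * elwidth)
    used = len(elements) * elwidth
    if used < regwidth and (elements[-1] & (1 << (elwidth - 1))):
        # top used element is negative: fill unused bits with ones
        reg |= ((1 << (regwidth - used)) - 1) << used
    return reg

def write_fp_group(elements, elwidth=16, regwidth=64):
    """Pack elements into an FP register; unused elements become all
    ones, as with NaN-boxing of smaller FP values in larger registers."""
    mask = (1 << elwidth) - 1
    reg = (1 << regwidth) - 1          # start from all ones
    for i, e in enumerate(elements):
        reg &= ~(mask << (i * elwidth))
        reg |= (e & mask) << (i * elwidth)
    return reg
```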
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

> And what about instructions like JALR?

Answer: they are not vectorised, so not a problem.
* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default
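The two rules above can be expressed as a small lookup (an illustrative
sketch only; the function name is hypothetical, and the branch for a
non-default elwidth is an assumption based on elwidth overrides
described elsewhere in this spec):

```python
def operand_bitwidth(opcode_group, elwidth, xlen=64):
    """Sketch of the rd/rs1/rs2 bitwidth rules above (non-normative)."""
    if elwidth != "default":
        return elwidth       # assumption: overridden elwidth applies
    if opcode_group == "RV32I":
        return 32            # RV32I group: operands are always 32-bit
    return xlen              # RV32 group: operands follow XLEN
```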
TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

TODO: update to remove RegCam and PredCam CSRs, just use SVprefix and