# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018 Luke Kenneth Casson Leighton
* Last edited: 14 Nov 2018
* Ancillary resource: [[opcodes]]
* The RISC-V Founders, without whom this all would not be possible.
# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects, including code-size reduction and expansion of the
HINT space. The reason for creating it is to provide a manageable way to
turn a pre-existing design into a parallel one, in a step-by-step
incremental fashion, allowing the implementor to focus on adding hardware
where it is needed and necessary. The primary target is mobile-class 3D
GPUs and VPUs, with secondary goals of reducing executable size and
context-switch latency.
Critically: **no new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated in which *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. Only when the last "element"
has been reached is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the instruction execution phase.
The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is instead carried out in
software. Whilst much slower, it is "compliant" with the SV specification,
and may be suited to RV32E and to situations
where the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources into the silicon, whilst still
conforming strictly with the API. A good area to punt to software would
be the polymorphic element-width capability, for example.
Hardware parallelism, if any, is therefore added at the implementor's
discretion, to turn what would otherwise be a sequential loop into a
parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension
SV does **not** tell implementors how, or even if, they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, whilst leaving the choice of if, how, how much, when and
whether to parallelise operations **entirely to the implementor**.
The principle of SV is as follows:

* CSRs indicate which registers are "tagged" as "vectorised"
  (potentially parallel, depending on the microarchitecture).
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* A **scalar** operation, just after the decode phase and before the
  execution phase, checks the CSR register tables to see if any of
  its registers have been marked as "vectorised".
* If so, a hardware "macro-unrolling loop" is activated, of length
  VL, that effectively issues **multiple** identical instructions
  using contiguous sequentially-incrementing registers.
  **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.
There are **no** new opcodes.
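The loop described above can be sketched in software. The following is an illustrative, non-normative Python model (all names, including `regfile`, `vectorised` and `VL`, are invented for this sketch; predication, redirection and element widths are omitted) of how one scalar `add` opcode becomes a vector operation when a register is tagged:

```python
# Illustrative sketch of SV's implicit "macro-unrolling loop".
# All names here are invented for the model, not taken from the spec.
regfile = [0] * 128       # extended register file
vectorised = set()        # register numbers tagged via the CSR tables
VL = 4                    # Vector Length CSR

def sv_add(rd, rs1, rs2):
    """A *scalar* add opcode: if any operand register is tagged,
    the same instruction is issued VL times with contiguously
    incrementing register numbers; otherwise it is a plain add."""
    if {rd, rs1, rs2} & vectorised:
        for i in range(VL):   # hardware loop: PC does not advance yet
            regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]
    else:
        regfile[rd] = regfile[rs1] + regfile[rs2]

vectorised.add(8)                   # tag x8 as a vector
regfile[16:20] = [1, 2, 3, 4]       # elements at x16..x19
regfile[24:28] = [10, 20, 30, 40]   # elements at x24..x27
sv_add(8, 16, 24)                   # one opcode, four element adds
print(regfile[8:12])                # [11, 22, 33, 44]
```

Note that the *same* `sv_add` behaves as an ordinary scalar add when none of its registers are tagged, which is the essence of the "no new opcodes" claim.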
# CSRs <a name="csrs"></a>

For U-Mode there are two CSR key-value stores needed to create lookup
tables which are used at the register decode phase:

* A register CSR key-value table (typically 8 32-bit CSRs of 2 16-bit
  entries each)
* A predication CSR key-value table (again, 8 32-bit CSRs of 2 16-bit
  entries each)
* Small S-Mode and M-Mode register and predication CSR key-value tables
  (2 32-bit CSRs of 2 16-bit entries each)
* An optional "reshaping" CSR key-value table which remaps from a 1D
  linear shape to 2D or 3D, including full transposition

There are also four additional CSRs for User-Mode:

* CFG, which subsets the CSR tables
* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* STATE (useful for saving and restoring during context switch,
  and for providing fast transitions)
There are also three additional CSRs for Supervisor-Mode, and likewise
for M-Mode.

Both Supervisor and M-Mode have their own (small) CSR register and
predication tables of only 4 entries each.
The access pattern for these groups of CSRs in each mode follows the
same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is identical
  to changing the S-Mode CSRs. Accessing and changing the U-Mode
  CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
  is identical: both refer to the U-Mode CSRs.

In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
M-Mode MVL, the M-Mode STATE and so on that influence the processor's
behaviour. Likewise for S-Mode, and likewise for U-Mode.
This has the interesting benefit of allowing M-Mode (or S-Mode)
to be set up, for context-switching to take place, and, on return
back to the higher privileged mode, the CSRs of that mode will be
exactly as they were. Thus, it becomes possible, for example, to
set up CSRs best suited to aiding and assisting low-latency fast
context-switching *once and only once*, without the need for
re-initialising those CSRs each time.
## CFG

This CSR may be used to switch between subsets of the CSR Register and
Predication Tables: it is kept to 5 bits so that a single CSRRWI instruction
can be used. A setting of all ones is reserved to indicate that SimpleV
is disabled.
Bank is 3 bits in size, and indicates the starting index of the CSR
Register and Predication Table entries that are "enabled". Given that
each 32-bit CSR table row contains 2 16-bit CAM entries, there
are only 8 CSRs to cover in each table, so 3 bits is sufficient.

Size is 2 bits. With the exception of when bank == 7 and size == 3,
the number of elements enabled is taken by left-shifting 2 by size:

| size | elements |
| ---- | -------- |
| 0    | 2        |
| 1    | 4        |
| 2    | 8        |
| 3    | 16       |

Given that there are 2 16-bit CAM entries per CSR table row, this
may also be viewed as the number of CSR rows to enable, by raising
2 to the power of size (1 << size).
Examples:

* When bank = 0 and size = 3, SVREGCFG0 through SVREGCFG7 are
  enabled, and SVPREDCFG0 through SVPREDCFG7 are enabled.
* When bank = 1 and size = 3, SVREGCFG1 through SVREGCFG7 are
  enabled, and SVPREDCFG1 through SVPREDCFG7 are enabled.
* When bank = 3 and size = 0, SVREGCFG3 and SVPREDCFG3 are enabled.
* When bank = 3 and size = 1, SVREGCFG3-4 and SVPREDCFG3-4 are enabled.
* When bank = 7 and size = 1, only SVREGCFG7 and SVPREDCFG7 are enabled
  (because there are only 8 32-bit CSRs, there does not exist an
  SVREGCFG8 or SVPREDCFG8 to enable).
* When bank = 7 and size = 3, SimpleV is entirely disabled.

In this way it is possible to enable and disable SimpleV with a
single instruction, and, furthermore, on context-switching the quantity
of CSRs to be saved and restored is greatly reduced.
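The mapping from bank and size to the set of enabled CSRs can be sketched as follows. This is a non-normative model derived from the worked examples above (the function name is invented):

```python
def enabled_cfg_rows(bank, size):
    """Return the indices of the SVREGCFGn/SVPREDCFGn CSR rows enabled
    by CFG's 'bank' (3 bits) and 'size' (2 bits) fields.
    bank=7, size=3 is the special 'SimpleV disabled' encoding."""
    if bank == 7 and size == 3:
        return []                 # SimpleV entirely disabled
    nrows = 1 << size             # 2**size CSR rows requested
    return list(range(bank, min(bank + nrows, 8)))  # clamp: no CSR 8+

print(enabled_cfg_rows(0, 3))  # [0, 1, 2, 3, 4, 5, 6, 7]
print(enabled_cfg_rows(3, 1))  # [3, 4]
print(enabled_cfg_rows(7, 1))  # [7]  (no SVREGCFG8 exists to enable)
print(enabled_cfg_rows(7, 3))  # []   (disabled)
```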
## MAXVECTORLENGTH (MVL) <a name="mvl" />

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable-length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register rather than fanning out
over several registers. This keeps the implementation a little simpler.

The other important factor to note is that the actual MVL is **offset
by one**, so that it can fit into only 6 bits (for RV64) and still cover
a range up to XLEN bits. So, setting the MVL CSR to 0 actually
means that MVL==1, setting the MVL CSR to 3 actually means
that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.
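A minimal sketch of the offset-by-one encoding (names invented for illustration; RV64 assumed, giving the 6-bit field and the XLEN=64 upper limit):

```python
def mvl_from_csr_field(v):
    """CSR field value -> actual MVL (stored offset by one)."""
    return v + 1

def csr_field_from_mvl(mvl):
    """Actual MVL -> CSR field value. RV64 assumed: 1 <= MVL <= 64."""
    assert 1 <= mvl <= 64
    return mvl - 1   # the full XLEN range still fits in 6 bits

print(mvl_from_csr_field(0))   # 1: writing 0 means MVL==1
print(mvl_from_csr_field(3))   # 4: writing 3 means MVL==4
print(csr_field_from_mvl(64))  # 63: maximum MVL encodes as all-ones
```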
## Vector Length (VL) <a name="vl" />

VSETVL is slightly different from RVV. Like RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):

    VL = rd = MIN(vlen, MVL)

where 1 <= MVL <= XLEN.

However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN bits.
The second change is that when VSETVL is requested to store
into x0, it is silently *ignored* (VSETVL x0, x5).

The third and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).

This has implications for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.

The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL did *not* get set to the register file
length when VSETVL is called, a software loop would be needed.
To avoid that need, VL *must* be set to exactly what is requested
(limits notwithstanding).
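The required behaviour can be summarised in a short, non-normative sketch (function name invented; XLEN=64 assumed):

```python
XLEN = 64

def vsetvl(requested, MVL):
    """SV's VSETVL: unlike RVV, the result *must* be exactly
    min(requested, MVL) -- no arbitrary implementor choice."""
    assert 1 <= MVL <= XLEN
    VL = min(requested, MVL)   # limits from MVL notwithstanding
    return VL                  # written both to VL and to rd

# context-switch style usage: request the whole register span
print(vsetvl(32, 32))   # 32: exactly the register file span, no sw loop
print(vsetvl(100, 32))  # 32: clamped by MVL
print(vsetvl(3, 32))    # 3: a short final loop iteration
```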
Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore IV); however, this must be
*entirely* transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (SVSTATE). This CSR behaves
exactly as standard CSRs do, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, all parallel operations cease: the
hardware loop is reduced to a single element: scalar operations.
## STATE <a name="state" />

This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of)
MVL, VL, CFG, the destination element offset of the current parallel
instruction being executed, and, for twin-predication, the source
element offset as well. Interestingly it may hypothetically
also be used to make the immediately-following instruction skip a
certain number of elements; however the recommended ways to do
this are predication, or the offset mode of the REMAP CSRs.

Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced, and the
hardware loop that was being executed at the time of the trap, from
User-Mode (or Supervisor-Mode), may be returned to and continued from
where it left off. The reason why this works is that the
User-Mode STATE will not change (not be used) in M-Mode or S-Mode
(and is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| size     | bank     | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
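A non-normative sketch of packing and unpacking this layout (field positions taken from the table above; note that vl and maxvl are held *minus one*, as described in the pseudocode section below; the truncation enforcement is omitted, and the function names are invented):

```python
def pack_state(size, bank, destoffs, srcoffs, vl, maxvl):
    """Assemble a STATE CSR value; vl/maxvl are stored offset by one."""
    return ((maxvl - 1)        |
            (vl - 1)    << 6   |
            srcoffs     << 12  |
            destoffs    << 18  |
            bank        << 24  |
            size        << 27)

def unpack_state(v):
    """Recover the fields, re-applying the plus-one offset."""
    maxvl    = (v         & 0x3f) + 1
    vl       = ((v >> 6)  & 0x3f) + 1
    srcoffs  = (v >> 12)  & 0x3f
    destoffs = (v >> 18)  & 0x3f
    bank     = (v >> 24)  & 0x7
    size     = (v >> 27)  & 0x3
    return size, bank, destoffs, srcoffs, vl, maxvl

s = pack_state(size=1, bank=0, destoffs=2, srcoffs=5, vl=8, maxvl=16)
print(unpack_state(s) == (1, 0, 2, 5, 8, 16))  # True: round-trips
```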
## MVL, VL and CSR Pseudocode

The pseudo-code for get and set of VL and MVL is as follows:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes returning the new value NOT the old CSR

Note that where setting MVL behaves as a normal CSR, unlike standard CSR
behaviour, setting VL will return the **new** value of VL, **not** the old
value.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of returning that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise_exception()
        else:
            set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise_exception()
        else:
            set_vl_csr(value, rd)
In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3 # sets MVL == **4** (not 3)
    j zerotest # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
    ld a3, a1 # load 4 registers a3-6 from x
    slli t1, t0, 3 # t1 = vl * 8 (in bytes)
    ld a7, a2 # load 4 registers a7-10 from y
    add a1, a1, t1 # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0 # n -= vl (t0)
    st a7, a2 # store 4 registers a7-10 to y
    add a2, a2, t1 # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop # repeat if n != 0
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2, and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        MVL = set_mvl_csr(value[5:0]+1)
        VL = set_vl_csr(value[11:6]+1)
        srcoffs = value[17:12]
        destoffs = value[23:18]
        CFG = value[28:24]

        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18 | (CFG)<<24

In both cases, whilst a CSR read of VL and MVL returns the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.
## Register CSR key-value (CAM) table <a name="regcsrtable" />

The purpose of the Register CSR table is four-fold:

* To mark integer and floating-point registers as requiring "redirection"
  if ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** 5-bit (3-bit for Compressed) operands may
  access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
  register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
  normally give the register.

| RegCAM | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | ------- | -------- | --- | ------ | ------ |
| 0      | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | isvec15 | regidx15 | i/f | vew15  | regkey |

i/f is set to "1" to indicate that the redirection/tag entry is to be applied
to integer registers; 0 indicates that it is relevant to floating-point
registers. vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |

As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CAM entries (8 32-bit CSRs of 2 each)
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
        tb[idx].elwidth  = CSRvec[i].elwidth
        tb[idx].regidx   = CSRvec[i].regidx   // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed   = CSRvec[i].packed   // SIMD or not

The actual size of the CSR Register table depends on the platform
and on whether other Extensions are present (RV64G, RV32E, etc.).
For details see the "Subsets" section.
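A sketch of what the decode stage does with the expanded table: the 5-bit register number in the opcode is redirected to a 7-bit real register and gains vector/element-width attributes. The `Entry` class and `decode_reg` wrapper are invented for this illustration, and only the integer table is modelled:

```python
class Entry:
    """One expanded lookup-table slot (illustrative names only)."""
    def __init__(self, regidx, isvector, elwidth=0):
        self.regidx = regidx      # 7-bit real register (indirection)
        self.isvector = isvector  # vector, or remains scalar
        self.elwidth = elwidth    # 0 = default (XLEN/FLEN)
        self.active = True

int_vec = [None] * 32             # one slot per 5-bit opcode register

def decode_reg(opcode_reg):
    """Redirect a 5-bit opcode register through the lookup table."""
    e = int_vec[opcode_reg]
    if e is None or not e.active:
        return opcode_reg, False, 0   # untagged: plain scalar, as-is
    return e.regidx, e.isvector, e.elwidth

int_vec[3] = Entry(regidx=96, isvector=True)  # tag x3 -> vector at r96
print(decode_reg(3))   # (96, True, 0): redirected into upper regfile
print(decode_reg(4))   # (4, False, 0): untouched scalar
```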
There are two CSRs (per privilege level) for adding entries to and
removing entries from the table, which, conceptually, may be viewed
either as a register window (similar to SPARC) or as the "top of a stack".

* SVREGTOP will push or pop entries onto the top of the "stack"
  (the highest non-zero indexed entry in the table)
* SVREGBOT will push or pop entries from the bottom (always the
  element indexed as zero)

In addition, note that CSRRWI behaviour is completely different
from CSRRW when writing to these two CSR registers. The CSRRW
behaviour: the src register is subdivided into 16-bit chunks,
and each non-zero chunk is pushed/popped separately. The
CSRRWI behaviour: the immediate indicates the number of
entries in the table to be popped.
For CSRRWI:

* The 5-bit immediate indicates how many entries to pop from the
  table and return in the destination register.
* "CSRRWI SVREGTOP, 3" indicates that the top 3
  entries are to be zero'd and returned as the CSR return
  result. The top entry is returned in bits 0-15, the
  next entry down in bits 16-31, and when XLEN==64, an
  extra 2 entries are also returned.
* "CSRRWI SVREGBOT, 3" indicates that the bottom 3 entries are
  to be returned, and the entries with indices above 3 are
  to be shuffled down. The first entry to be popped off the
  bottom is returned in bits 0-15, the second entry in bits
  16-31, and so on.
* If XLEN==32, only a maximum of 2 entries may be returned
  (and shuffled). If XLEN==64, only a maximum of 4 entries
  may be returned (and shuffled).
* If however the destination register is x0 (zero), then
  the exact number of entries requested will be removed
  (without being returned).
For CSRRW:

* When the src register is all zeros, this is a request to
  pop one and only one 16-bit element from the table.
* "CSRRW SVREGTOP, 0" will return (and clear) the highest
  non-zero 16-bit entry in the table.
* "CSRRW SVREGBOT, 0" will return (and clear) the zero'th
  16-bit entry in the table, and will shuffle down all
  other entries (if any) by one index.
All other CSRRW behaviours are a "loop", taking 16 bits
at a time from the src register. Obviously, for XLEN=32
that can only be up to 2 16-bit entries; for XLEN=64 it
can be up to 4:

* When the src 16-bit chunk is non-zero and there already exists
  an entry with the exact same "regkey" (bits 0-4), the
  entry is **updated**. No other modifications are made.
* When the 16-bit chunk is non-zero and there does not exist
  an entry, the new value will be placed at the end
  (in the highest non-zero slot), or at the beginning
  (shuffling up all other entries to make room).
* If there is not enough room, the entry at the opposite
  end will become part of the CSR return result.
* The process is repeated for the next 16-bit chunk (starting
  with bits 0-15 and moving next to 16-31 and so on), until
  the limit of XLEN is reached or a chunk is all zeros, at
  which point the looping stops.
* Any 16-bit entries that are pushed out of the stack
  (from either end) are concatenated in order (the first entry
  pushed out is bits 0-15 of the return result).
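The push/update part of that loop can be modelled in software. This is an illustrative sketch only (the list-based table and all names are invented; eviction of the opposite-end entry when full, the pop-on-zero-src special case, and the CSR return result are omitted):

```python
table = []                        # active CAM entries, bottom..top

def csrrw_svregtop(src, xlen=64):
    """Model of CSRRW to SVREGTOP: split src into 16-bit chunks;
    a chunk whose 5-bit regkey already exists updates that entry
    in place, otherwise the chunk is pushed onto the 'top'."""
    for shift in range(0, xlen, 16):
        chunk = (src >> shift) & 0xffff
        if chunk == 0:            # an all-zero chunk stops the loop
            break
        key = chunk & 0x1f        # regkey lives in bits 0-4
        for i, e in enumerate(table):
            if e & 0x1f == key:
                table[i] = chunk  # same regkey: update, no growth
                break
        else:
            table.append(chunk)   # new regkey: push onto the top

csrrw_svregtop(0x0001)            # push an entry with regkey 1
csrrw_svregtop(0x8001)            # same regkey 1: updated in place
print(len(table), hex(table[0]))  # 1 0x8001
```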
What this behaviour basically does is allow the CAM table to
effectively act like the top entries of a stack. Entries that
get returned from CSRRW SVREGTOP can be *actually* stored on the stack,
such that after a function call exits, CSRRWI SVREGTOP may be used
to delete the callee's CAM entries, and the caller's entries may then
be pushed *back*, using CSRRW SVREGBOT.

Context-switching may be carried out in a loop, where CSRRWI may
be called to "pop" values that are tested for being non-zero, and
transferred onto the stack with C.SWSP using only around 4-5 instructions.
CSRRW may then be used in combination with C.LWSP to get the CAM entries
off the stack and back into the CAM table, again in a loop of
only around 4-5 instructions.

Contrast this with needing around 6-7 instructions (8-9 without SV on
RV64, and 16-17 on RV32) to do a context-switch of fixed-address CSRs:
a sequence of fixed-address C.LWSP with fixed offsets plus fixed-address
CSRRWs, and that is without testing whether any of the entries are
zero or not.
## Predication CSR <a name="predication_csr_table"></a>

TODO: update CSR tables, now 7-bit for regidx

The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.

* regidx is the actual register which, in combination with the
  i/f flag, if that integer or floating-point register is referred to,
  results in the lookup table being referenced to find the predication
  mask to use on the operation in which that (regidx) register has
  been used.
* predidx (in combination with the bank bit in the future) is the
  *actual* register to be used for the predication mask. Note:
  in effect predidx is actually a 6-bit register address, as the bank
  bit is the MSB (and is nominally set to zero for now).
* inv indicates that the predication mask bits are to be inverted
  prior to use, *without* actually modifying the contents of the
  register from which the mask was read.
* zeroing is either 1 or 0, and if set to 1, the operation must
  place zeros in any element position where the predication mask is
  set to zero. If zeroing is set to 0, unpredicated elements *must*
  be left alone. Some microarchitectures may choose to interpret
  this as skipping the operation entirely. Others which wish to
  stick more closely to a SIMD architecture may choose instead to
  interpret unpredicated elements as an internal "copy element"
  operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming).
* "packed" indicates if the register is to be interpreted as SIMD,
  i.e. as containing multiple contiguous elements of size equal to
  "bitwidth". (Note: in earlier drafts this was in the Register CSR
  table. However after extending regidx to 7 bits there was not
  enough space. To use "unpredicated" packed SIMD, set the predicate
  to x0 and set "invert": this has the effect of setting a predicate
  of all 1s.)
| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0       |
| ----- | -------- | ------ | ----- | --- | ------- | ------- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | packed0 |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | packed1 |
| ...   | predkey  | .....  | ....  | i/f | ....... | ....... |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | packed15|
The Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    struct pred {
        bool zero;    // zeroing
        bool inv;     // invert the mask
        bool enabled; // this entry is in use
        int predidx;  // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero    = CSRpred[i].zero
        tb[idx].inv     = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store, which is used
as follows:

    if (type(iop) == INT)
        preg = int_pred_reg[rd]
    else
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
where:

* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must also all go through
  register-level redirection (from the Register CSR table) if they are
  marked as vectorised.
If written as a function, obtaining the predication mask (and whether
zeroing takes place) may be done as follows:

    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx   // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate  // invert ALL bits
        return predicate, tb[reg].zero
Note here, critically, that **only** if the register is marked
in its CSR **register** table entry as being "active" does the testing
proceed further to check if the CSR **predicate** table entry is
also active.
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active CSR *register* entry
is removed.
## REMAP CSR <a name="remap" />

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read.)

There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value), and are consequently 7 bits wide.
A value of zero (referring to x0) is used to indicate "disabled", since
reshaping x0 is clearly pointless.
shape0-2 refer to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read.)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32 bits each,
all with the same format. When a SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23      | 22..16 | 15      | 14..8  | 7       | 6..0   |
| ------- | ------- | ------ | ------- | ------ | ------- | ------ |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However, whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5; values 6 and 7 are reserved, and not legal). The table
below shows how the permutation dimensionality order works:
| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0] # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx)
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            idxs[order[i]] = 0
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order, without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.
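As an illustrative (non-normative) sketch, the index algorithm above can be wrapped in a small generator to show the "in-place transpose" effect: with a 3x4 shape and permute order (1,0,2), a row-major matrix is traversed column-major, i.e. read transposed, without any data movement. All names here (`remap_indices`, `regs`) are invented for illustration.

```python
# Software model of the REMAP index calculation (a sketch, not the spec):
# yields the remapped element index for each iteration of the hardware loop.
def remap_indices(xdim, ydim, zdim, order, offs=0):
    idxs = [0, 0, 0]
    lims = [xdim, ydim, zdim]
    for _ in range(xdim * ydim * zdim):
        yield offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0

# A 3x4 matrix stored row-major in 12 consecutive "registers":
regs = [r * 100 for r in range(12)]

# order (1,0,2): the y index increments fastest, so the traversal is
# column-major -- the matrix is read transposed, data untouched.
transposed = [regs[i] for i in remap_indices(3, 4, 1, order=(1, 0, 2))]
print(transposed)  # [0, 300, 600, 900, 100, 400, 700, 1000, 200, 500, 800, 1100]
```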
Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown.
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.
Clearly here some considerable care needs to be taken, as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.
# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.
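The "macro expansion" idea can be made concrete with a minimal sketch (register numbers and values invented for illustration): one tagged `add` with VL=4 behaves identically to four entirely standard scalar adds on sequentially-incrementing register numbers.

```python
# Sketch of SV as macro-expansion: a single tagged "add x16, x4, x8"
# with VL=4 is equivalent to this "post-expansion" scalar sequence.
regs = [0] * 32
regs[4:8] = [40, 50, 60, 70]   # rs1 vector: x4..x7
regs[8:12] = [8, 9, 10, 11]    # rs2 vector: x8..x11

VL = 4
rd, rs1, rs2 = 16, 4, 8

# four standard scalar adds, contiguous registers, in program order:
for i in range(VL):
    regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]

print(regs[16:20])  # [48, 59, 70, 81]
```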
# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.
With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI and AUIPC are not suitable for parallelising, so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers, so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.
Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses fp csrs.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
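The pseudo-code above can be turned into a minimal runnable model. This is a sketch under stated assumptions, not the spec: register/tag values are invented, and element widths, zeroing and reshaping are omitted, exactly as in the pseudo-code.

```python
# Runnable sketch of op_add: scalar/vector tagging plus per-element
# predication, with the scalar-destination early-exit.
ireg = [0] * 64
int_vec = {}  # regnum -> {"isvector": bool, "regidx": int}  (invented model)

def isvec(r):
    return int_vec.get(r, {}).get("isvector", False)

def regidx(r):
    return int_vec[r]["regidx"] if r in int_vec else r

def op_add(rd, rs1, rs2, VL, predval):
    d, s1, s2 = regidx(rd), regidx(rs1), regidx(rs2)
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[d + id] = ireg[s1 + irs1] + ireg[s2 + irs2]
            if not isvec(rd):
                break  # scalar destination: loop ends after one write
        if isvec(rd):  id += 1
        if isvec(rs1): irs1 += 1
        if isvec(rs2): irs2 += 1

# vector rd (x10..), vector rs1 (x2..), scalar rs2 (x1), predicate 0b1011
int_vec[10] = {"isvector": True, "regidx": 10}
int_vec[2]  = {"isvector": True, "regidx": 2}
ireg[1] = 100
ireg[2:6] = [1, 2, 3, 4]
op_add(10, 2, 1, VL=4, predval=0b1011)
print(ireg[10:14])  # [101, 102, 0, 104]  (element 2 masked out)
```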
## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing new,
including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE byte-order
from big-endian to little-endian on a per-instruction basis. SV is just a
little more... comprehensive in its effect on instructions.
## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) is set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
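The zeroing distinction above can be shown with a small sketch (function name and values invented; this models the prose, not any normative encoding): with zeroing, masked-out elements force the destination predicate bit to zero; without zeroing, bits are only ever set, never cleared.

```python
# Sketch of the zeroing semantics for a vectorised compare writing a
# destination predicate. dest_init models the predicate's prior contents.
def vec_cmp(src1, src2, ps, VL, dest_init, zeroing):
    result = 0 if zeroing else dest_init
    for i in range(VL):
        if ps & (1 << i):
            if src1[i] == src2[i]:
                result |= 1 << i      # passing tests always set their bit
            elif zeroing:
                result &= ~(1 << i)   # failing tests cleared only when zeroing
        elif zeroing:
            result &= ~(1 << i)       # masked-out elements forced to zero
    return result

a = [5, 6, 7, 8]
b = [5, 0, 7, 0]
ps = 0b0101  # only elements 0 and 2 are tested

print(bin(vec_cmp(a, b, ps, 4, dest_init=0b1010, zeroing=True)))   # 0b101
print(bin(vec_cmp(a, b, ps, 4, dest_init=0b1010, zeroing=False)))  # 0b1111
```

With zeroing, the prior contents of the destination are irrelevant; without it, the stale bits 1 and 3 from `dest_init` survive.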
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);
With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else:
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if zeroing and not (ps & (1<<i)):
            result &= ~(1<<i);
        else if (ps & (1<<i)):
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if exists(rd):
        preg[rd] = result # store in destination
    if result == ps:
        goto branch # branch taken if all (active) tests succeed
* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. Also: branch goes ahead
if all compares are successful.
Note also that where normally, predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.
### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms it
has more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.
### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated, based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 becomes, with a little thought, quite
logical, although at first it is counterintuitive. Clearly it is **not**
recommended to redirect x0 with a CSR register entry, however as a means to
opaquely obtain a predication target it is the only sensible option that
does not involve additional special CSRs (or, worse, additional special
opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.
## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.
The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not shown (TODO)
* zero predication is also not shown (TODO).
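The twin-predication pattern above translates directly into a runnable sketch. This is illustrative only (names, masks and register values invented; zeroing, elwidth and the predicate-exhaustion guard are omitted as in the pseudo-code); it shows two of the behaviours from the pattern: a VSPLAT and a src-predicated gather-style copy.

```python
# Runnable model of the twin-predicated MV loop: independent skipping of
# masked-out source elements (i) and destination elements (j).
def twin_mv(reg, rd, rs, rd_isvec, rs_isvec, pd, ps, VL):
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while not (ps & (1 << i)):  # skip masked-out src elements
                i += 1
        if rd_isvec:
            while not (pd & (1 << j)):  # skip masked-out dest elements
                j += 1
        reg[rd + j] = reg[rs + i]
        if rs_isvec:
            i += 1
        if rd_isvec:
            j += 1
        else:
            break  # scalar destination: single copy

# scalar src, vector dest, all-ones predicates: VSPLAT
reg = [0] * 16
reg[1] = 99
twin_mv(reg, 8, 1, rd_isvec=True, rs_isvec=False, pd=0b1111, ps=0b1111, VL=4)
print(reg[8:12])  # [99, 99, 99, 99]

# vector src with sparse src predicate, vector dest: gather-style copy
reg2 = [0] * 16
reg2[0:4] = [10, 20, 30, 40]
twin_mv(reg2, 8, 0, rd_isvec=True, rs_isvec=True, pd=0b1111, ps=0b1010, VL=4)
print(reg2[8:12])  # [20, 40, 0, 0]
```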
### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

    15    12 | 11    7 | 6    2 | 1  0 |
     funct4  |   rd    |   rs   |  op  |
     C.MV    |  dest   |  src   |  C0  |
A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
There are several different instructions from RVV that are covered by
this one single instruction:

| src    | dest   | predication | op             |
| ------ | ------ | ----------- | -------------- |
| scalar | vector | none        | VSPLAT         |
| scalar | vector | destination | sparse VSPLAT  |
| scalar | vector | 1-bit dest  | VINSERT        |
| vector | scalar | 1-bit? src  | VEXTRACT       |
| vector | vector | none        | VCOPY          |
| vector | vector | src         | Vector Gather  |
| vector | vector | dest        | Vector Scatter |
| vector | vector | src & dest  | Gather/Scatter |
| vector | vector | src == dest | sparse VCOPY   |
Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV, because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default, the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size, covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)
### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable, i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element, and the higher 32 bits *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1's element width is to be taken as 32.

Similar rules apply to the destination register.
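The FCVT.S.L example above can be sketched in software. This is purely illustrative (the function name is invented, and Python's `float` stands in for the single-precision conversion result): a 64-bit source register is split into elwidth-sized lanes, each converted independently.

```python
# Sketch: packed-SIMD re-interpretation of a 64-bit FCVT source when the
# source elwidth is over-ridden to default/2 (i.e. 32-bit lanes).
def fcvt_s_l_packed(rs1_val, src_elwidth_bits):
    """Split a 64-bit register value into lanes and convert each to FP."""
    mask = (1 << src_elwidth_bits) - 1
    nlanes = 64 // src_elwidth_bits
    return [float((rs1_val >> (lane * src_elwidth_bits)) & mask)
            for lane in range(nlanes)]

# one 64-bit register holding the packed 32-bit integers 3 (low) and 7 (high):
rs1 = (7 << 32) | 3
print(fcvt_s_l_packed(rs1, 32))  # [3.0, 7.0]
```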
## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV. However, just as with C.MV, it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD and STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
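The srcbase decision can be modelled directly. A sketch under stated assumptions (addresses, register numbers and the dict-based memory model are invented; predication and zeroing are left out so the two addressing modes stand out clearly):

```python
# Sketch of the LD decision: scalar rs gives "unit stride" (base + i*XLEN/8),
# vector rs treats each source element as a distinct address.
XLEN = 64

def op_ld(ireg, mem, rd, rs, rd_isvec, rs_isvec, imm_offs, VL):
    for i in range(VL):
        if rs_isvec:
            srcbase = ireg[rs + i]               # indirect: per-element address
        else:
            srcbase = ireg[rs] + i * XLEN // 8   # unit stride, byte offsets
        ireg[rd + i] = mem[srcbase + imm_offs]
        if not rs_isvec and not rd_isvec:
            break                                # scalar-scalar: plain LD

mem = {0x100: 11, 0x108: 22, 0x110: 33, 0x200: 77}
ireg = [0] * 32

ireg[5] = 0x100  # scalar base address -> unit stride
op_ld(ireg, mem, rd=8, rs=5, rd_isvec=True, rs_isvec=False, imm_offs=0, VL=3)
print(ireg[8:11])  # [11, 22, 33]

ireg[5:8] = [0x200, 0x110, 0x100]  # vector of addresses -> indirection
op_ld(ireg, mem, rd=8, rs=5, rd_isvec=True, rs_isvec=True, imm_offs=0, VL=3)
print(ireg[8:11])  # [77, 33, 11]
```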
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.
## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.
# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in c-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that, when setting VL greater than 8 with a bitwidth of 8, for example,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:
    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
    } reg_t;

    reg_t int_regfile[128];

where when accessing any individual regfile[n].b entry it is permitted
(in c) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.

It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if access beyond the "real" register
bytes is ever attempted.
Now we may modify the pseudo-code of an operation where all element bitwidths
have been set to the same size, where this pseudo-code is otherwise identical
to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           // TODO, calculate if over-run occurs, for each elwidth
           if (elwidth == 8) {
              int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                       int_regfile[rs2].b[irs2];
           } else if (elwidth == 16) {
              int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                       int_regfile[rs2].s[irs2];
           } else if (elwidth == 32) {
              int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                       int_regfile[rs2].i[irs2];
           } else { // elwidth == 64
              int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                       int_regfile[rs2].l[irs2];
           }
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.
However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also effectively turn FCVT
  into a different operation.)
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.
In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector) { ird += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

Whilst the specifics of the sign-extension and zero-extension pseudocode
calls are left out, due to each operation being different, the above
should make it clear that:

* the source operands are extended out to the maximum bitwidth of all
source operands
* the operation takes place at that maximum source bitwidth (the
destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
stored in the destination. i.e. truncation (if required) to the
destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
(standard, scalar) register file entry is taken up, i.e. the
element is either sign-extended or zero-extended to cover the
full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in less bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

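
The three-phase rule above (widen sources to the maximum source elwidth, operate at that width, then extend or truncate to the destination elwidth) can be captured as a minimal executable sketch. This is an illustration only, not the normative pseudocode: the function name `polymorphic_add` and the helper `ext` are inventions for this example.

```python
def ext(v, w_from, w_to, signed):
    """Truncate v to w_from bits, then sign- or zero-extend to w_to bits."""
    v &= (1 << w_from) - 1
    if signed and v & (1 << (w_from - 1)):
        v |= ((1 << w_to) - 1) ^ ((1 << w_from) - 1)  # replicate sign bit
    return v

def polymorphic_add(src1, w1, src2, w2, wd, signed=False):
    maxw = max(w1, w2)
    s1 = ext(src1, w1, maxw, signed)          # phase 1: widen sources
    s2 = ext(src2, w2, maxw, signed)
    result = (s1 + s2) & ((1 << maxw) - 1)    # phase 2: operate at maxw
    return ext(result, maxw, wd, signed)      # phase 3: fit to dest width
```

Note that with both sources at 8-bit, `0xFF + 0x01` wraps to zero *at the operation width* even when the destination is 64-bit: truncation to the operation width happens before, not after, the extension to the destination.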
## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or appropriate) is stored if the result
is beyond the range of the destination, and, just as with scalar
operations, the floating-point flag is raised (FCSR). And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that:

* The operation bitwidth is determined by the maximum bitwidth
of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.

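
The two rules can be sketched in a few lines. This is a hypothetical illustration (the name `poly_sll` is not part of the specification): note that RS2 is masked at the *maximum source* width, and the shifted result wraps at that same width before being fitted to the destination.

```python
def poly_sll(rs1, w1, rs2, w2, wd):
    opw = max(w1, w2)                    # operation width: max of the sources
    shamt = rs2 & (opw - 1)              # RS2 truncated to range 0..opw-1
    # shift at the operation width, wrapping at opw bits
    result = ((rs1 & ((1 << opw) - 1)) << shamt) & ((1 << opw) - 1)
    return result & ((1 << wd) - 1)      # extend/truncate to dest width
```

With a 16-bit RS1 and an 8-bit RS2, a shift amount of 17 is masked down to 1 (17 & 15), regardless of the 64-bit destination width.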
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.

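
A minimal sketch of the unsigned case under the stated rule (the helper name `poly_mulhu` is an invention for this example, not part of the specification): multiply at the maximum source width, then shift the double-width product down by that same maximum.

```python
def poly_mulhu(rs1, w1, rs2, w2):
    opw = max(w1, w2)                # operation width: max of the sources
    a = rs1 & ((1 << w1) - 1)        # operands at their own elwidths
    b = rs2 & ((1 << w2) - 1)
    # top half of the double-width product, at the operation width
    return ((a * b) >> opw) & ((1 << opw) - 1)
```

For 8-bit x 8-bit, 0xFF * 0xFF = 0xFE01, so MULHU delivers 0xFE, matching the first bullet above.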
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth);             // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth         // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock;             // modulo
    return &mem[srcbase + imm + offs];  // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

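
The block-addressing arithmetic can be checked with a small worked example (the function name `element_address` is hypothetical, for illustration only). For a LW (opwidth=32) with a source elwidth of 16, two elements are consumed per 32-bit block before the address index advances; `max()` rather than `min()` keeps elsperblock at a minimum of 1 when the element width exceeds the operation width.

```python
def element_address(i, opwidth, elwidth):
    # max(), not min(): elsperblock must never drop below 1, even when
    # the element bitwidth exceeds the width of the LOAD operation itself
    elsperblock = max(1, opwidth // elwidth)
    block = i // elsperblock       # which source address register to use
    offs = i % elsperblock         # element offset within that block
    return block, offs
```

Elements 0 and 1 thus come from the first address register, elements 2 and 3 from the second; for an LB (opwidth=8) with elwidth=16, every element gets its own address register.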
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16...
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if (unsigned):
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Notes:

* when comparing against for example the twin-predicated c.mv
pseudo-code, the pattern of independent incrementing of rd and rs
is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
a destination that is not vectorised (marked as scalar) will
result in the element being fully sign-extended or zero-extended
out to the full register file bitwidth (XLEN). When the source
is also marked as scalar, this is how the compatibility with
standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* with a VL of 7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* on RV64, where XLEN=64 is assumed.

First, the memory table: due to the
element width being 16 and the operation being LD (64), the 64 bits
loaded from memory are subdivided into groups of **four** elements.
And, with VL being 7 (deliberately to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

| addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
|------|--------|--------|--------|--------|--------|--------|--------|--------|
| @x5  | elem 0 || elem 1 || elem 2 || elem 3 ||
| @x6  | elem 4 || elem 5 || elem 6 || not loaded ||

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32:

| byte 3 | byte 2 | byte 1 | byte 0 |
|--------|--------|--------|--------|
| 0x0    | 0x0    | elem0 ||
| 0x0    | 0x0    | elem1 ||
| 0x0    | 0x0    | elem2 ||
| 0x0    | 0x0    | elem3 ||
| 0x0    | 0x0    | elem4 ||
| 0x0    | 0x0    | elem5 ||
| 0x0    | 0x0    | elem6 ||

Lastly, the elements are stored in contiguous blocks, as if x8 was also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

| reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
|------|--------|--------|--------|--------|--------|--------|--------|--------|
| x8   | 0x0    | 0x0    | elem 1 || 0x0    | 0x0    | elem 0 ||
| x9   | 0x0    | 0x0    | elem 3 || 0x0    | 0x0    | elem 2 ||
| x10  | 0x0    | 0x0    | elem 5 || 0x0    | 0x0    | elem 4 ||
| x11  | **UNMODIFIED** |||| 0x0    | 0x0    | elem 6 ||

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
lowest 32 bits of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8-bit
* default bitwidth for a given operation *divided* by two
* default bitwidth for a given operation *multiplied* by two

At first glance this seems completely inadequate: for example, RV64
cannot possibly operate on 16-bit operations, because 64 divided by
2 is 32. However, the reader may have forgotten that it is possible,
at run-time, to switch a 64-bit application into 32-bit mode, by
setting UXL. Once switched, opcodes that formerly had 64-bit
meanings now have 32-bit meanings, and in this way, "default/2"
now reaches **16-bit** where previously it meant "32-bit".

There is however an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(out to the full XLEN width).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of a
multi-element operation.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111
i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
was a full 8-wide batch of 8 adds
* SIMD architecture passes top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.

This requires a read on rd, however this is required anyway in order
to support non-zeroing mode.

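
The hidden-predicate derivation in the steps above can be sketched as follows. This is an illustration of one possible micro-architectural strategy, not a requirement of the specification; the function name `hidden_predicate` is an invention for this example.

```python
def hidden_predicate(vl, elwidth, regwidth=64):
    lanes = regwidth // elwidth    # full SIMD batch size (8 for 8-bit in RV64)
    mask = (1 << vl) - 1           # bottom VL bits set, top lanes clear
    return mask, lanes
```

For VL=3 with 8-bit elements this yields the mask 0b00000111 over 8 lanes: the top 5 lanes are predicated out and pass through unmodified.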
## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q, when elwidth is set
to non-default values, are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know
the difference, such that when VL=8 an ADD.D instruction completes in
2 cycles (or more) rather than one, where an ADD.Q issued instead on
such simpler microarchitectures would complete in one.

## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".

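
The add/addw contrast can be made concrete with a small sketch (the names `widen` and `poly_op` are inventions for this illustration): with an 8-bit rs1 of 0xFF and a 16-bit rs2, add zero-extends the 8-bit operand to 0x00FF, whereas addw sign-extends it to 0xFFFF (-1), producing different results.

```python
def widen(v, w_from, w_to, signed):
    v &= (1 << w_from) - 1
    if signed and v & (1 << (w_from - 1)):
        v |= ((1 << w_to) - 1) ^ ((1 << w_from) - 1)  # replicate sign bit
    return v

def poly_op(rs1, w1, rs2, w2, signed):
    opw = max(w1, w2)
    total = widen(rs1, w1, opw, signed) + widen(rs2, w2, opw, signed)
    return total & ((1 << opw) - 1)      # operation performed at opw
```

Unsigned (add): 0x00FF + 1 = 0x0100. Signed (addw): 0xFFFF + 1 wraps to 0x0000.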
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ xlen bits. sign-extend the 32-bit result out to xlen.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed through en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and is covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip ahead to the next element with its predicate bit set
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector) { ird += 1; }
                if (int_vec[rs1].isvector) { irs1 += 1; }
                if (int_vec[rs2].isvector) { irs2 += 1; }
            if i == VL: break
        if (predval & 1<<i)
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing: # predicate bit clear: store a zero instead
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector) { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.

## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, if a predicate bit is clear it indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
address).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in the
AND of the source and destination predicates: wherever either the
source predicate bit OR the destination predicate bit is 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

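
The combined effect can be checked exhaustively with a tiny truth-table sketch (the function name `element_is_zeroed` is an invention for this illustration, assuming zeroing is set on both source and destination):

```python
def element_is_zeroed(ps_bit, pd_bit):
    # source zeroing passes a zero through when the source predicate
    # bit is clear; destination zeroing stores a zero when the
    # destination predicate bit is clear
    return (not ps_bit) or (not pd_bit)
```

Only the combination ps=1, pd=1 leaves the copied data un-zeroed, i.e. data survives exactly where the AND of the two predicates is 1.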
Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if (!zerosrc || (ps & 1<<i))
                sourcedata = ireg[rs+i];
            else
                sourcedata = 0
            if (!zerodst || (pd & 1<<j))
                ireg[rd+j] <= sourcedata
            else
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                break

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

2198 A "HINT" is an operation that has no effect on architectural state,
2199 where its use may, by agreed convention, give advance notification
2200 to the microarchitecture: branch prediction notification would be
2201 a good example. Usually HINTs are where rd=x0.
With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.
No specific hints are yet defined in Simple-V.
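Since no hints are defined, the following is purely speculative: a sketch of how a decoder might reinterpret VL and the predicate register as hint parameters when rd=x0 is tagged as vectorised (`decode_hint` and its return shape are invented for illustration):

```python
def decode_hint(rd, rd_isvec, VL, pred_reg=None):
    """Speculative sketch: a vectorised operation writing to rd=x0 has no
    architectural effect, so VL (and, if predication is set, the contents
    of the predicate register) could be repurposed as hint parameters."""
    if rd != 0 or not rd_isvec:
        return None  # not a parallel hint: a normal (possibly vector) op
    return {'hint_id': VL, 'parameter': pred_reg}
```

This gives up to XLEN distinct hint numbers via VL, plus an arbitrary XLEN-bit parameter via the predicate register, without consuming any opcode space.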
# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.
## Common limitations

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.
## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst
not strictly an "option", this is worth noting.
## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.
## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).
## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.
RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.
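The trap-based approach can be sketched as follows. This is a minimal software model, not a definitive implementation: `maybe_trap_and_emulate`, the shape of `decoded`, and the `int_csr` table layout are all invented for illustration. Hardware only detects a tagged register and traps; the handler then performs the VL loop, redirecting each tagged register to contiguously incrementing register numbers:

```python
def maybe_trap_and_emulate(decoded, int_csr, VL, execute_scalar):
    """RV32E-style minimal implementation (illustrative): hardware merely
    detects that an operand register is tagged as vectorised and traps;
    the trap handler then performs the VL element loop in software."""
    names = ('rd', 'rs1', 'rs2')
    if not any(int_csr[decoded[n]]['isvec'] for n in names):
        return False  # no register is tagged: execute as a normal scalar op
    for element in range(VL):
        # each tagged register is redirected to its element'th neighbour
        remapped = {n: decoded[n] + element
                       if int_csr[decoded[n]]['isvec'] else decoded[n]
                    for n in names}
        execute_scalar(remapped)
    return True
```

Untagged registers stay fixed across the loop, so a tagged rd with scalar sources behaves like a broadcast-style splat, exactly as in the hardware loop.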
## RV128

RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, each spanning 2 128-bit registers, and predication
of total length 128 bits, given that XLEN is now 128.
# Under consideration <a name="issues"></a>
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the
recommendation is:
* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
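The two rules above can be demonstrated with a small bit-packing model. This is a sketch under stated assumptions (16-bit elements in a 64-bit register; `pack_int_elements` / `pack_fp_elements` are illustrative names, not spec functions):

```python
def pack_int_elements(elems, elwidth=16, regwidth=64):
    """Pack used elements into an integer register value: the used element
    closest to the MSB is sign-extended across the unused elements."""
    n = regwidth // elwidth
    value = 0
    for k, e in enumerate(elems):
        value |= (e & ((1 << elwidth) - 1)) << (k * elwidth)
    top = elems[-1] & ((1 << elwidth) - 1)
    if top >> (elwidth - 1):                 # top used element is negative
        for k in range(len(elems), n):       # fill unused slots with 1s
            value |= ((1 << elwidth) - 1) << (k * elwidth)
    return value

def pack_fp_elements(elems, elwidth=16, regwidth=64):
    """FP variant: unused elements are written as all ones, in the style
    of the existing NaN-boxing convention for narrow FP values."""
    value = (1 << regwidth) - 1              # start from all ones
    for k, e in enumerate(elems):
        mask = ((1 << elwidth) - 1) << (k * elwidth)
        value = (value & ~mask) | ((e & ((1 << elwidth) - 1)) << (k * elwidth))
    return value
```

With 3 used 16-bit elements whose top element has its sign bit set, the fourth (unused) 16-bit slot of the integer register reads as `0xFFFF`; a single 16-bit FP element leaves the upper 48 bits all ones.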
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.
> And what about instructions like JALR?

Answer: they are not vectorised, so they are not a problem.
* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default
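The rule in the two bullets above can be captured as a small selection function (a sketch only: `operand_bitwidth` and the string-valued group/elwidth encodings are illustrative):

```python
def operand_bitwidth(opcode_group, elwidth, XLEN):
    """Sketch of the bitwidth rule: with elwidth left at default, ops in
    the RV32 group use XLEN-wide operands, while RV32I-group ops use 32
    bits regardless of XLEN; an explicit elwidth overrides both."""
    if elwidth != 'default':
        return elwidth
    return 32 if opcode_group == 'RV32I' else XLEN
```

On an RV64 implementation this makes RV32-group operands 64-bit by default while RV32I-group operands remain fixed at 32 bits.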
TODO: update elwidth to be default / 8 / 16 / 32
TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.
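The proposed 2-bit encoding in the TODO above decodes as follows (a trivial sketch; `decode_regfile_length` is an illustrative name):

```python
def decode_regfile_length(bits):
    """Decode the proposed 2-bit regfile-length field from the info
    register: 00=32, 01=64, 10=128, 11=reserved."""
    lengths = {0b00: 32, 0b01: 64, 0b10: 128}
    if bits not in lengths:
        raise ValueError("reserved encoding")
    return lengths[bits]
```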
Push/pop of vector config state:
<https://groups.google.com/d/msg/comp.arch/bGBeaNjAKvc/z2d_cST7AgAJ>

When Bank in CFG is altered, shift the "addressing" of Reg and
Pred CSRs to match. i.e. treat the Reg and Pred CSRs as a