[libreriscv.git] / simple_v_extension / specification.mdwn
1 # Simple-V (Parallelism Extension Proposal) Specification
2
* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
The primary target is mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size and to reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited to implementation in RV32E, and also to situations
where the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources in silicon, whilst still
conforming strictly with the API. A good candidate for punting to
software would be the polymorphic element width capability, for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
* Standard RV instructions are "prefixed", either in a 48-bit format or by
a variable-length VLIW-like prefix, indicating
which registers are "tagged" as "vectorised".
81 * A "Vector Length" CSR is set, indicating the span of any future
82 "parallel" operations.
83 * If any operation (a **scalar** standard RV opcode)
84 uses a register that has been so "marked",
85 a hardware "macro-unrolling loop" is activated, of length
86 VL, that effectively issues **multiple** identical instructions
87 using contiguous sequentially-incrementing registers.
88 * **Whether they be executed sequentially or in parallel or a
89 mixture of both or punted to software-emulation in a trap handler
90 is entirely up to the implementor**.
91
92 In this way an entire scalar algorithm may be vectorised with
93 the minimum of modification to the hardware and to compiler toolchains.
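
The principle above can be modelled in software. The following is an
illustrative, non-normative sketch only, in which `regfile`, `tagged` and
`vl` are invented stand-ins for the register file, the register-tagging
CSR state and the Vector Length CSR:

```python
# Non-normative model of SV macro-unrolling for a scalar "add rd, rs1, rs2".
# If no operand is tagged, the instruction behaves as plain scalar RV;
# otherwise a hardware loop of length VL is activated, with tagged operands
# stepping through contiguous register numbers and untagged ones held fixed.

def sv_add(regfile, tagged, vl, rd, rs1, rs2):
    if not (tagged.get(rd) or tagged.get(rs1) or tagged.get(rs2)):
        regfile[rd] = regfile[rs1] + regfile[rs2]  # plain scalar behaviour
        return
    for i in range(vl):  # the hardware "macro-unrolling loop"
        d = rd + i if tagged.get(rd) else rd
        s1 = rs1 + i if tagged.get(rs1) else rs1
        s2 = rs2 + i if tagged.get(rs2) else rs2
        regfile[d] = regfile[s1] + regfile[s2]

regs = list(range(32))  # regfile[i] == i, purely for demonstration
sv_add(regs, {3: True, 8: True, 16: True}, 4, 3, 8, 16)
```

Note that the same scalar opcode drives both paths: whether the loop is
executed sequentially, in parallel, or trapped to software is invisible here.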
94
95 To reiterate: **There are *no* new opcodes**
96
97 # CSRs <a name="csrs"></a>
98
The Register and Predication CSR key-value tables (described in the
sections below) provide the "tagging" of registers. In addition there is:

* An optional "reshaping" CSR key-value table which remaps from a 1D
linear shape to 2D or 3D, including full transposition.
101
102 There are also four additional CSRs for User-Mode:
103
104 * MVL (the Maximum Vector Length)
105 * VL (which has different characteristics from standard CSRs)
106 * SUBVL (effectively a kind of SIMD)
107 * STATE (useful for saving and restoring during context switch,
108 and for providing fast transitions)
109
110 There are also three additional CSRs for Supervisor-Mode:
111
112 * SMVL
113 * SVL
114 * SSTATE
115
116 And likewise for M-Mode:
117
118 * MMVL
119 * MVL
120 * MSTATE
121
122 Both Supervisor and M-Mode have their own (small) CSR register and
123 predication tables of only 4 entries each.
124
125 The access pattern for these groups of CSRs in each mode follows the
126 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
127
128 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
129 * In S-Mode, accessing and changing of the M-Mode CSRs is identical
130 to changing the S-Mode CSRs. Accessing and changing the U-Mode
131 CSRs is permitted.
* In U-Mode, accessing and changing of the M-Mode and S-Mode CSRs
is prohibited.
134
135 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
136 M-Mode MVL, the M-Mode STATE and so on that influences the processor
137 behaviour. Likewise for S-Mode, and likewise for U-Mode.
138
139 This has the interesting benefit of allowing M-Mode (or S-Mode)
140 to be set up, for context-switching to take place, and, on return
141 back to the higher privileged mode, the CSRs of that mode will be
142 exactly as they were. Thus, it becomes possible for example to
143 set up CSRs suited best to aiding and assisting low-latency fast
144 context-switching *once and only once*, without the need for
145 re-initialising the CSRs needed to do so.
146
147 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
148
149 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
150 is variable length and may be dynamically set. MVL is
151 however limited to the regfile bitwidth XLEN (1-32 for RV32,
152 1-64 for RV64 and so on).
153
154 The reason for setting this limit is so that predication registers, when
155 marked as such, may fit into a single register as opposed to fanning out
156 over several registers. This keeps the implementation a little simpler.
157
158 The other important factor to note is that the actual MVL is **offset
159 by one**, so that it can fit into only 6 bits (for RV64) and still cover
160 a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
161 means that MVL==1. When setting the MVL CSR to 3, this actually means
162 that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
163 section, where there are subtle differences between CSRRW and CSRRWI.
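
The offset-by-one encoding can be sketched as follows (helper names
invented for illustration): the 6-bit CSR field stores MVL minus one, so
that on RV64 the values 0 to 63 cover MVL from 1 to 64 (XLEN):

```python
# Sketch of the offset-by-one MVL encoding for RV64 (XLEN == 64).

def mvl_csr_to_value(field):
    return field + 1       # writing 0 to the MVL CSR means MVL == 1

def mvl_value_to_csr(mvl):
    assert 1 <= mvl <= 64  # MVL is limited to XLEN (64 for RV64)
    return mvl - 1         # e.g. MVL == 4 is held as the field value 3
```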
164
165 ## Vector Length (VL) <a name="vl" />
166
167 VSETVL is slightly different from RVV. Like RVV, VL is set to be within
168 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
169
170 VL = rd = MIN(vlen, MVL)
171
172 where 1 <= MVL <= XLEN
173
174 However just like MVL it is important to note that the range for VL has
175 subtle design implications, covered in the "CSR pseudocode" section
176
177 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
178 to switch the entire bank of registers using a single instruction (see
179 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
180 is down to the fact that predication bits fit into a single register of
181 length XLEN bits.
182
183 The second change is that when VSETVL is requested to be stored
184 into x0, it is *ignored* silently (VSETVL x0, x5)
185
186 The third and most important change is that, within the limits set by
187 MVL, the value passed in **must** be set in VL (and in the
188 destination register).
189
190 This has implication for the microarchitecture, as VL is required to be
191 set (limits from MVL notwithstanding) to the actual value
192 requested. RVV has the option to set VL to an arbitrary value that suits
193 the conditions and the micro-architecture: SV does *not* permit this.
194
195 The reason is so that if SV is to be used for a context-switch or as a
196 substitute for LOAD/STORE-Multiple, the operation can be done with only
197 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
198 single LD/ST operation). If VL does *not* get set to the register file
199 length when VSETVL is called, then a software-loop would be needed.
200 To avoid this need, VL *must* be set to exactly what is requested
201 (limits notwithstanding).
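
The rule can be stated as a one-line, non-normative model: within the MVL
ceiling, the requested length **must** become the new VL (and the value
written to the destination register), in contrast to RVV where the
implementation may legally choose a smaller VL:

```python
# Non-normative model of SV's VSETVL rule: only the MVL limit may reduce
# the requested vector length; otherwise the request is honoured exactly.

def sv_vsetvl(requested, mvl):
    assert requested >= 1
    vl = min(requested, mvl)  # MVL is the sole permitted ceiling
    return vl                 # written to both VL and the destination reg
```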
202
203 Therefore, in turn, unlike RVV, implementors *must* provide
204 pseudo-parallelism (using sequential loops in hardware) if actual
205 hardware-parallelism in the ALUs is not deployed. A hybrid is also
206 permitted (as used in Broadcom's VideoCore-IV) however this must be
207 *entirely* transparent to the ISA.
208
209 The fourth change is that VSETVL is implemented as a CSR, where the
210 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
211 the *new* value in the destination register, **not** the old value.
212 Where context-load/save is to be implemented in the usual fashion
213 by using a single CSRRW instruction to obtain the old value, the
214 *secondary* CSR must be used (SVSTATE). This CSR behaves
215 exactly as standard CSRs, and contains more than just VL.
216
217 One interesting side-effect of using CSRRWI to set VL is that this
218 may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
221
222 Note that when VL is set to 1, all parallel operations cease: the
223 hardware loop is reduced to a single element: scalar operations.
224
225 ## STATE
226
227 This is a standard CSR that contains sufficient information for a
228 full context save/restore. It contains (and permits setting of)
229 MVL, VL, the destination element offset of the current parallel
230 instruction being executed, and, for twin-predication, the source
element offset as well. Interestingly it may hypothetically
also be used to make the immediately-following instruction skip a
certain number of elements; however the recommended methods for doing
this are predication or the offset mode of the REMAP CSRs.
235
236 Setting destoffs and srcoffs is realistically intended for saving state
237 so that exceptions (page faults in particular) may be serviced and the
238 hardware-loop that was being executed at the time of the trap, from
239 user-mode (or Supervisor-mode), may be returned to and continued from
240 where it left off. The reason why this works is because setting
241 User-Mode STATE will not change (not be used) in M-Mode or S-Mode
242 (and is entirely why M-Mode and S-Mode have their own STATE CSRs).
243
244 The format of the STATE CSR is as follows:
245
| (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
247 | -------- | -------- | -------- | -------- | ------- | ------- |
248 | rsvd | subvl | destoffs | srcoffs | vl | maxvl |
249
250 When setting this CSR, the following characteristics will be enforced:
251
252 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
253 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
254 * **SUBVL** which sets a SIMD-like quantity, a grouping quantity.
255 * **srcoffs** will be truncated to be within the range 0 to VL-1
256 * **destoffs** will be truncated to be within the range 0 to VL-1
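
The field layout above may be sketched as a pack/unpack pair (helper names
invented; VL and MAXVL are held minus-one, as described in the "MVL and VL
Pseudocode" section; the treatment of the subvl field here is an assumption
for illustration only):

```python
# Illustrative pack/unpack of the STATE CSR fields per the layout table.

def pack_state(maxvl, vl, srcoffs, destoffs, subvl_field=0):
    return ((maxvl - 1)           # bits 5..0: maxvl, offset by one
            | (vl - 1) << 6       # bits 11..6: vl, offset by one
            | srcoffs << 12       # bits 17..12: source element offset
            | destoffs << 18      # bits 23..18: destination element offset
            | subvl_field << 24)  # bits 26..24: subvl (assumed raw here)

def unpack_state(state):
    return ((state & 0x3f) + 1,         # maxvl
            ((state >> 6) & 0x3f) + 1,  # vl
            (state >> 12) & 0x3f,       # srcoffs
            (state >> 18) & 0x3f)       # destoffs
```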
257
258 ## MVL and VL Pseudocode
259
260 The pseudo-code for get and set of VL and MVL are as follows:
261
    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes returning the new value NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL
277
278 Note that where setting MVL behaves as a normal CSR, unlike standard CSR
279 behaviour, setting VL will return the **new** value of VL **not** the old
280 one.
281
282 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
283 maximise the effectiveness, an immediate of 0 is used to set VL=1,
284 an immediate of 1 is used to set VL=2 and so on:
285
    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)
291
292 However for CSRRW the following pseudocode is used for MVL and VL,
293 where setting the value to zero will cause an exception to be raised.
294 The reason is that if VL or MVL are set to zero, the STATE CSR is
295 not capable of returning that value.
296
    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_vl_csr(value, rd)
308
309 In this way, when CSRRW is utilised with a loop variable, the value
310 that goes into VL (and into the destination register) may be used
311 in an instruction-minimal fashion:
312
    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3         # sets MVL == **4** (not 3)
    j zerotest            # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0      # vl = t0 = min(mvl, a0)
    ld a3, a1             # load 4 registers a3-6 from x
    slli t1, t0, 3        # t1 = vl * 8 (in bytes)
    ld a7, a2             # load 4 registers a7-10 from y
    add a1, a1, t1        # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0        # n -= vl (t0)
    st a7, a2             # store 4 registers a7-10 to y
    add a2, a2, t1        # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop         # repeat if n != 0
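
The strip-mining pattern in the loop above can be modelled in a few lines:
each pass sets VL = min(MVL, n) and consumes VL elements, so any n,
including a final partial chunk, is handled with no scalar cleanup code
(`strip_mine` is an invented illustrative helper, not part of the spec):

```python
# Software model of the strip-mined loop: returns the sequence of VL
# values that the CSRRW VL, t0, a0 instruction would produce.

def strip_mine(n, mvl=4):
    chunks = []
    while n > 0:
        vl = min(mvl, n)   # CSRRW VL, t0, a0
        chunks.append(vl)  # one ld/fmadd/st group covering vl elements
        n -= vl            # sub a0, a0, t0
    return chunks
```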
329
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
333
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        srcoffs = value[17:12]
        destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]
346
347 In both cases, whilst CSR read of VL and MVL return the exact values
348 of VL and MVL respectively, reading and writing the STATE CSR returns
349 those values **minus one**. This is absolutely critical to implement
350 if the STATE CSR is to be used for fast context-switching.
351
352 ## Register CSR key-value (CAM) table <a name="regcsrtable" />
353
354 The purpose of the Register CSR table is four-fold:
355
* To mark integer and floating-point registers as requiring "redirection"
if ever used as a source or destination in any given operation.
This involves a level of indirection through a 5-to-7-bit lookup table,
such that **unmodified** operands of 5 bits (3 for Compressed) may
access up to **128** registers.
361 * To indicate whether, after redirection through the lookup table, the
362 register is a vector (or remains a scalar).
363 * To over-ride the implicit or explicit bitwidth that the operation would
364 normally give the register.
365
366 16 bit format:
367
368 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
369 | ------ | | - | - | - | ------ | ------- |
370 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
371 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
372 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
373 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
374
375 8 bit format:
376
377 | RegCAM | | 7 | (6..5) | (4..0) |
378 | ------ | | - | ------ | ------- |
379 | 0 | | i/f | vew0 | regnum |
380
381 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
382 to integer registers; 0 indicates that it is relevant to floating-point
383 registers.
384
385 The 8 bit format is used for a much more compact expression. "isvec" is implicit and, as in [[sv-prefix-proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.
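
The 8-bit entry decode may be sketched as follows (the function name is
invented for illustration): field positions follow the 8-bit format table,
and the implicit target vector register is regnum shifted left by two, in
contrast to the explicit regidx of the 16-bit format:

```python
# Decode of an 8-bit Register-CSR entry: i/f in bit 7, vew in bits 6..5,
# regnum in bits 4..0, with the vector target implicitly regnum << 2.

def decode_regcam8(entry):
    return {
        "int_not_fp": bool((entry >> 7) & 1),  # i/f: 1 = integer, 0 = FP
        "vew":        (entry >> 5) & 0x3,      # element-width override
        "regnum":     entry & 0x1f,            # 5-bit key from the opcode
        "target":     (entry & 0x1f) << 2,     # implicit vector target reg
    }
```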
386
387 vew has the following meanings, indicating that the instruction's
388 operand size is "over-ridden" in a polymorphic fashion:
389
390 | vew | bitwidth |
391 | --- | ------------------- |
392 | 00 | default (XLEN/FLEN) |
393 | 01 | 8 bit |
394 | 10 | 16 bit |
395 | 11 | 32 bit |
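
The table above reduces to a simple lookup (XLEN/FLEN is assumed to be 64
for the default row, purely for illustration):

```python
# vew field to element bitwidth, per the table above.

def vew_to_bitwidth(vew, xlen=64):
    return {0b00: xlen, 0b01: 8, 0b10: 16, 0b11: 32}[vew]
```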
396
397 As the above table is a CAM (key-value store) it may be appropriate
398 (faster, implementation-wise) to expand it as follows:
399
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey         // INT/FP src/dst reg in opcode
        tb[idx].elwidth = CSRvec[i].elwidth
        tb[idx].regidx = CSRvec[i].regidx     // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed = CSRvec[i].packed     // SIMD or not
409
410 The actual size of the CSR Register table depends on the platform
411 and on whether other Extensions are present (RV64G, RV32E, etc.).
412 For details see "Subsets" section.
413
419 ## Predication CSR <a name="predication_csr_table"></a>
420
421 TODO: update CSR tables, now 7-bit for regidx
422
The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.
428
429 * regidx is the actual register that in combination with the
430 i/f flag, if that integer or floating-point register is referred to,
431 results in the lookup table being referenced to find the predication
432 mask to use on the operation in which that (regidx) register has
433 been used
434 * predidx (in combination with the bank bit in the future) is the
435 *actual* register to be used for the predication mask. Note:
436 in effect predidx is actually a 6-bit register address, as the bank
437 bit is the MSB (and is nominally set to zero for now).
438 * inv indicates that the predication mask bits are to be inverted
439 prior to use *without* actually modifying the contents of the
440 register itself.
441 * zeroing is either 1 or 0, and if set to 1, the operation must
442 place zeros in any element position where the predication mask is
443 set to zero. If zeroing is set to 0, unpredicated elements *must*
444 be left alone. Some microarchitectures may choose to interpret
445 this as skipping the operation entirely. Others which wish to
446 stick more closely to a SIMD architecture may choose instead to
447 interpret unpredicated elements as an internal "copy element"
448 operation (which would be necessary in SIMD microarchitectures
449 that perform register-renaming)
450
451 16 bit format:
452
453 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
454 | ----- | - | - | - | - | ------- | ------- |
| 0 | predkey | zero0 | inv0 | i/f | regidx | rsvd |
456 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
457 | ... | predkey | ..... | .... | i/f | ....... | ....... |
458 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
459
460
461 8 bit format:
462
463 | PrCSR | 7 | 6 | 5 | (4..0) |
464 | ----- | - | - | - | ------- |
465 | 0 | zero0 | inv0 | i/f | regnum |
466
The 8 bit format is a compact and less expressive variant of the full 16 bit format. Using the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication.
468
469 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
470 it will be faster to turn the table around (maintain topologically
471 equivalent state):
472
    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero = CSRpred[i].zero
        tb[idx].inv = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
490
491 So when an operation is to be predicated, it is the internal state that
492 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
493 pseudo-code for operations is given, where p is the explicit (direct)
494 reference to the predication register to be used:
495
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
501
502 This instead becomes an *indirect* reference using the *internal* state
503 table generated from the Predication CSR key-value store, which is used
504 as follows.
505
    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
519
520 Note:
521
522 * d, s1 and s2 are booleans indicating whether destination,
523 source1 and source2 are vector or scalar
524 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
525 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
526 register-level redirection (from the Register CSR table) if they are
527 vectors.
528
529 If written as a function, obtaining the predication mask (and whether
530 zeroing takes place) may be done as follows:
531
    def get_pred_val(bool is_int_op, int reg):
        tb = int_reg if is_int_op else fp_reg
        if (!tb[reg].enabled):
            return ~0x0, False        // all enabled; no zeroing
        tb = int_pred if is_int_op else fp_pred
        if (!tb[reg].enabled):
            return ~0x0, False        // all enabled; no zeroing
        predidx = tb[reg].predidx     // redirection occurs HERE
        predicate = intreg[predidx]   // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate    // invert ALL bits
        return predicate, tb[reg].zero
544
545 Note here, critically, that **only** if the register is marked
546 in its CSR **register** table entry as being "active" does the testing
547 proceed further to check if the CSR **predicate** table entry is
548 also active.
549
550 Note also that this is in direct contrast to branch operations
551 for the storage of comparisions: in these specific circumstances
552 the requirement for there to be an active CSR *register* entry
553 is removed.
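
The lookup and its application may be condensed into a runnable,
non-normative sketch, with minimal invented state: here `pred` maps a
(redirected) register number to a tuple (predidx, inv, zero), and the
absence of an entry stands in for "no active predication":

```python
# Runnable condensation of get_pred_val plus a predicated vector add,
# demonstrating inversion-free masking and zeroing behaviour.

def get_pred_val(pred, intregs, reg):
    if reg not in pred:
        return ~0, False                  # all elements enabled, no zeroing
    predidx, inv, zero = pred[reg]
    mask = intregs[predidx]               # redirection: actual predicate bits
    return (~mask if inv else mask), zero

def pred_add(regfile, pred, vl, rd, rs1, rs2):
    mask, zeroing = get_pred_val(pred, regfile, rd)
    for i in range(vl):
        if mask & (1 << i):
            regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]
        elif zeroing:
            regfile[rd + i] = 0           # zeroing: masked elements cleared

regs = list(range(32))
regs[1] = 0b0101                          # predicate bits live in x1
pred_add(regs, {8: (1, False, True)}, 4, 8, 16, 24)
```

With zeroing set, elements 1 and 3 of the destination are cleared rather
than left alone.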
554
555 ## REMAP CSR <a name="remap" />
556
557 (Note: both the REMAP and SHAPE sections are best read after the
558 rest of the document has been read)
559
560 There is one 32-bit CSR which may be used to indicate which registers,
561 if used in any operation, must be "reshaped" (re-mapped) from a linear
562 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
563 access to elements within a register.
564
565 The 32-bit REMAP CSR may reshape up to 3 registers:
566
567 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
568 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
569 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
570
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value), and are consequently 7 bits wide.
Since reshaping x0 would clearly be pointless, a value of zero (referring
to x0) is used to indicate "disabled".
shape0-2 refer to one of the three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
577
It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
582
583 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
584
585 (Note: both the REMAP and SHAPE sections are best read after the
586 rest of the document has been read)
587
588 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
589 which have the same format. When each SHAPE CSR is set entirely to zeros,
590 remapping is disabled: the register's elements are a linear (1D) vector.
591
592 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
593 | ------- | -- | ------- | -- | ------- | -- | ------- |
594 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
595
596 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
597 is added to the element index during the loop calculation.
598
599 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
600 that the array dimensionality for that dimension is 1. A value of xdimsz=2
601 would indicate that in the first dimension there are 3 elements in the
602 array. The format of the array is therefore as follows:
603
604 array[xdim+1][ydim+1][zdim+1]
605
606 However whilst illustrative of the dimensionality, that does not take the
607 "permute" setting into account. "permute" may be any one of six values
608 (0-5, with values of 6 and 7 being reserved, and not legal). The table
609 below shows how the permutation dimensionality order works:
610
611 | permute | order | array format |
612 | ------- | ----- | ------------------------ |
613 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
614 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
615 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
616 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
617 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
618 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
619
620 In other words, the "permute" option changes the order in which
621 nested for-loops over the array would be done. The algorithm below
622 shows this more clearly, and may be executed as a python program:
623
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0] # starting indices
    order = [1, 0, 2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
643
644 Here, it is assumed that this algorithm be run within all pseudo-code
645 throughout this document where a (parallelism) for-loop would normally
646 run from 0 to VL-1 to refer to contiguous register
647 elements; instead, where REMAP indicates to do so, the element index
648 is run through the above algorithm to work out the **actual** element
649 index, instead. Given that there are three possible SHAPE entries, up to
650 three separate registers in any given operation may be simultaneously
651 remapped:
652
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd].isvector) break;
            if (int_vec[rd].isvector)  { id += 1; }
            if (int_vec[rs1].isvector) { irs1 += 1; }
            if (int_vec[rs2].isvector) { irs2 += 1; }
664
665 By changing remappings, 2D matrices may be transposed "in-place" for one
666 operation, followed by setting a different permutation order without
667 having to move the values in the registers to or from memory. Also,
668 the reason for having REMAP separate from the three SHAPE CSRs is so
669 that in a chain of matrix multiplications and additions, for example,
670 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
671 changed to target different registers.
672
673 Note that:
674
675 * Over-running the register file clearly has to be detected and
676 an illegal instruction exception thrown
677 * When non-default elwidths are set, the exact same algorithm still
678 applies (i.e. it offsets elements *within* registers rather than
679 entire registers).
680 * If permute option 000 is utilised, the actual order of the
681 reindexing does not change!
682 * If two or more dimensions are set to zero, the actual order does not change!
683 * The above algorithm is pseudo-code **only**. Actual implementations
684 will need to take into account the fact that the element for-looping
685 must be **re-entrant**, due to the possibility of exceptions occurring.
686 See MSTATE CSR, which records the current element index.
687 * Twin-predicated operations require **two** separate and distinct
688 element offsets. The above pseudo-code algorithm will be applied
689 separately and independently to each, should each of the two
690 operands be remapped. *This even includes C.LDSP* and other operations
691 in that category, where in that case it will be the **offset** that is
692 remapped (see Compressed Stack LOAD/STORE section).
693 * Offset is especially useful, on its own, for accessing elements
694 within the middle of a register. Without offsets, it is necessary
695 to either use a predicated MV, skipping the first elements, or
696 performing a LOAD/STORE cycle to memory.
697 With offsets, the data does not have to be moved.
698 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
699 less than MVL is **perfectly legal**, albeit very obscure. It permits
700 entries to be regularly presented to operands **more than once**, thus
701 allowing the same underlying registers to act as an accumulator of
702 multiple vector or matrix operations, for example.
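
As a concrete demonstration of the transposition point above, the
document's reindexing algorithm can be wrapped into a helper (the function
is invented for illustration): with xdim=3, ydim=2, the identity loop order
walks element indices linearly, whilst order [1,0,2] walks them
column-first, i.e. a 2D transpose, purely from the permute setting:

```python
# The SHAPE reindexing algorithm from this section, collecting the
# remapped element indices into a list instead of printing them.

def remap_indices(xdim, ydim, zdim, order, offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0  # this dimension wrapped: carry to the next
    return out
```

A non-zero offs simply shifts every remapped index, which is how elements
in the middle of a register may be reached without a predicated MV.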
703
704 Clearly here some considerable care needs to be taken as the remapping
705 could hypothetically create arithmetic operations that target the
706 exact same underlying registers, resulting in data corruption due to
707 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
708 register-renaming will have an easier time dealing with this than
709 DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So, for example, if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be executed out-of-order, so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to **one** instruction.
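
The "macro expansion" model above can be sketched in a few lines of
Python. This is an illustrative assumption, not part of the
specification: `expand` and its tuple encoding of instructions are
invented names purely to show how one tagged instruction becomes VL
sequential scalar instructions with contiguously-incrementing registers.

```python
# Sketch: SV as a macro-expansion system. A single instruction whose
# registers are tagged as vectors expands into VL sequential scalar
# instructions; the PC does not move on until all have been issued.

def expand(opcode, rd, rs1, rs2, vl, vectorised):
    """Return the list of scalar instructions a (possibly tagged) op
    expands to. Instructions are modelled as simple tuples."""
    if not vectorised:
        # untagged: one ordinary scalar instruction, PC increments
        return [(opcode, rd, rs1, rs2)]
    # tagged: VL element operations, register numbers incrementing
    # contiguously from the tagged starting registers
    return [(opcode, rd + i, rs1 + i, rs2 + i) for i in range(vl)]
```

Because the expansion is purely sequential scalar instructions, the
execution-order rules of the base RV specification apply unchanged to
the expanded sequence.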

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all operations, with the
exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever gained
a MV.X as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions: whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  CSRs are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI and AUIPC are not suitable for parallelising, so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions is therefore guaranteed
  not to be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers, so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSR tables.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
          ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
          if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id   += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
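
The op_add pseudo-code above can be turned into a small runnable Python
model. This is a sketch under stated assumptions: `tag`, `isvec`,
`ridx`, the 64-entry `ireg` list and the fixed `VL` are simplified
illustrative stand-ins for the CSR register table and hardware state,
not part of the specification.

```python
# Runnable model of op_add: scalar/vector mix with predication.
VL = 4
ireg = list(range(64))          # integer register file (toy contents)
int_vec = {}                    # reg -> {"isvector":..., "regidx":...}

def tag(reg, isvector, regidx=None):
    """Mark a register as vector (or redirected) in the CSR table."""
    int_vec[reg] = {"isvector": isvector,
                    "regidx": regidx if regidx is not None else reg}

def isvec(r): return int_vec.get(r, {}).get("isvector", False)
def ridx(r):  return int_vec.get(r, {}).get("regidx", r)

def op_add(rd, rs1, rs2, predval=~0):
    id_ = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):      # predication uses intregs
            ireg[ridx(rd) + id_] = (ireg[ridx(rs1) + irs1] +
                                    ireg[ridx(rs2) + irs2])
            if not isvec(rd):
                break               # scalar destination: one result only
        if isvec(rd):  id_ += 1
        if isvec(rs1): irs1 += 1    # a scalar source acts as a splat
        if isvec(rs2): irs2 += 1
```

With `rd` and `rs1` tagged as vectors and `rs2` left scalar, one call
performs a vector-scalar add across VL contiguous registers.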

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 is a scalar (whether by there being no
CSR register entry or by the CSR entry specifically marking
the register as "scalar"), the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by reversing
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now an SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. Also, the branch goes ahead
if all compares are successful.

Note also that where normally predication requires that there must
also be a CSR register entry for the register being used, in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case: src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see the Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.
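
The predicated-compare loop above can be exercised as a runnable Python
sketch. The function name, argument layout and the `(result, taken)`
return value are illustrative assumptions; `reg` stands in for the
integer register file and `ps` for the source predicate:

```python
# Sketch of the SV predicated-compare / branch decision: build a result
# mask element-by-element, then branch only if all active tests passed.

def sv_branch_cmp(cmp, src1, src2, s1, s2, reg, ps, vl,
                  rd_exists=False, preg=None, rd=0, zeroing=False):
    """cmp: comparison function; s1/s2: src1/src2 vectorised flags.
    Returns (result mask, branch_taken)."""
    result = 0 if (not rd_exists or zeroing) else preg[rd]
    for i in range(vl):
        if zeroing and not (ps >> i) & 1:
            result &= ~(1 << i)       # zeroing: masked-out bits cleared
        elif (ps >> i) & 1:
            a = reg[src1 + i] if s1 else reg[src1]
            b = reg[src2 + i] if s2 else reg[src2]
            if cmp(a, b):
                result |= 1 << i
            else:
                result &= ~(1 << i)
    if rd_exists:
        preg[rd] = result             # store compare results as predicate
    return result, result == ps       # taken iff *all* active tests pass
```

A vector-scalar equality test sets only the bits of matching elements,
so the branch is not taken unless every active element matches.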

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine, because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms it
becomes more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated, based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
in which to store the results of the comparisons is taken from the CSR
predication table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is counterintuitive. Clearly it is **not** recommended to redirect
x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table entry marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).
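
The twin-predicated pattern can be sketched as runnable Python. The
function name and flat argument list are illustrative assumptions (the
real lookups go through the CSR tables); guards on the inner skip-loops
are added so the sketch terminates even when a predicate runs out of
set bits:

```python
# Sketch of the twin-predicated MV pattern: independent source and
# destination predicates, with skipping over predicated-out elements.

def twin_pred_mv(reg, rd, rs, rd_isvec, rs_isvec, pd, ps, vl):
    i = j = 0
    while i < vl and j < vl:
        # skip predicated-out source / destination elements
        while rs_isvec and i < vl and not (ps >> i) & 1: i += 1
        while rd_isvec and j < vl and not (pd >> j) & 1: j += 1
        if i >= vl or j >= vl:
            break
        reg[rd + j] = reg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        else: break                 # scalar destination: VEXTRACT-style
```

A scalar source with a vector destination behaves as VSPLAT; a vector
source with a one-bit source predicate and a scalar destination behaves
as VEXTRACT.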

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15    12 | 11   7 | 6  2 | 1  0 |
funct4   | rd     | rs   | op   |
4        | 5      | 5    | 2    |
C.MV     | dest   | src  | C0   |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src    | dest   | predication  | op              |
scalar | vector | none         | VSPLAT          |
scalar | vector | destination  | sparse VSPLAT   |
scalar | vector | 1-bit dest   | VINSERT         |
vector | scalar | 1-bit? src   | VEXTRACT        |
vector | vector | none         | VCOPY           |
vector | vector | src          | Vector Gather   |
vector | vector | dest         | Vector Scatter  |
vector | vector | src & dest   | Gather/Scatter  |
vector | vector | src == dest  | sparse VCOPY    |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default, the instruction is implicitly
and automatically converted to a (vectorised) floating-point type-conversion
operation of the appropriate size, covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable, i.e. if packed SIMD is requested) instead.

For example, FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element, and the higher 32 bits are *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bits, and with elwidth on rs1 set to
divide that by two, it means that the rs1 element width is to be taken as 32.

Similar rules apply to the destination register.

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
does not change in SV; however, just as with C.MV, it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also on whether the element width is over-ridden
  (see the special element width section).
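
A runnable sketch of the two LOAD addressing modes follows. The
function name and argument layout are illustrative assumptions;
predication and zeroing are omitted, as in the notes above, and the
unit-stride offset in this sketch advances with the destination element
index so that successive elements load from successive addresses:

```python
# Sketch: scalar rs gives "unit stride" (one base, fixed increments);
# vector rs gives indirection (each element is its own address).

def op_ld(mem, ireg, rd, rs, rd_isvec, rs_isvec, vl,
          imm_offs=0, step=8):            # step = XLEN/8 bytes for LD
    i = j = 0
    while i < vl and j < vl:
        if rs_isvec:
            srcbase = ireg[rs + i]        # indirect: element = address
        else:
            srcbase = ireg[rs] + j * step # unit stride from one base
        ireg[rd + j] = mem[srcbase + imm_offs]
        if not rs_isvec and not rd_isvec:
            break                         # scalar-scalar: plain RV LD
        if rs_isvec: i += 1
        if rd_isvec: j += 1
```

With a scalar base register the loads walk sequential addresses; with a
vector source each element register supplies its own address.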

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and, for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that, when VL is set greater than 8 (for example) and the bitwidth
is 8, accessing one specific register "spills over" into the following
entries of the register file in a sequential fashion. So a much more
accurate way to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an access beyond the "real" register
bytes is ever attempted.
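
The "overspill" union can be modelled in Python with one flat bytearray
for the whole register file (8 bytes per register on RV64). The helper
names `set_elem`/`get_elem` and the layout constants are illustrative
assumptions, not part of the specification:

```python
# Sketch: the register file as one flat byte array, so that element
# accesses beyond one register transparently spill into the next.
import struct

XLEN_BYTES = 8                            # RV64
regfile = bytearray(128 * XLEN_BYTES)     # maximum SV 7-bit regfile

FMT = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}  # little-endian widths

def set_elem(reg, elwidth_bytes, offset, val):
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    struct.pack_into(FMT[elwidth_bytes], regfile, addr, val)

def get_elem(reg, elwidth_bytes, offset):
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    return struct.unpack_from(FMT[elwidth_bytes], regfile, addr)[0]
```

Writing 8-bit element 9 of register 5 lands in the second byte of
register 6, exactly the sequential spill-over the union describes; a
real implementation must trap accesses past the end of the array.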

Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
  sign-extension / zero-extension (or whatever is specified in the standard
  RV specification), **change** that to sign-extending from the respective
  individual source operand's bitwidth (from the CSR table) out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sh etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { id   += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
1407
1408 Whilst specific sign-extension and zero-extension pseudocode call
1409 details are left out, due to each operation being different, the above
1410 should be clear that;
1411
1412 * the source operands are extended out to the maximum bitwidth of all
1413 source operands
1414 * the operation takes place at that maximum source bitwidth (the
1415 destination bitwidth is not involved at this point, at all)
1416 * the result is extended (or potentially even, truncated) before being
1417 stored in the destination. i.e. truncation (if required) to the
1418 destination width occurs **after** the operation **not** before.
1419 * when the destination is not marked as "vectorised", the **full**
1420 (standard, scalar) register file entry is taken up, i.e. the
1421 element is either sign-extended or zero-extended to cover the
1422 full register bitwidth (XLEN) if it is not already XLEN bits long.
1423
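The bullet points above can be condensed into a short executable sketch.
Python is used here purely as illustrative pseudocode; `extend` and
`polymorphic_op` are hypothetical helper names, not part of the
specification:

```python
def extend(val, from_width, to_width, signed):
    """Sign- or zero-extend (or truncate) val from from_width to to_width bits."""
    mask = (1 << from_width) - 1
    val &= mask                                  # truncate to from_width
    if signed and val >> (from_width - 1):       # MSB set: sign-extend
        val |= ((1 << to_width) - 1) & ~mask
    return val & ((1 << to_width) - 1)

def polymorphic_op(op, src1, w1, src2, w2, destw, signed=False):
    opwidth = max(w1, w2)                  # rule 1: max of *source* widths
    a = extend(src1, w1, opwidth, signed)  # sources widened to opwidth
    b = extend(src2, w2, opwidth, signed)
    result = op(a, b)                      # rule 2: operate at opwidth
    # rule 3: extend/truncate to the destination width *after* the op
    return extend(result, opwidth, destw, signed)

# signed add, rs1 8-bit, rs2 16-bit: rs1 sign-extended to 16 before the add
assert polymorphic_op(lambda a, b: a + b, 0x80, 8, 1, 16, 16, True) == 0xFF81
# unsigned 8-bit add wraps at the 8-bit opwidth, *then* widens to 16
assert polymorphic_op(lambda a, b: a + b, 0xFF, 8, 1, 8, 16) == 0x0000
```

Note in the second assertion that truncation to the operation width
happens before widening out to the destination, exactly as per the rules
above.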
1424 Implementors are entirely free to optimise the above, particularly
1425 if it is specifically known that any given operation will complete
1426 accurately in less bits, as long as the results produced are
1427 directly equivalent and equal, for all inputs and all outputs,
1428 to those produced by the above algorithm.
1429
1430 ## Polymorphic floating-point operation exceptions and error-handling
1431
For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or an appropriate value) is stored if the result
is beyond the range of the destination, and, just as with scalar
operations, the floating-point flag is raised (in FCSR). Again, just as
with scalar operations, it is software's responsibility to check this flag.
1439 Given that the FCSR flags are "accrued", the fact that multiple element
1440 operations could have occurred is not a problem.
1441
1442 Note that it is perfectly legitimate for floating-point bitwidths of
1443 only 8 to be specified. However whilst it is possible to apply IEEE 754
1444 principles, no actual standard yet exists. Implementors wishing to
1445 provide hardware-level 8-bit support rather than throw a trap to emulate
1446 in software should contact the author of this specification before
1447 proceeding.
1448
1449 ## Polymorphic shift operators
1450
1451 A special note is needed for changing the element width of left and right
1452 shift operators, particularly right-shift. Even for standard RV base,
1453 in order for correct results to be returned, the second operand RS2 must
1454 be truncated to be within the range of RS1's bitwidth. spike's implementation
1455 of sll for example is as follows:
1456
1457 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1458
1459 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1460 range 0..31 so that RS1 will only be left-shifted by the amount that
1461 is possible to fit into a 32-bit register. Whilst this appears not
1462 to matter for hardware, it matters greatly in software implementations,
1463 and it also matters where an RV64 system is set to "RV32" mode, such
1464 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1465 each.
1466
1467 For SV, where each operand's element bitwidth may be over-ridden, the
1468 rule about determining the operation's bitwidth *still applies*, being
1469 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1470 **also applies to the truncation of RS2**. In other words, *after*
1471 determining the maximum bitwidth, RS2's range must **also be truncated**
1472 to ensure a correct answer. Example:
1473
1474 * RS1 is over-ridden to a 16-bit width
1475 * RS2 is over-ridden to an 8-bit width
1476 * RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
1478 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1479
1480 Pseudocode (in spike) for this example would therefore be:
1481
1482 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1483
1484 This example illustrates that considerable care therefore needs to be
1485 taken to ensure that left and right shift operations are implemented
1486 correctly. The key is that
1487
1488 * The operation bitwidth is determined by the maximum bitwidth
1489 of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
1491
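As a sanity check, the 16/8/64-bit example above can be played through in
executable form (Python as illustrative pseudocode; `polymorphic_sll` is a
hypothetical name):

```python
def polymorphic_sll(rs1, w1, rs2, w2, destw):
    opwidth = max(w1, w2)               # operation width: max source width
    shamt = rs2 & (opwidth - 1)         # RS2 truncated to 0..opwidth-1
    result = ((rs1 & ((1 << opwidth) - 1)) << shamt) & ((1 << opwidth) - 1)
    return result & ((1 << destw) - 1)  # zero-extended out to destination

# RS1 16-bit, RS2 8-bit, RD 64-bit: opwidth = max(8, 16) = 16,
# so a shift amount of 17 is masked down to 17 & (16-1) == 1
assert polymorphic_sll(0x0001, 16, 17, 8, 64) == 0x0002
# a shift amount of exactly 16 masks down to zero: no shift at all
assert polymorphic_sll(0x0001, 16, 16, 8, 64) == 0x0001
```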
1492 ## Polymorphic MULH/MULHU/MULHSU
1493
MULH is designed to take the top half (MSBs) of a multiply whose
result does not fit within the range of the source operands, such
that smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.
1499
1500 Here again special attention has to be paid to the rules regarding
1501 bitwidth, which, again, are that the operation is performed at
1502 the maximum bitwidth of the **source** registers. Therefore:
1503
1504 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1505 be shifted down by 8 bits
1506 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1507 be shifted down by 16 bits (top 8 bits being zero)
1508 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1509 be shifted down by 16 bits
1510 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1511 be shifted down by 32 bits
1512 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1513 be shifted down by 32 bits
1514
1515 So again, just as with shift-left and shift-right, the result
1516 is shifted down by the maximum of the two source register bitwidths.
1517 And, exactly again, truncation or sign-extension is performed on the
1518 result. If sign-extension is to be carried out, it is performed
1519 from the same maximum of the two source register bitwidths out
1520 to the result element's bitwidth.
1521
1522 If truncation occurs, i.e. the top MSBs of the result are lost,
1523 this is "Officially Not Our Problem", i.e. it is assumed that the
1524 programmer actually desires the result to be truncated. i.e. if the
1525 programmer wanted all of the bits, they would have set the destination
1526 elwidth to accommodate them.
1527
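The shift-down rule can be demonstrated with a small unsigned
(MULHU-style) model; Python is used as illustrative pseudocode and
`polymorphic_mulhu` is a hypothetical name:

```python
def polymorphic_mulhu(rs1, w1, rs2, w2, destw):
    opwidth = max(w1, w2)        # full product is 2*opwidth bits wide
    a = rs1 & ((1 << w1) - 1)
    b = rs2 & ((1 << w2) - 1)
    high = (a * b) >> opwidth    # shift down by the max source width
    return high & ((1 << destw) - 1)

# 8-bit x 8-bit: 16-bit product 0xFE01, top half obtained by >> 8
assert polymorphic_mulhu(0xFF, 8, 0xFF, 8, 8) == 0xFE
# 16-bit x 8-bit: operation at 16 bits, so the product is shifted by 16
assert polymorphic_mulhu(0xFFFF, 16, 0xFF, 8, 16) == 0x00FE
```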
1528 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1529
1530 Polymorphic element widths in vectorised form means that the data
1531 being loaded (or stored) across multiple registers needs to be treated
1532 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1533 the source register's element width is **independent** from the destination's.
1534
1535 This makes for a slightly more complex algorithm when using indirection
1536 on the "addressed" register (source for LOAD and destination for STORE),
1537 particularly given that the LOAD/STORE instruction provides important
1538 information about the width of the data to be reinterpreted.
1539
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop index from 0 to VL-1:
1542
    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits
1545
1546 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1547 chunks are taken from the source memory location addressed by the current
1548 indexed source address register, and only when a full 32-bits-worth
1549 are taken will the index be moved on to the next contiguous source
1550 address register:
1551
    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1557
1558 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1559 and 128 for LQ.
1560
1561 The principle is basically exactly the same as if the srcbase were pointing
1562 at the memory of the *register* file: memory is re-interpreted as containing
1563 groups of elwidth-wide discrete elements.
1564
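To make the indexing arithmetic concrete, here is the same calculation in
byte-address form (Python as illustrative pseudocode; the explicit
`* (elwidth // 8)` term models the `uint8_t*`/`uint16_t*` re-cast, and the
register and address values are invented):

```python
def element_byte_address(ireg, rs, imm, i, opwidth, elwidth):
    # at least one element per block, even when elwidth > opwidth
    elsperblock = max(1, opwidth // elwidth)
    srcbase = ireg[rs + i // elsperblock]  # integer divide picks the register
    offs = i % elsperblock                 # element index within the block
    return srcbase + imm + offs * (elwidth // 8)

ireg = {5: 0x1000, 6: 0x2000}  # x5/x6 hold two (unrelated) base addresses
# LW (opwidth=32) with elwidth=16: two elements per address register, so
# element i=3 is the *second* 16-bit chunk at the address held in x6
assert element_byte_address(ireg, 5, 0, 3, 32, 16) == 0x2002
# elements 0 and 1 both come from the address held in x5
assert element_byte_address(ireg, 5, 0, 1, 32, 16) == 0x1002
# elwidth (16) wider than the op (LB, 8): one element per address register
assert element_byte_address(ireg, 5, 0, 1, 8, 16) == 0x2000
```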
1565 When storing the result from a load, it's important to respect the fact
1566 that the destination register has its *own separate element width*. Thus,
1567 when each element is loaded (at the source element width), any sign-extension
1568 or zero-extension (or truncation) needs to be done to the *destination*
1569 bitwidth. Also, the storing has the exact same analogous algorithm as
1570 above, where in fact it is just the set\_polymorphed\_reg pseudocode
1571 (completely unchanged) used above.
1572
1573 One issue remains: when the source element width is **greater** than
1574 the width of the operation, it is obvious that a single LB for example
1575 cannot possibly obtain 16-bit-wide data. This condition may be detected
1576 where, when using integer divide, elsperblock (the width of the LOAD
1577 divided by the bitwidth of the element) is zero.
1578
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1582
1583 The elements, if the element bitwidth is larger than the LD operation's
1584 size, will then be sign/zero-extended to the full LD operation size, as
1585 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1586 being passed on to the second phase.
1587
1588 As LOAD/STORE may be twin-predicated, it is important to note that
1589 the rules on twin predication still apply, except where in previous
1590 pseudo-code (elwidth=default for both source and target) it was
1591 the *registers* that the predication was applied to, it is now the
1592 **elements** that the predication is applied to.
1593
1594 Thus the full pseudocode for all LD operations may be written out
1595 as follows:
1596
    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, destwid, j, val) # at dest elwidth
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;
1634
1635 Note:
1636
1637 * when comparing against for example the twin-predicated c.mv
1638 pseudo-code, the pattern of independent incrementing of rd and rs
1639 is preserved unchanged.
1640 * just as with the c.mv pseudocode, zeroing is not included and must be
1641 taken into account (TODO).
1642 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1643 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1644 VSCATTER characteristics.
1645 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1646 a destination that is not vectorised (marked as scalar) will
1647 result in the element being fully sign-extended or zero-extended
1648 out to the full register file bitwidth (XLEN). When the source
1649 is also marked as scalar, this is how the compatibility with
1650 standard RV LOAD/STORE is preserved by this algorithm.
1651
1652 ### Example Tables showing LOAD elements
1653
1654 This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).
1656
1657
1658 #### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1659
1660 This is:
1661
1662 * a 64-bit load, with an offset of zero
1663 * with a source-address elwidth of 16-bit
1664 * into a destination-register with an elwidth of 32-bit
1665 * where VL=7
1666 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1667 * RV64, where XLEN=64 is assumed.
1668
First, the memory table: because the element width is 16 and the
operation is LD (64-bit), each 64 bits loaded from memory are
subdivided into groups of **four** elements. And, with VL being 7
(deliberately, to illustrate that this is reasonable and possible),
the first four are sourced from the offset addresses pointed to by x5,
and the next three from the offset addresses pointed to by the next
contiguous register, x6:
1676
1677 [[!table data="""
1678 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1679 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1680 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1681 """]]
1682
1683 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1684 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1685
1686 [[!table data="""
1687 byte 3 | byte 2 | byte 1 | byte 0 |
1688 0x0 | 0x0 | elem0 ||
1689 0x0 | 0x0 | elem1 ||
1690 0x0 | 0x0 | elem2 ||
1691 0x0 | 0x0 | elem3 ||
1692 0x0 | 0x0 | elem4 ||
1693 0x0 | 0x0 | elem5 ||
1694 0x0 | 0x0 | elem6 ||
1696 """]]
1697
1698 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1699 byte-addressable "memory". That "memory" happens to cover registers
1700 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1701
1702 [[!table data="""
1703 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1704 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1705 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1706 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1707 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1708 """]]
1709
1710 Thus we have data that is loaded from the **addresses** pointed to by
1711 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1712 x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.
1716
1717 Note that whilst the memory addressing table is shown left-to-right byte order,
1718 the registers are shown in right-to-left (MSB) order. This does **not**
1719 imply that bit or byte-reversal is carried out: it's just easier to visualise
1720 memory as being contiguous bytes, and emphasises that registers are not
1721 really actually "memory" as such.
1722
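The three tables can be reproduced end-to-end with a short model (Python
as illustrative pseudocode; the addresses, the element values 1..7 and the
0xDEADBEEF marker in x11's upper half are all invented, the marker being
there purely to demonstrate that the upper half survives):

```python
XLEN = 64
regs = {8: 0, 9: 0, 10: 0, 11: 0xDEADBEEF00000000}  # x11 upper half is live
mem = {0x1000 + 2 * n: n + 1 for n in range(7)}     # seven 16-bit elements
addr_regs = {5: 0x1000, 6: 0x1008}                  # x5, x6: block addresses
VL, src_elw, dst_elw, opwidth = 7, 16, 32, 64

for i in range(VL):
    elsperblock = opwidth // src_elw             # four 16-bit chunks per LD
    base = addr_regs[5 + i // elsperblock]       # x5 then x6
    val = mem[base + (i % elsperblock) * (src_elw // 8)]
    val &= (1 << dst_elw) - 1                    # zero-extend 16 -> 32
    # pack into the regfile as if it were 32-bit-element "memory"
    rd, shift = 8 + (i * dst_elw) // XLEN, (i * dst_elw) % XLEN
    regs[rd] = (regs[rd] & ~(((1 << dst_elw) - 1) << shift)) | (val << shift)

assert regs[8] == (2 << 32) | 1              # elements 0 and 1
assert regs[11] >> 32 == 0xDEADBEEF          # x11 upper half UNMODIFIED
assert regs[11] & 0xFFFFFFFF == 7            # element 6 in the LSBs of x11
```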
1723 ## Why SV bitwidth specification is restricted to 4 entries
1724
The four entries for SV element bitwidths allow only three over-rides:
1726
1727 * default bitwidth for a given operation *divided* by two
1728 * default bitwidth for a given operation *multiplied* by two
1729 * 8-bit
1730
At first glance this seems completely inadequate: for example, RV64
cannot possibly perform 16-bit operations, because 64 divided by
2 is 32. However, the reader may have forgotten that it is possible,
1734 at run-time, to switch a 64-bit application into 32-bit mode, by
1735 setting UXL. Once switched, opcodes that formerly had 64-bit
1736 meanings now have 32-bit meanings, and in this way, "default/2"
1737 now reaches **16-bit** where previously it meant "32-bit".
1738
There is however an absolutely crucial aspect of SV here that explicitly
1740 needs spelling out, and it's whether the "vectorised" bit is set in
1741 the Register's CSR entry.
1742
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, on a destination (RD),
sign-extension and zero-extension, whilst adjusted to match the
override bitwidth (if one is set), will overwrite the **full** register
entry (64-bit if RV64).
1748
1749 When vectorised is *set*, this indicates that the operation now treats
1750 **elements** as if they were independent registers, so regardless of
1751 the length, any parts of a given actual register that are not involved
1752 in the operation are **NOT** modified, but are **PRESERVED**.
1753
1754 SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of
the multi-element operation.
1757
1758 Example:
1759
1760 * rs1, rs2 and rd are all set to 8-bit
1761 * VL is set to 3
1762 * RV64 architecture is set (UXL=64)
1763 * add operation is carried out
1764 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1765 concatenated with similar add operations on bits 15..8 and 7..0
1766 * bits 24 through 63 **remain as they originally were**.
1767
1768 Example SIMD micro-architectural implementation:
1769
1770 * SIMD architecture works out the nearest round number of elements
1771 that would fit into a full RV64 register (in this case: 8)
1772 * SIMD architecture creates a hidden predicate, binary 0b00000111
1773 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1774 * SIMD architecture goes ahead with the add operation as if it
1775 was a full 8-wide batch of 8 adds
1776 * SIMD architecture passes top 5 elements through the adders
1777 (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 (unmodified) 8-bit elements back
and stores them in rd.
1780
1781 This requires a read on rd, however this is required anyway in order
1782 to support non-zeroing mode.
1783
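The five steps above can be sketched directly (Python as illustrative
pseudocode; `simd_add_8x8` is a hypothetical name and the register values
are invented):

```python
def simd_add_8x8(rd_old, rs1, rs2, vl):
    pred = (1 << vl) - 1             # hidden predicate: bottom VL bits set
    result = rd_old                  # disabled lanes pass through unmodified
    for lane in range(8):            # full 8-wide batch of 8-bit adds
        if pred & (1 << lane):
            a = (rs1 >> (8 * lane)) & 0xFF
            b = (rs2 >> (8 * lane)) & 0xFF
            s = (a + b) & 0xFF       # lane-local 8-bit add
            result = (result & ~(0xFF << (8 * lane))) | (s << (8 * lane))
    return result

rd = simd_add_8x8(0x1122334455667788, 0x0101010101010101,
                  0x0202020202020202, vl=3)
assert rd & 0xFFFFFF == 0x030303   # three 8-bit adds land in bits 0-23
assert rd >> 24 == 0x1122334455    # bits 24-63 remain as they originally were
```

The read of `rd_old` here is exactly the read on rd mentioned above as
being needed to support non-zeroing mode.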
1784 ## Polymorphic floating-point
1785
1786 Standard scalar RV integer operations base the register width on XLEN,
1787 which may be changed (UXL in USTATUS, and the corresponding MXL and
1788 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1789 arithmetic operations are therefore restricted to an active XLEN bits,
1790 with sign or zero extension to pad out the upper bits when XLEN has
1791 been dynamically set to less than the actual register size.
1792
1793 For scalar floating-point, the active (used / changed) bits are
1794 specified exclusively by the operation: ADD.S specifies an active
1795 32-bits, with the upper bits of the source registers needing to
1796 be all 1s ("NaN-boxed"), and the destination upper bits being
1797 *set* to all 1s (including on LOAD/STOREs).
1798
1799 Where elwidth is set to default (on any source or the destination)
1800 it is obvious that this NaN-boxing behaviour can and should be
1801 preserved. When elwidth is non-default things are less obvious,
1802 so need to be thought through. Here is a normal (scalar) sequence,
1803 assuming an RV64 which supports Quad (128-bit) FLEN:
1804
1805 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1806 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1807 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1808 top 64 MSBs ignored.
1809
1810 Therefore it makes sense to mirror this behaviour when, for example,
1811 elwidth is set to 32. Assume elwidth set to 32 on all source and
1812 destination registers:
1813
1814 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1815 floating-point numbers.
1816 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1817 in bits 0-31 and the second in bits 32-63.
1818 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1819
1820 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1821 of the registers either during the FLD **or** the ADD.D. The reason
1822 is that, effectively, the top 64 MSBs actually represent a completely
1823 independent 64-bit register, so overwriting it is not only gratuitous
1824 but may actually be harmful for a future extension to SV which may
1825 have a way to directly access those top 64 bits.
1826
1827 The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
1829 when "isvec" is false in a given register's CSR entry. Only when the
1830 elwidth is set to default **and** isvec is false will the standard
1831 RV behaviour be followed, namely that the upper bits be modified.
1832
1833 Ultimately if elwidth is default and isvec false on *all* source
1834 and destination registers, a SimpleV instruction defaults completely
1835 to standard RV scalar behaviour (this holds true for **all** operations,
1836 right across the board).
1837
1838 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
1839 non-default values are effectively all the same: they all still perform
1840 multiple ADD operations, just at different widths. A future extension
1841 to SimpleV may actually allow ADD.S to access the upper bits of the
1842 register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.
1844
1845 In the meantime, although when e.g. setting VL to 8 it would technically
1846 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
1847 using ADD.Q may be an easy way to signal to the microarchitecture that
1848 it is to receive a higher VL value. On a superscalar OoO architecture
1849 there may be absolutely no difference, however on simpler SIMD-style
1850 microarchitectures they may not necessarily have the infrastructure in
1851 place to know the difference, such that when VL=8 and an ADD.D instruction
1852 is issued, it completes in 2 cycles (or more) rather than one, where
1853 if an ADD.Q had been issued instead on such simpler microarchitectures
1854 it would complete in one.
1855
1856 ## Specific instruction walk-throughs
1857
1858 This section covers walk-throughs of the above-outlined procedure
1859 for converting standard RISC-V scalar arithmetic operations to
1860 polymorphic widths, to ensure that it is correct.
1861
1862 ### add
1863
1864 Standard Scalar RV32/RV64 (xlen):
1865
1866 * RS1 @ xlen bits
1867 * RS2 @ xlen bits
1868 * add @ xlen bits
1869 * RD @ xlen bits
1870
1871 Polymorphic variant:
1872
1873 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
1874 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
1875 * add @ max(rs1, rs2) bits
1876 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
1877
1878 Note here that polymorphic add zero-extends its source operands,
1879 where addw sign-extends.
1880
1881 ### addw
1882
1883 The RV Specification specifically states that "W" variants of arithmetic
1884 operations always produce 32-bit signed values. In a polymorphic
1885 environment it is reasonable to assume that the signed aspect is
1886 preserved, where it is the length of the operands and the result
1887 that may be changed.
1888
1889 Standard Scalar RV64 (xlen):
1890
1891 * RS1 @ xlen bits
1892 * RS2 @ xlen bits
1893 * add @ xlen bits
1894 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
1895
1896 Polymorphic variant:
1897
1898 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
1899 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
1900 * add @ max(rs1, rs2) bits
1901 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
1902
1903 Note here that polymorphic addw sign-extends its source operands,
1904 where add zero-extends.
1905
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.
1910
1911 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
1912 where for add they are both zero-extended. This holds true for all arithmetic
1913 operations ending with "W".
1914
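The add/addw contrast can be made concrete with a small model (Python as
illustrative pseudocode; `ext` and `poly_add` are hypothetical names, and
`signed=True` stands in for the "W"-variant sign-extension behaviour):

```python
def ext(val, w, opwidth, signed):
    val &= (1 << w) - 1
    if signed and val >> (w - 1):            # MSB set: sign-extend
        val |= ((1 << opwidth) - 1) ^ ((1 << w) - 1)
    return val

def poly_add(rs1, w1, rs2, w2, signed):
    opwidth = max(w1, w2)                    # operation at max source width
    return (ext(rs1, w1, opwidth, signed) +
            ext(rs2, w2, opwidth, signed)) & ((1 << opwidth) - 1)

# rs1 is 8-bit 0xFF, rs2 is 16-bit 0x0001; the operation runs at 16 bits
assert poly_add(0xFF, 8, 1, 16, signed=False) == 0x0100  # add: zero-extends
assert poly_add(0xFF, 8, 1, 16, signed=True) == 0x0000   # addw: -1 + 1 == 0
```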
1915 ### addiw
1916
1917 Standard Scalar RV64I:
1918
1919 * RS1 @ xlen bits, truncated to 32-bit
1920 * immed @ 12 bits, sign-extended to 32-bit
1921 * add @ 32 bits
* RD @ xlen bits: the 32-bit result is sign-extended to xlen.
1923
1924 Polymorphic variant:
1925
1926 * RS1 @ rs1 bits
1927 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
1928 * add @ max(rs1, 12) bits
1929 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
1930
1931 # Predication Element Zeroing
1932
1933 The introduction of zeroing on traditional vector predication is usually
1934 intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
1937 do not have this issue: they simply do not pass the element through to
1938 the ALU at all, and therefore do not store it back in the destination.
1939 More complex non-lane-based micro-architectures can, when zeroing is
1940 not set, use the predication bits to simply avoid sending element-based
1941 operations to the ALUs, entirely: thus, over the long term, potentially
1942 keeping all ALUs 100% occupied even when elements are predicated out.
1943
1944 SimpleV's design principle is not based on or influenced by
1945 microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
1948 given that a case can be made for zeroing *and* non-zeroing, the
1949 decision was taken to add support for both.
1950
1951 ## Single-predication (based on destination register)
1952
1953 Zeroing on predication for arithmetic operations is taken from
1954 the destination register's predicate. i.e. the predication *and*
1955 zeroing settings to be applied to the whole operation come from the
1956 CSR Predication table entry for the destination register.
1957 Thus when zeroing is set on predication of a destination element,
1958 if the predication bit is clear, then the destination element is *set*
1959 to zero (twin-predication is slightly different, and will be covered
1960 next).
1961
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:
1964
    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird  += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                i++
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
1987
1988 The optimisation to skip elements entirely is only possible for certain
1989 micro-architectures when zeroing is not set. However for lane-based
1990 micro-architectures this optimisation may not be practical, as it
1991 implies that elements end up in different "lanes". Under these
1992 circumstances it is perfectly fine to simply have the lanes
1993 "inactive" for predicated elements, even though it results in
1994 less than 100% ALU utilisation.
1995
1996 ## Twin-predication (based on source and destination register)
1997
Twin-predication is not that much different, except that
the source is zero-predicated independently of the destination.
2000 This means that the source may be zero-predicated *or* the
2001 destination zero-predicated *or both*, or neither.
2002
When, with twin-predication, zeroing is set on the source and not
the destination, then if a source predicate bit is zero it indicates
that a zero data element is passed through the operation (the exception
being: if the source data element is to be treated as an address -
a LOAD - then the data returned *from* the LOAD is zero, rather than
looking up an *address* of zero).
2009
2010 When zeroing is set on the destination and not the source, then just
2011 as with single-predicated operations, a zero is stored into the destination
2012 element (or target memory address for a STORE).
2013
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both predicate
bits are set is genuine data stored, and where either the source predicate
OR the destination predicate is zero, a zero element will ultimately end
up in the destination register.
2018
2019 However: this may not necessarily be the case for all operations;
2020 implementors, particularly of custom instructions, clearly need to
2021 think through the implications in each and every case.
2022
2023 Here is pseudo-code for a twin zero-predicated operation:
2024
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;
2048
2049 Note that in the instance where the destination is a scalar, the hardware
2050 loop is ended the moment a value *or a zero* is placed into the destination
2051 register/element. Also note that, for clarity, variable element widths
2052 have been left out of the above.
2053
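The zeroing combinations can be cross-checked against a small mask model
(Python as illustrative pseudocode; this assumes the simple case where,
with both zeroing flags set, predicate-skipping is disabled and i and j
advance in lockstep):

```python
ps, pd, VL = 0b1010, 0b1100, 4   # source/destination predicates, invented

# genuine data is stored only where BOTH predicate bits are set;
# every other element within VL has a zero stored into it
data_stored = [bool((pd >> j) & 1 and (ps >> j) & 1) for j in range(VL)]
zero_stored = [not d for d in data_stored]

assert data_stored == [False, False, False, True]  # only element 3
assert zero_stored == [True, True, True, False]
```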
2054 # Exceptions
2055
2056 TODO: expand. Exceptions may occur at any time, in any given underlying
2057 scalar operation. This implies that context-switching (traps) may
2058 occur, and operation must be returned to where it left off. That in
2059 turn implies that the full state - including the current parallel
2060 element being processed - has to be saved and restored. This is
2061 what the **STATE** CSR is for.
2062
2063 The implications are that all underlying individual scalar operations
2064 "issued" by the parallelisation have to appear to be executed sequentially.
2065 The further implications are that if two or more individual element
2066 operations are underway, and one with an earlier index causes an exception,
2067 it may be necessary for the microarchitecture to **discard** or terminate
2068 operations with higher indices.
2069
2070 This being somewhat dissatisfactory, an "opaque predication" variant
2071 of the STATE CSR is being considered.
2072
2073 # Hints
2074
2075 A "HINT" is an operation that has no effect on architectural state,
2076 where its use may, by agreed convention, give advance notification
2077 to the microarchitecture: branch prediction notification would be
2078 a good example. Usually HINTs are where rd=x0.
2079
2080 With Simple-V being capable of issuing *parallel* instructions where
2081 rd=x0, the space for possible HINTs is expanded considerably. VL
2082 could be used to indicate different hints. In addition, if predication
2083 is set, the predication register itself could hypothetically be passed
2084 in as a *parameter* to the HINT operation.
2085
No specific hints are yet defined in Simple-V.
2087
2088 # VLIW Format <a name="vliw-format"></a>
2089
2090 One issue with SV is the setup and teardown time of the CSRs. The cost
2091 of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
2092 therefore makes sense.
2093
2094 A suitable prefix, which fits the Expanded Instruction-Length encoding
2095 for "(80 + 16 times instruction_length)", as defined in Section 1.5
2096 of the RISC-V ISA, is as follows:
2097
2098 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2099 | - | ----- | ----- | ----- | --- | ------- |
2100 | vlset | 16xil | pplen | rplen | mode | 1111111 |
2101
2102 An optional VL Block, optional predicate entries, optional register entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes follow.
2103
2104 The variable-length format from Section 1.5 of the RISC-V ISA:
2105
2106 | base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
2108 | ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
2109 | {ops}{Pred}{Reg}{VL Block} | SV Prefix | |
2110
2111 VL/MAXVL/SubVL Block:
2112
| 31:30 | 29:28 | 27:22 | 21:17 | 16 |
2114 | - | ----- | ------ | ------ | - |
2115 | 0 | SubVL | VLdest | VLEN | vlt |
2116 | 1 | SubVL | VLdest | VLEN ||
2117
If vlt is 0, VLEN is a 5 bit immediate value. If vlt is 1, it specifies
the scalar register from which VL is set by this VLIW instruction
group. VL, whether set from the register or the immediate, is then
modified (truncated) to be min(VL, MAXVL), and the result stored in the
scalar register specified in VLdest. If VLdest is zero, no store in the
regfile occurs.
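A minimal sketch of this VL-setting behaviour, with illustrative names: the regfile is modelled as a plain array, and the truncation simply caps the requested length so that VL never exceeds MAXVL.

```c
#include <stdint.h>

/* Sketch of the VL Block semantics described above.  Depending on vlt,
   "vlen" is either the immediate length or the number of the scalar
   register holding the requested length; the result is capped at MAXVL
   and written back to VLdest unless that register is x0. */
uint32_t set_vl(uint32_t vlen, uint32_t vlt, uint32_t maxvl,
                uint32_t vldest, uint32_t regfile[32])
{
    uint32_t vl = vlt ? regfile[vlen] : vlen;
    if (vl > maxvl)        /* truncate: VL may never exceed MAXVL */
        vl = maxvl;
    if (vldest != 0)       /* VLdest of zero: no regfile store occurs */
        regfile[vldest] = vl;
    return vl;
}
```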
2124
2125 This option will typically be used to start vectorised loops, where
2126 the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
2127 sequence (in compact form).
2128
When the top bit of the VL Block is set to 1, MAXVL and VL are both set
to the immediate, VLEN, which in this case is 6 bits in length, and the
same value is stored in scalar register VLdest (if that register is
nonzero).
2132
2133 This option will typically not be used so much for loops as it will be
2134 for one-off instructions such as saving the entire register file to the
2135 stack with a single one-off Vectorised LD/ST.
2136
2137 CSRs needed:
2138
2139 * mepcvliw
2140 * sepcvliw
2141 * uepcvliw
2142 * hepcvliw
2143
2144 Notes:
2145
* Bit 7 specifies if the predicate block format is the full 16 bit format
(1) or the compact, less expressive 8 bit format (0). In the 8 bit format,
pplen is multiplied by 2.
* In the 8 bit format, predicate numbering is implicit and begins from x9;
it is therefore critical to put blocks in the correct order as required.
2150 * Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
2151 (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
2152 of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
otherwise 0 to 6) follow the (optional) RegCam entries.
2156 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2157 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2158 SVPrefix (P48-\*-Type) instructions fit into this space, after the
2159 (optional) VL / RegCam / PredCam entries
2160 * Anything - any registers - within the VLIW-prefixed format *MUST* have the
2161 RegCam and PredCam entries applied to it.
2162 * At the end of the VLIW Group, the RegCam and PredCam entries
2163 *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
2164 the values set by the last instruction (whether a CSRRW or the VL
2165 Block header).
2166 * Although an inefficient use of resources, it is fine to set the MAXVL, VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
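As a hedged summary of the bit layout described in the Notes above, the following sketch decodes the 16-bit prefix and derives the entry counts and total instruction length. Field and function names are illustrative, not normative.

```c
#include <stdint.h>

/* Decode of the 16-bit VLIW prefix fields from the table above:
   vlset (bit 15), 16xil (14:12), pplen (11:10), rplen (9:8),
   mode (bit 7), major opcode 1111111 (6:0). */
typedef struct {
    unsigned vlset, il, pplen, rplen, mode;
} sv_vliw_prefix_t;

sv_vliw_prefix_t decode_prefix(uint16_t insn)
{
    sv_vliw_prefix_t p;
    p.mode  = (insn >> 7)  & 0x1;
    p.rplen = (insn >> 8)  & 0x3;
    p.pplen = (insn >> 10) & 0x3;
    p.il    = (insn >> 12) & 0x7;
    p.vlset = (insn >> 15) & 0x1;
    return p;
}

/* In the compact 8 bit block format (mode==0) the entry counts are
   doubled, per the Notes above. */
unsigned num_reg_entries(sv_vliw_prefix_t p)  { return p.mode ? p.rplen : p.rplen * 2; }
unsigned num_pred_entries(sv_vliw_prefix_t p) { return p.mode ? p.pplen : p.pplen * 2; }

/* Total length per Section 1.5 encoding: 80 + 16 * IL bits. */
unsigned total_bits(sv_vliw_prefix_t p)       { return 80 + 16 * p.il; }
```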
2167
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four 32-bit
opcodes: the CSRRW itself, a LI, and the setting up of the value in the
RS register of the CSRRW, which again requires a LI / LUI to get the 32 bit
data into place. To get 64-bit data into the register in order to put
it into the CSR(s), LOAD operations from memory are needed!
2174
Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!
2178
Not only that: even CSRRW on VL and MAXVL requires 64 bits (even more if
VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
and VL need to be set.
2182
2183 By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
2184 only 16 bits, and as long as not too many predicates and register vector
2185 qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
2187 needed, more space is saved by using the 8 bit formats.
2188
2189 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
2190 a VLIW format makes a lot of sense.
2191
2192 Open Questions:
2193
2194 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2195 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2196 limit to 256 bits (16 times 0-11).
2197 * Could a "hint" be used to set which operations are parallel and which
2198 are sequential?
* Could a new sub-instruction opcode format be used, one that does not
conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
There would then be no need for byte or bit-alignment.
2202 * Could a hardware compression algorithm be deployed? Quite likely,
2203 because of the sub-execution context (sub-VLIW PC)
2204
## Limitations on instructions
2206
To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (VL works on exactly the same principle, except
that there it is VStart in the STATE CSR that increments).
2213
2214 This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
2216 as being a sub extension of the xepc set of CSRs. Thus, xepcvliw CSRs
2217 must be context switched and saved / restored in traps.
2218
2219 The VStart indices in the STATE CSR may be similarly regarded as another
2220 sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.
2222
In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the (very short) length of the block.
2226
Also: calling subroutines is inadvisable, unless they can be entirely
2228 accomplished within a block.
2229
2230 A normal jump and a normal function call may only be taken by letting
2231 the VLIW end, returning to "normal" standard RV mode, using RVC, 32 bit
2232 or P48-*-type opcodes.
2233
2234 ## Links
2235
2236 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2237
2238 # Subsets of RV functionality
2239
2240 This section describes the differences when SV is implemented on top of
2241 different subsets of RV.
2242
2243 ## Common options
2244
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, going
below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.
2249
2250 ## RV32 / RV32F
2251
2252 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
specifically an "option", it is worth noting.
2255
2256 ## RV32G
2257
Normally, in standard RV32, it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
2262
2263 In an earlier draft of SV, it was possible to specify an elwidth
2264 of double the standard register size: this had to be dropped,
2265 and may be reintroduced in future revisions.
2266
2267 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2268
2269 When floating-point is not implemented, the size of the User Register and
2270 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2271 per table).
2272
2273 ## RV32E
2274
2275 In embedded scenarios the User Register and Predication CSRs may be
2276 dropped entirely, or optionally limited to 1 CSR, such that the combined
2277 number of entries from the M-Mode CSR Register table plus U-Mode
2278 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2279 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2280 the Predication CSR tables.
2281
2282 RV32E is the most likely candidate for simply detecting that registers
2283 are marked as "vectorised", and generating an appropriate exception
2284 for the VL loop to be implemented in software.
2285
2286 ## RV128
2287
RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.
2292
2293 # Under consideration <a name="issues"></a>
2294
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the recommendation is:
2297
2298 * For the unused elements in an integer register, the used element
2299 closest to the MSB is sign-extended on write and the unused elements
2300 are ignored on read.
2301 * The unused elements in a floating-point register are treated as-if
2302 they are set to all ones on write and are ignored on read, matching the
2303 existing standard for storing smaller FP values in larger registers.
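A sketch of these two write rules for the example of three 16-bit elements in a 64-bit register (function names are illustrative; the FP case models the "all ones" convention for the unused slot, analogous to the existing NaN-boxing standard):

```c
#include <stdint.h>

/* Integer rule: the used element closest to the MSB (e[2]) is
   sign-extended on write; the unused top slot is ignored on read. */
uint64_t pack_int_elements(const int16_t e[3])
{
    uint64_t r = 0;
    r |= (uint16_t)e[0];
    r |= (uint64_t)(uint16_t)e[1] << 16;
    r |= (uint64_t)(int64_t)e[2] << 32;   /* sign bits fill 63:48 */
    return r;
}

/* FP rule: unused elements are treated as-if set to all ones on
   write, and ignored on read. */
uint64_t pack_fp_elements(const uint16_t f[3])
{
    uint64_t r = ~0ULL;                   /* start with all ones */
    r &= ~0xFFFFFFFFFFFFULL;              /* clear the three used slots */
    r |= f[0] | ((uint64_t)f[1] << 16) | ((uint64_t)f[2] << 32);
    return r;
}
```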
2304
2305 ---
2306
Regarding an "info register":
2308
2309 > One solution is to just not support LR/SC wider than a fixed
2310 > implementation-dependent size, which must be at least 
> 1 XLEN word, which can be read from a read-only CSR
2312 > that can also be used for info like the kind and width of 
2313 > hw parallelism supported (128-bit SIMD, minimal virtual 
2314 > parallelism, etc.) and other things (like maybe the number 
2315 > of registers supported). 
2316
2317 > That CSR would have to have a flag to make a read trap so
2318 > a hypervisor can simulate different values.
2319
2320 ----
2321
2322 > And what about instructions like JALR? 
2323
2324 answer: they're not vectorised, so not a problem
2325
2326 ----
2327
2328 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2329 XLEN if elwidth==default
2330 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2331 *32* if elwidth == default
2332
2333 ---
2334
2335 TODO: update elwidth to be default / 8 / 16 / 32
2336
2337 ---
2338
2339 TODO: document different lengths for INT / FP regfiles, and provide
2340 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2341
2342 ---
2343
TODO: update to remove RegCam and PredCam CSRs; just use SVprefix and VLIW format