# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 Jun 2019
* Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]

With thanks to:

* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has
several unplanned side-effects, including code-size reduction and
expansion of the HINT space. The reason for creating it is to provide
a manageable way to turn a pre-existing design into a parallel one,
in a step-by-step incremental fashion, without adding any new opcodes,
thus allowing the implementor to focus on adding hardware only where
it is needed and necessary. The primary target is mobile-class 3D GPUs
and VPUs, with secondary goals being to reduce executable size and to
reduce context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar
registers for redirection. When such a tagged register is used in any
instruction, it indicates that the PC shall **not** be incremented;
instead a loop is activated where *multiple* instructions are issued
to the pipeline (as determined by a length CSR), with contiguously
incrementing register numbers starting from the tagged register. Only
when the last "element" has been reached is the PC permitted to move
on. Thus Simple-V effectively sits (slots) *in between* the
instruction decode phase and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in
software instead. Whilst much slower, it is "compliant" with the SV
specification, and may be suited to RV32E and to situations where the
implementor wishes to focus on certain aspects of SV without sinking
unnecessary time and resources into the silicon, whilst still
conforming strictly with the API. A good area to punt to software
would be, for example, the polymorphic element width capability.

Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the
implementor**.
# Basic Operation

The principle of SV is as follows:

* Standard RV instructions are "prefixed" (extended) through a 48/64
  bit format (single instruction option) or a variable
  length VLIW-like prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as
  "vectorised". Predicates can also be added.
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register
  that has been so "marked" ("tagged"), a hardware "macro-unrolling
  loop" is activated, of length VL, that effectively issues
  **multiple** identical instructions using contiguous
  sequentially-incrementing register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.

To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISC-V instructions.
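
The macro-unrolling loop described above can be sketched in ordinary
Python. This is a non-normative illustration only: the function name,
the `is_vector` tag array and the plain-list register file are all
assumptions made for the sketch, not part of the specification.

```python
# Illustrative sketch of the SV hardware macro-unrolling loop for a
# scalar "add rd, rs1, rs2" whose registers may be tagged as vectors.
def sv_add(regfile, rd, rs1, rs2, vl, is_vector):
    for i in range(vl):
        # tagged (vector) operands increment per element;
        # untagged (scalar) operands stay fixed
        d  = rd  + i if is_vector[rd]  else rd
        s1 = rs1 + i if is_vector[rs1] else rs1
        s2 = rs2 + i if is_vector[rs2] else rs2
        regfile[d] = regfile[s1] + regfile[s2]
        if not is_vector[rd]:  # scalar destination: no hardware loop
            break
    return regfile
```

Note how, when no register is tagged, the loop degenerates to a single
scalar add: exactly the unmodified RV behaviour.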

# CSRs <a name="csrs"></a>

* An optional "reshaping" CSR key-value table which remaps from a 1D
  linear shape to 2D or 3D, including full transposition.

There are also five additional User-Mode CSRs:

* uMVL (the Maximum Vector Length)
* uVL (which has different characteristics from standard CSRs)
* uSUBVL (effectively a kind of SIMD)
* uEPCVLIW (a copy of the sub-execution Program Counter, that is
  relative to the start of the current VLIW Group, set on a trap).
* uSTATE (useful for saving and restoring during context switch,
  and for providing fast transitions)

There are also five additional CSRs for Supervisor-Mode:

* SMVL
* SVL
* SSUBVL
* SEPCVLIW
* SSTATE

And likewise for M-Mode:

* MMVL
* MVL
* MSUBVL
* MEPCVLIW
* MSTATE

Both Supervisor and M-Mode have their own CSR registers, independent
of the other privilege levels, in order to make it easier to use
Vectorisation in each level without affecting other privilege levels.

The access pattern for these groups of CSRs in each mode follows the
same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is
  transparently identical to changing the S-Mode CSRs. Accessing and
  changing the U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and M-Mode CSRs
  is prohibited.

In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
M-Mode MVL, the M-Mode STATE and so on that influence the processor
behaviour. Likewise for S-Mode, and likewise for U-Mode.

This has the interesting benefit of allowing M-Mode (or S-Mode) to be
set up, for context-switching to take place, and, on return back to
the higher privileged mode, the CSRs of that mode will be exactly as
they were. Thus, it becomes possible, for example, to set up the CSRs
best suited to aiding and assisting low-latency fast context-switching
*once and only once* (for example at boot time), without the need to
re-initialise them on every context-switch.

Another interesting side-effect of separate S-Mode CSRs is that
Vectorised saving of the entire register file to the stack is a single
instruction (accidental provision of LOAD-MULTI semantics). It can
even be predicated, which opens up some very interesting
possibilities.

The xEPCVLIW CSRs must be treated exactly like their corresponding
xepc equivalents. See the VLIW section for details.
## MAXVECTORLENGTH (MVL) <a name="mvl" />

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable-length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers,
when marked as such, may fit into a single register as opposed to
fanning out over several registers. This keeps the implementation a
little simpler.

The other important factor to note is that the actual MVL is
internally stored **offset by one**, so that it can fit into only 6
bits (for RV64) and still cover a range up to XLEN bits. Attempts to
set MVL to zero will raise an exception. This is expressed more
clearly in the "pseudocode" section, where there are subtle
differences between CSRRW and CSRRWI.
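
The offset-by-one storage can be sketched as follows. This is a
non-normative illustration assuming RV64, so that XLEN is 64 and the
stored field is 6 bits wide; the function names are invented for the
sketch.

```python
XLEN = 64  # assumption for this sketch: RV64

def mvl_encode(mvl):
    """Legal MVL values 1..XLEN are stored offset by one (0..63),
    fitting a 6-bit field; attempting to set zero raises."""
    if mvl == 0 or mvl > XLEN:
        raise ValueError("illegal MVL")
    return mvl - 1

def mvl_decode(field):
    """Recover MVL from the internally-stored 6-bit field."""
    return field + 1
```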

## Vector Length (VL) <a name="vl" />

VSETVL is slightly different from RVV. As with RVV, VL is set to be
within the range 1 <= VL <= MVL (where MVL in turn is limited to
1 <= MVL <= XLEN):

    VL = rd = MIN(vlen, MVL)

    where 1 <= MVL <= XLEN

However, just like MVL, it is important to note that the range for VL
has subtle design implications, covered in the "CSR pseudocode"
section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to
XLEN is down to the fact that predication bits fit into a single
register of length XLEN bits.

The second change is that when VSETVL is requested to store its result
into x0, it is silently *ignored* (VSETVL x0, x5).

The third and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).

This has implications for the microarchitecture, as VL is required to
be set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that
suits the conditions and the micro-architecture: SV does *not* permit
this.

The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with
only 2-3 instructions (setup of the CSRs, VSETVL x0, x0,
#{regfilelen-1}, single LD/ST operation). If VL does *not* get set to
the register file length when VSETVL is called, then a software loop
would be needed. To avoid this need, VL *must* be set to exactly what
is requested (limits notwithstanding).

Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore-IV); however this must be
*entirely* transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (SVSTATE). This CSR behaves
exactly as standard CSRs, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, all parallel operations cease: the
hardware loop is reduced to a single element: scalar operations.

## SUBVL - Sub Vector Length

This is a "group by quantity" that effectively divides VL into groups
of elements of length SUBVL. VL itself must therefore be set in
advance to a multiple of SUBVL.

Legal values are 1, 2, 3 and 4, stored in the STATE CSR as the 2-bit
values 0b00 thru 0b11.

Setting this CSR to 0 must raise an exception. Setting it to a value
greater than 4 likewise.

The main effect of SUBVL is that predication bits are applied per
**group**, rather than by individual element.

This saves a not-insignificant number of instructions when handling 3D
vectors, as otherwise a much longer predicate mask would have to be
set up with regularly-repeated bit patterns.
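
Per-group predication can be sketched as below. This is illustrative
only; the function name and the integer mask representation are
assumptions made for the sketch.

```python
def element_enabled(pred_mask, element, subvl):
    """With SUBVL, one predicate bit covers a whole group of SUBVL
    elements, so element i is governed by mask bit (i // subvl)."""
    return bool(pred_mask & (1 << (element // subvl)))
```

For example, with subvl=3 (3D vectors) and mask 0b101, elements 0-2
and 6-8 are enabled while 3-5 are masked out: one bit per vec3 instead
of a repeated 3-bit pattern per group.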

## STATE

This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of):

* MVL
* VL
* SUBVL
* the destination element offset of the current parallel instruction
  being executed
* and, for twin-predication, the source element offset as well.

Interestingly, STATE may hypothetically also be used to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs.

Setting destoffs and srcoffs is realistically intended for saving
state so that exceptions (page faults in particular) may be serviced
and the hardware loop that was being executed at the time of the trap,
from User-Mode (or Supervisor-Mode), may be returned to and continued
from exactly where it left off. The reason why this works is that
User-Mode STATE is not used (and not changed) in M-Mode or S-Mode
(which is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| rsvd     | subvl    | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1
  to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to
  MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 values;
  however, if VL is not a multiple of SUBVL, an exception will be
  raised.
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
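
A non-normative sketch of packing and unpacking the STATE fields, with
the truncation rules above applied on unpack. Field positions follow
the STATE table; MVL, VL and SUBVL are stored offset by one, and XLEN
is assumed to be 64 for the sketch.

```python
XLEN = 64  # assumption for this sketch

def state_pack(mvl, vl, srcoffs, destoffs, subvl):
    # MVL, VL and SUBVL are stored minus one
    return ((mvl - 1)
            | (vl - 1) << 6
            | srcoffs << 12
            | destoffs << 18
            | (subvl - 1) << 24)

def state_unpack(value):
    # apply the truncation rules as each field is extracted
    mvl      = min((value & 0x3f) + 1, XLEN)
    vl       = min(((value >> 6) & 0x3f) + 1, mvl)
    srcoffs  = min((value >> 12) & 0x3f, vl - 1)
    destoffs = min((value >> 18) & 0x3f, vl - 1)
    subvl    = ((value >> 24) & 0b11) + 1
    return mvl, vl, srcoffs, destoffs, subvl
```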

## MVL and VL Pseudocode

The pseudo-code for get and set of VL and MVL uses the following
internal functions:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes: returning the new value, NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL

Note that whilst setting MVL behaves as a normal CSR (returning the
old value), setting VL, unlike standard CSR behaviour, will return the
**new** value of VL, **not** the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In
order to maximise the effectiveness, an immediate of 0 is used to set
VL=1, an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of returning that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3          # sets MVL == **4** (not 3)
    j zerotest             # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
    ld a3, a1              # load 4 registers a3-6 from x
    slli t1, t0, 3         # t1 = vl * 8 (in bytes)
    ld a7, a2              # load 4 registers a7-10 from y
    add a1, a1, t1         # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0         # n -= vl (t0)
    st a7, a2              # store 4 registers a7-10 to y
    add a2, a2, t1         # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop          # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        MVL = set_mvl_csr(value[11:6]+1)
        VL = set_vl_csr(value[5:0]+1)
        srcoffs = value[17:12]
        destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]

In both cases, whilst CSR reads of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.
## Register key-value (CAM) table <a name="regcsrtable" />

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VLIW instruction format,
and entries may be overridden by the SVPrefix format.*

The purpose of the Register table is three-fold:

* To mark integer and floating-point registers as requiring
  "redirection" if ever used as a source or destination in any given
  operation. This involves a level of indirection through a 5-to-7-bit
  lookup table, such that **unmodified** operands with 5 bits (3 for
  Compressed) may access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
  register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation
  would normally give the register.

16-bit format:

| RegCAM | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | ------- | -------- | --- | ------ | ------ |
| 0      | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | isvec15 | regidx15 | i/f | vew15  | regkey |

8-bit format:

| RegCAM | 7   | (6..5) | (4..0) |
| ------ | --- | ------ | ------ |
| 0      | i/f | vew0   | regnum |

i/f is set to "1" to indicate that the redirection/tag entry is to be
applied to integer registers; 0 indicates that it is relevant to
floating-point registers.

The 8-bit format is used for a much more compact expression. "isvec"
is implicit and, similar to [[sv-prefix-proposal]], the target vector
is implicitly "regnum<<2". Contrast this with the 16-bit format, where
the target vector is *explicitly* named in bits 8 to 14, and bit 15
may optionally set "scalar" mode.

Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
and thus the "vector" mode need only shift the (6-bit) regnum by 1 to
get the actual (7-bit) register number to use, there is not enough
space in the 8-bit format, so "regnum<<2" is required.
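
Decoding of the 8-bit entry can be sketched as follows. The function
name and the returned dictionary layout are illustrative assumptions;
the field positions are taken directly from the 8-bit format table
above.

```python
def decode_regtag8(entry):
    """Decode an 8-bit Register-table entry: i/f in bit 7, vew in
    bits 6..5, regnum in bits 4..0. "isvec" is implicit, and the
    (7-bit) target vector register is implicitly regnum << 2."""
    return {
        "is_int": bool((entry >> 7) & 1),  # 1 = integer regfile
        "vew":    (entry >> 5) & 0b11,     # element-width override
        "target": (entry & 0b11111) << 2,  # implicit vector target
        "isvec":  True,                    # implicit in 8-bit form
    }
```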

vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:

    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey              // INT/FP src/dst reg in opcode
        tb[idx].elwidth  = CSRvec[i].elwidth
        tb[idx].regidx   = CSRvec[i].regidx   // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed   = CSRvec[i].packed   // SIMD or not
## Predication Table <a name="predication_csr_table"></a>

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VLIW instruction format,
and entries may be overridden by the SVPrefix format.*

The Predication Table is a key-value store indicating whether, if a
given destination register (integer or floating-point) is referred to
in an instruction, it is to be predicated. Like the Register table, it
is an indirect lookup that allows the RV opcodes to not need
modification.

It is particularly important to note that the *actual* register used
can be *different* from the one that is in the instruction, due to the
redirection through the lookup table.

* regidx is the register which, in combination with the i/f flag,
  causes the lookup table to be referenced (to find the predication
  mask to use for the operation) whenever that integer or
  floating-point register is referred to in a (standard RV)
  instruction.
* predidx is the *actual* (full, 7-bit) register to be used for the
  predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use, *without* actually modifying the contents of the
  register from which those bits originated.
* zeroing is either 1 or 0, and if set to 1, the operation must
  place zeros in any element position where the predication mask is
  set to zero. If zeroing is set to 0, unpredicated elements *must*
  be left alone. Some microarchitectures may choose to interpret
  this as skipping the operation entirely. Others, which wish to
  stick more closely to a SIMD architecture, may choose instead to
  interpret unpredicated elements as an internal "copy element"
  operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming).

16-bit format:

| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0    |
| ----- | -------- | ------ | ----- | --- | ------- | ---- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | rsvd |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | rsvd |
| ...   | predkey  | .....  | ....  | i/f | ....... | rsvd |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | rsvd |

8-bit format:

| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | ----- | ---- | --- | ------ |
| 0     | zero0 | inv0 | i/f | regnum |

The 8-bit format is a compact and less expressive variant of the full
16-bit format. Using the 8-bit format is very different: the predicate
register to use is implicit, with numbering beginning implicitly from
x9. The regnum is still used to "activate" predication, in the same
fashion as described above.
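
Bitfield extraction for the 16-bit entry can be sketched as follows.
This is illustrative only: the function name and dictionary layout are
assumptions, and the field names and positions are taken directly from
the 16-bit format table above.

```python
def decode_predtab16(entry):
    """Extract the fields of a 16-bit Predication-table entry:
    predkey (15..11), zero (10), inv (9), i/f (8), regidx (7..1);
    bit 0 is reserved."""
    return {
        "predkey": (entry >> 11) & 0x1f,
        "zero":    bool((entry >> 10) & 1),
        "inv":     bool((entry >> 9) & 1),
        "is_int":  bool((entry >> 8) & 1),
        "regidx":  (entry >> 1) & 0x7f,
    }
```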

The 16-bit Predication Table is a key-value store, so
implementation-wise it will be faster to turn the table around
(maintain topologically equivalent state):

    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero    = CSRpred[i].zero
        tb[idx].inv     = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true

So when an operation is to be predicated, it is the internal state
that is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the
following pseudo-code for operations is given, where p is the explicit
(direct) reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal*
state table generated from the Predication CSR key-value store, which
is used as follows.

    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == FP, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0

Note:

* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must ALSO all go through
  register-level redirection (from the Register table) if they are
  vectors.

If written as a function, obtaining the predication mask (and whether
zeroing takes place) may be done as follows:

    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg // Register table
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred // Predication table
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx   // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate  // invert ALL bits
        return predicate, tb[reg].zero

Note here, critically, that **only** if the register is marked
in its **register** table entry as being "active" does the testing
proceed further to check if the **predicate** table entry is
also active.

Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.

## REMAP CSR <a name="remap" />

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the
underlying *real* register (see regidx, the value), and are
consequently 7 bits wide. Given that reshaping x0 would be pointless,
a value of zero (referring to x0) is used to indicate "disabled".
shape0-2 each refer to one of three SHAPE CSRs. A value of 0x3 is
reserved. Bits 7, 15, 23, 30 and 31 are also reserved, and must be
set to zero.

It is anticipated that these specialist CSRs will not be used very
often. Unlike the CSR Register and Predication tables, the REMAP CSRs
use the full 7-bit regidx, so that they can be set once and left
alone, whilst the CSR Register entries pointing to them are disabled,
instead.
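
Field extraction for REMAP can be sketched as follows (the function
name and the returned structure are illustrative; the bit positions
are taken directly from the REMAP table above):

```python
def decode_remap(csr):
    """Extract the three (shape, regidx) pairs from the 32-bit REMAP
    CSR: regidx fields at bits 6..0, 14..8 and 22..16; 2-bit shape
    selectors at bits 25..24, 27..26 and 29..28. regidx == 0 (x0)
    marks that entry as disabled."""
    entries = []
    for n in range(3):
        regidx = (csr >> (8 * n)) & 0x7f
        shape  = (csr >> (24 + 2 * n)) & 0b11
        entries.append({"regidx": regidx, "shape": shape,
                        "enabled": regidx != 0})
    return entries
```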

## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1 and SHAPE2, 32 bits each,
all with the same format. When a SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D)
vector.

| 26..24  | 23      | 22..16 | 15      | 14..8  | 7       | 6..0   |
| ------- | ------- | ------ | ------- | ------ | ------- | ------ |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0
indicates that the array dimensionality for that dimension is 1. A
value of xdimsz=2 would indicate that in the first dimension there are
3 elements in the array. The format of the array is therefore as
follows:

    array[xdim+1][ydim+1][zdim+1]

However, whilst illustrative of the dimensionality, that does not take
the "permute" setting into account. "permute" may be any one of six
values (0-5; values of 6 and 7 are reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a Python program:

    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]   # starting indices
    order = [1,0,2]  # experiment with different permutations, here
    offs = 0         # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=" ")
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would
normally run from 0 to VL-1 to refer to contiguous register elements;
instead, where REMAP indicates to do so, the element index is run
through the above algorithm to work out the **actual** element index.
Given that there are three possible SHAPE entries, up to three
separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for
one operation, followed by setting a different permutation order,
without having to move the values in the registers to or from memory.
Also, the reason for having REMAP separate from the three SHAPE CSRs
is so that in a chain of matrix multiplications and additions, for
example, the SHAPE CSRs need only be set up once; only the REMAP CSR
need be changed to target different registers.

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown.
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order likewise
  does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it is the **offset** that is
  remapped (see the Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions are the fundamental core basis of SV, so are not
  parallelised, although a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on.
  If parallelised, extreme care would need to be taken. Additionally,
  CSR reads are done using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions is therefore guaranteed
  not to be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
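
As a rough, non-normative illustration, the same loop can be modelled in
executable Python. Here `isvector` is a hypothetical stand-in for the CSR
register-tagging table, and the regfile is pre-filled so results are
easy to follow:

```python
VL = 4
ireg = list(range(32))          # regfile: ireg[n] == n, for visibility
isvector = {3: True, 10: True}  # hypothetical tags: x3 and x10 are vectors

def op_add(rd, rs1, rs2, predval=0b1111):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):          # predication: one bit per element
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not isvector.get(rd, False):
                break                   # scalar rd: single result only
        # only vector-tagged operands advance their element offsets
        if isvector.get(rd, False):  id += 1
        if isvector.get(rs1, False): irs1 += 1
        if isvector.get(rs2, False): irs2 += 1

# vector rd=x10, vector rs1=x3, scalar rs2=x20: x10..x13 = x3..x6 + x20
op_add(10, 3, 20)
assert ireg[10:14] == [23, 24, 25, 26]
```

The scalar rs2 is re-used for every element, giving vector-scalar ADD with
no change to the instruction itself, which is the whole point of SV.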

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV, FCVT, and
LOAD/STORE depending on CSR configurations for bitwidth and predication.
**Everything** becomes parallelised. *This includes Compressed
instructions* as well as any future instructions and Custom Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by swapping
src1 and src2.
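
To illustrate the operand-swap synthesis (with plain Python comparisons
standing in, non-normatively, for the branch comparison opcodes):

```python
# base comparisons that have real RV opcodes
def blt(a, b): return a < b
def bge(a, b): return a >= b

# pseudo-ops synthesised purely by swapping the operands
def bgt(a, b): return blt(b, a)   # a > b  <=>  b < a
def ble(a, b): return bge(b, a)   # a <= b <=>  b >= a

assert bgt(5, 3) and not bgt(3, 5)
assert ble(3, 3) and ble(2, 3) and not ble(4, 3)
```

The same trick carries over unchanged to the SV predicated variants,
since the expansion operates on the operands before comparison.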

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                              s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication is now taken from src2. Also, the branch goes ahead
if all compares are successful.

Note also that where normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case: src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see the Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target: i.e. it's not actually a Branch at all, it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or
a BNE (or a compressed comparison to zero or non-zero), in predication
terms it becomes more of an impact. To deal with this, SV's predication
has had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated, based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz rs1 is equivalent to beqz rs1,x0, the optional target
in which to store the results of the comparisons is taken from the CSR
predication table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
albeit counterintuitive at first. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely
obtain a predication target it is the only sensible option that does not
involve additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15 12  | 11 7 | 6 2 | 1 0 |
funct4 | rd   | rs  | op  |
4      | 5    | 5   | 2   |
C.MV   | dest | src | C0  |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            ireg[rd+j] <= ireg[rs+i];
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src    | dest   | predication | op             |
scalar | vector | none        | VSPLAT         |
scalar | vector | destination | sparse VSPLAT  |
scalar | vector | 1-bit dest  | VINSERT        |
vector | scalar | 1-bit? src  | VEXTRACT       |
vector | vector | none        | VCOPY          |
vector | vector | src         | Vector Gather  |
vector | vector | dest        | Vector Scatter |
vector | vector | src & dest  | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY   |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.
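
The VMERGE observation can be sketched as follows: two predicated
element-copy passes, the second with the mask inverted, merge two
vectors into one destination. This is a non-normative illustration
(`pred_mv` is a hypothetical helper standing in for one predicated C.MV):

```python
def pred_mv(dst, src, mask):
    """One predicated C.MV pass: copy src[i] where the mask bit is set."""
    return [s if (mask >> i) & 1 else d
            for i, (d, s) in enumerate(zip(dst, src))]

a = [10, 11, 12, 13]
b = [20, 21, 22, 23]
mask = 0b0101

merged = pred_mv([0, 0, 0, 0], a, mask)      # first C.MV: mask as-is
merged = pred_mv(merged, b, mask ^ 0b1111)   # second C.MV: inverted mask
assert merged == [10, 21, 12, 23]            # element-wise a/b merge
```

Macro-op fusion of the two passes would make this a single-cycle-issue
VMERGE in hardware without any dedicated opcode.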

Note that in the instance where the Compressed Extension is not
implemented, MV may be used instead, but note that it is a pseudo-operation
mapping to addi rd, rs, 0. The behaviour is **different** from C.MV
because with addi the predication mask to use is taken **only** from rd
and is applied against all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type-conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable, i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32 bits are *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that the rs1 element width is to be taken as 32.

Similar rules apply to the destination register.
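
The FCVT.S.L example can be sketched non-normatively in Python: the one
64-bit source value is treated as two packed signed 32-bit integer
elements, each converted separately (Python floats stand in here for the
single-precision results; the function name is illustrative):

```python
def fcvt_s_l_packed(rs1_val):
    """Split a 64-bit register value into two 32-bit elements
    (elwidth = default/2, packed SIMD) and convert each to FP."""
    lo = rs1_val & 0xFFFFFFFF
    hi = (rs1_val >> 32) & 0xFFFFFFFF

    def to_signed(v):  # reinterpret each half as signed 32-bit
        return v - (1 << 32) if v & (1 << 31) else v

    return [float(to_signed(lo)), float(to_signed(hi))]

packed = (7 << 32) | 3          # two 32-bit elements: 3 (lo), 7 (hi)
assert fcvt_s_l_packed(packed) == [3.0, 7.0]
```

One scalar FCVT opcode thereby yields two converted elements per source
register, purely as a consequence of the CSR elwidth tag.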

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV; however, just as with C.MV, it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec)
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see the special element width section).

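
The srcbase decision-making can be sketched as executable Python. This is
a non-normative model with predication and zeroing omitted; `mem` is a
hypothetical byte-addressed dictionary of 8-byte words:

```python
XLEN = 64

def op_ld(mem, ireg, rd, rs, rs_isvec, VL, imm_offs=0):
    """LD hardware-loop: scalar rs gives unit stride, vector rs gives
    one indirection address per element."""
    for i in range(VL):
        if rs_isvec:
            srcbase = ireg[rs + i]                # indirect (multi) mode
        else:
            srcbase = ireg[rs] + i * (XLEN // 8)  # unit-stride mode
        ireg[rd + i] = mem[srcbase + imm_offs]

mem = {0x100: 5, 0x108: 6, 0x110: 7, 0x200: 99}
ireg = [0] * 32

ireg[4] = 0x100                   # scalar base address in x4
op_ld(mem, ireg, 8, 4, False, 3)  # unit stride: x8..x10 from 0x100,0x108,0x110
assert ireg[8:11] == [5, 6, 7]

ireg[4], ireg[5] = 0x200, 0x100   # x4..x5 now a vector of addresses
op_ld(mem, ireg, 8, 4, True, 2)   # indirection: each element is an address
assert ireg[8:10] == [99, 5]
```

The only difference between the two modes is the single `isvector` test
on the source register's CSR entry, exactly as the pseudo-code shows.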
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in c-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where when accessing any individual regfile[n].b entry it is permitted
(in c) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt to access beyond the "real" register
bytes is ever made.

Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
            ...
            ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { ird += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Whilst the specific sign-extension and zero-extension pseudocode call
details are left out (each operation being different), the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

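As a cross-check of the rules above, here is a minimal executable model
(a sketch in Python; the helper names `zext` and `poly_add` are invented
for illustration): the operation happens at the maximum *source* width,
and truncation or extension to the destination width happens only
afterwards.

```python
def mask(w: int) -> int:
    return (1 << w) - 1

def zext(v: int, w: int) -> int:
    # zero-extension: just mask to the source element width
    return v & mask(w)

def poly_add(s1: int, w1: int, s2: int, w2: int, wd: int) -> int:
    """Polymorphic add: zero-extend both sources to max(w1, w2),
    add at that width, then truncate (or zero-extend) the result
    to the destination element width wd -- AFTER the operation."""
    opw = max(w1, w2)
    res = (zext(s1, w1) + zext(s2, w2)) & mask(opw)
    return res & mask(wd)
```

Note that with both sources at 8-bit, 0xFF + 0x01 wraps to 0x00 even if
the destination is 16-bit wide, because truncation to the operation
width precedes destination extension.
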
## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without raising
any kind of exception. Exactly as specified in the standard RV
specification, NaN (or appropriate) is stored if the result is beyond
the range of the destination, and, again exactly as with standard
scalar operations, the floating-point flag is raised (FCSR). And, just
as with scalar operations, it is software's responsibility to check
this flag. Given that the FCSR flags are "accrued", the fact that
multiple element operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that

* The operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.

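A small executable sketch (Python; `poly_sll` is an invented name) of the
truncation rule above: the shift amount is masked by the operation width,
max(w1, w2), never by the destination width.

```python
def poly_sll(rs1: int, w1: int, rs2: int, w2: int, wd: int) -> int:
    """Element-width-overridden shift-left: RS2 is truncated modulo
    the operation width max(w1, w2), mirroring scalar RV's
    RS2 & (xlen-1); the result is then fitted to the dest width."""
    opw = max(w1, w2)
    shamt = rs2 & (opw - 1)                             # truncate RS2 to 0..opw-1
    res = ((rs1 & ((1 << w1) - 1)) << shamt) & ((1 << opw) - 1)
    return res & ((1 << wd) - 1)
```

With RS1 at 16-bit and RS2 at 8-bit (the example above), a shift amount
of 20 is truncated to 20 & 15 = 4, regardless of the 64-bit destination.
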
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly as before, truncation or sign-extension is performed on
the result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem": it is assumed that the
programmer actually desires the result to be truncated, i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.

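The rule can be sanity-checked with a few lines of Python (the function
name `poly_mulhu` is invented; only the unsigned case is sketched here):

```python
def poly_mulhu(rs1: int, w1: int, rs2: int, w2: int) -> int:
    """Unsigned MULH with element-width overrides: multiply the
    masked sources, then shift down by the operation width
    max(w1, w2) to obtain the 'top half' of the product."""
    opw = max(w1, w2)
    prod = (rs1 & ((1 << w1) - 1)) * (rs2 & ((1 << w2) - 1))
    return prod >> opw
```

An 8x8 multiply of 0xFF by 0xFF gives 0xFE01, so MULHU returns 0xFE;
a 16x8 multiply shifts down by 16 instead.
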
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

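A hedged Python model of the address calculation above (the name
`ld_element_addr` and its arguments are invented; the offset is scaled
to bytes here, because the pseudocode's `offs` indexes a re-cast
elwidth-wide pointer):

```python
def ld_element_addr(ireg: dict, rs: int, i: int, imm: int,
                    opwidth: int, elwidth: int) -> int:
    """Byte address of element i for an elwidth-overridden LOAD.
    opwidth: 8 for LB, 16 for LH, 32 for LW, 64 for LD, 128 for LQ."""
    elsperblock = max(1, opwidth // elwidth)   # elements per address reg
    srcbase = ireg[rs + i // elsperblock]      # next reg per full block
    offs = i % elsperblock                     # element index in block
    return srcbase + imm + offs * (elwidth // 8)
```

For LW (32-bit) with a 16-bit source elwidth, elements 0 and 1 come from
the address held in `ireg[rs]` (byte offsets 0 and 2), and element 2
moves on to the address held in `ireg[rs+1]`.
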
When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, bitwidth, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* that due to the use of a twin-predication algorithm, LOAD/STORE also
  take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* that due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table, which, due to the element width being 16 and
the operation being LD (64), has the 64 bits loaded from memory
subdivided into groups of **four** elements. And, with VL being 7
(deliberately, to illustrate that this is reasonable and possible),
the first four are sourced from the offset addresses pointed to by x5,
and the next three from the offset addresses pointed to by the next
contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 were also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11. The end result is that elements
0 and 1 end up in x8, with element 1 being shifted up 32 bits, and so
on, until finally element 6 is in the LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right
byte order, the registers are shown in right-to-left (MSB) order. This
does **not** imply that bit or byte-reversal is carried out: it's just
easier to visualise memory as being contiguous bytes, and it emphasises
that registers are not really actually "memory" as such.

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more, allowing 64, 128 and some other options besides. The answer is
that it gets too complex; that no RV128 implementation yet exists; and
that RV64's default is 64 bit, so the 4 major element widths are
covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or other value where
  groups of elwidth sized elements do not fill an entire XLEN register),
  the "top" bits of the destination register do *NOT* get modified, zero'd
  or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64-bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth-based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
  concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

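The example above can be reproduced with a few lines of Python (a
sketch; `sv_add_vec8` is an invented name), showing that only the low
VL bytes of the destination change:

```python
def sv_add_vec8(rd_old: int, rs1: int, rs2: int, vl: int) -> int:
    """elwidth=8 vector add of vl elements: bytes 0..vl-1 of rd are
    replaced with per-byte sums; bytes vl..7 are PRESERVED."""
    res = rd_old
    for i in range(vl):
        s = ((rs1 >> (8 * i)) + (rs2 >> (8 * i))) & 0xFF  # byte-wide add
        res = (res & ~(0xFF << (8 * i))) | (s << (8 * i))
    return res
```

With rd previously all-ones and VL=3, bits 24 through 63 stay as 0xFF
bytes, untouched by the three 8-bit adds.
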
Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
  that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111,
  i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
  were a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
  and stores them in rd.

This requires a read on rd; however this is required anyway in order
to support non-zeroing mode.

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
  top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
  floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
  in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q with non-default
elwidth values are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know
the difference, such that when VL=8 and an ADD.D instruction is issued,
it completes in 2 cycles (or more) rather than one, where an ADD.Q
issued on such simpler microarchitectures would complete in one.

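The preserve-the-top-bits rule can be sketched in Python (an
illustrative model only; `fp_writeback` and its parameters are invented
names):

```python
def fp_writeback(freg_old: int, results: list, elwidth: int) -> int:
    """Write elwidth-wide FP element results into the low part of a
    wide FP register. With non-default elwidth, bits above the written
    elements are PRESERVED -- no NaN-boxing of the upper parts."""
    res = freg_old
    for i, v in enumerate(results):
        sh = i * elwidth
        m = ((1 << elwidth) - 1) << sh
        res = (res & ~m) | ((v << sh) & m)   # replace only this element
    return res
```

Writing two 32-bit results into a 128-bit register previously holding
all 1s leaves the top 64 bits all 1s, exactly as the FLD/ADD.D sequence
above requires.
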
## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.

Effectively, however, both rs1 and rs2 are being sign-extended (or
truncated), where for add they are both zero-extended. This holds true
for all arithmetic operations ending with "W".

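The add/addw contrast can be demonstrated with a short Python sketch
(the names `ext` and `poly_op` are invented); the only difference is
which extension the narrower source receives:

```python
def ext(v: int, frm: int, to: int, signed: bool) -> int:
    """Zero- or sign-extend a frm-bit value out to to bits."""
    v &= (1 << frm) - 1
    if signed and v & (1 << (frm - 1)):
        v |= ((1 << to) - 1) & ~((1 << frm) - 1)  # fill upper bits
    return v

def poly_op(s1, w1, s2, w2, wd, signed):
    """add (signed=False) vs addw (signed=True) at polymorphic widths."""
    opw = max(w1, w2)
    res = ext(s1, w1, opw, signed) + ext(s2, w2, opw, signed)
    res &= (1 << opw) - 1
    return ext(res, opw, wd, signed) if wd > opw else res & ((1 << wd) - 1)
```

With an 8-bit 0xFF (i.e. -1 when signed) and a 16-bit 5, add yields
0x0104 where addw yields 0x0004.
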
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed through en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip elements entirely where the predicate bit is clear
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.

2064 ## Twin-predication (based on source and destination register)
2065
2066 Twin-predication is not that much different, except that that
2067 the source is independently zero-predicated from the destination.
2068 This means that the source may be zero-predicated *or* the
2069 destination zero-predicated *or both*, or neither.
2070
2071 When with twin-predication, zeroing is set on the source and not
2072 the destination, if a predicate bit is set it indicates that a zero
2073 data element is passed through the operation (the exception being:
2074 if the source data element is to be treated as an address - a LOAD -
2075 then the data returned *from* the LOAD is zero, rather than looking up an
2076 *address* of zero.
2077
2078 When zeroing is set on the destination and not the source, then just
2079 as with single-predicated operations, a zero is stored into the destination
2080 element (or target memory address for a STORE).
2081
2082 Zeroing on both source and destination effectively result in a bitwise
2083 NOR operation of the source and destination predicate: the result is that
2084 where either source predicate OR destination predicate is set to 0,
2085 a zero element will ultimately end up in the destination register.
2086
2087 However: this may not necessarily be the case for all operations;
2088 implementors, particularly of custom instructions, clearly need to
2089 think through the implications in each and every case.
2090

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
      pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL):
        if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
        if ((pd & 1<<j))
          if ((ps & 1<<i))          # source predicate bit set: copy
            sourcedata = ireg[rs+i];
          else                      # zerosrc: pass a zero through
            sourcedata = 0
          ireg[rd+j] <= sourcedata
        else if (zerodst)
          ireg[rd+j] <= 0
        if (int_csr[rs].isvec)
          i++;
        if (int_csr[rd].isvec)
          j++;
        else
          if ((pd & 1<<j))
            break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
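
The pseudo-code above can be modelled executably as follows (Python,
vector-source / vector-destination case only; a sketch for illustration,
not normative):

```python
# Executable sketch of twin zero-predicated MV: ps/pd are the source and
# destination predicates, zerosrc/zerodst the respective zeroing flags.
def twin_pred_mv(dst, src, ps, pd, zerosrc, zerodst, vl):
    i = j = 0
    while i < vl and j < vl:
        if not zerosrc:                    # skip masked-out source elements
            while not (ps & (1 << i)):
                i += 1
                if i >= vl:
                    return dst
        if not zerodst:                    # skip masked-out dest elements
            while not (pd & (1 << j)):
                j += 1
                if j >= vl:
                    return dst
        if pd & (1 << j):
            # source bit clear (only possible with zerosrc): pass a zero through
            dst[j] = src[i] if ps & (1 << i) else 0
        elif zerodst:
            dst[j] = 0                     # destination zeroing
        i += 1
        j += 1
    return dst

# zeroing on the source: a zero is passed through where ps is clear
print(twin_pred_mv([9] * 4, [1, 2, 3, 4], 0b1011, 0b1111,
                   zerosrc=True, zerodst=False, vl=4))   # [1, 2, 0, 4]
# zeroing on the destination: masked-out dest elements are cleared
print(twin_pred_mv([9] * 4, [1, 2, 3, 4], 0b1111, 0b0101,
                   zerosrc=False, zerodst=True, vl=4))   # [1, 0, 3, 0]
```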

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.
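
As a sketch of the principle (hypothetical names, not the actual CSR
layout): the only state needed to resume a partially-executed vector
operation is the index of the next element to process:

```python
# Sketch: a trap at element k leaves vstart = k, so re-executing the same
# instruction resumes at element k rather than restarting at element 0.
class State:
    def __init__(self):
        self.vstart = 0  # index of the next element to execute

def vector_op(state, vl, element_op):
    while state.vstart < vl:
        element_op(state.vstart)  # may raise, e.g. a page fault on a LOAD
        state.vstart += 1
    state.vstart = 0              # completed: reset for the next instruction

state = State()
executed = []
faulted = []

def element_op(i):
    if i == 2 and not faulted:    # simulate a fault on element 2, once
        faulted.append(True)
        raise MemoryError("page fault")
    executed.append(i)

try:
    vector_op(state, 4, element_op)
except MemoryError:
    pass                          # "trap handler" runs; state.vstart == 2
vector_op(state, 4, element_op)   # re-execution resumes at element 2
print(executed)                   # [0, 1, 2, 3]
```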

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the micro-architecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the micro-architecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix | |

VL/MAXVL/SubVL Block:

| 31:30 | 29:28 | 27:22 | 21:17 | 16 |
| ----- | ----- | ------ | ----- | --- |
| 0 | SubVL | VLdest | VLEN | vlt |
| 1 | SubVL | VLdest | VLEN | |

(In the second row, the 6-bit immediate VLEN occupies bits 21:16.)

Note: this format is very similar to that used in [[sv_prefix_proposal]].

If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).
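
The rule reduces to the following sketch (hypothetical names; VL and
MAXVL are taken as plain integers):

```python
# Sketch of the VL-setting rule: truncate to MAXVL, then write the
# post-truncation value to the scalar register VLdest (unless it is x0).
def set_vl(requested_vl, maxvl, regfile, vldest):
    vl = min(requested_vl, maxvl)
    if vldest != 0:           # VLdest == 0: VL is still set, but not stored
        regfile[vldest] = vl
    return vl

regfile = [0] * 32
print(set_vl(10, 8, regfile, 5))   # 8  (truncated to MAXVL)
print(regfile[5])                  # 8
print(set_vl(3, 8, regfile, 0))    # 3  (no regfile write)
```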

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value is stored in the scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2, and so on.

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the prefix block format is the full 16 bit format
  (1) or the compact, less expressive 8 bit format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries is needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: the total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions within a VLIW block.
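
Putting the prefix table and the notes together, a decoder for the 16-bit
prefix might look like this (a sketch under the interpretation above, in
particular that bit 7 selects the compact format for both pplen and rplen):

```python
# Decoder sketch for the 16-bit VLIW prefix: fields as per the table
# (vlset=15, 16xil=14:12, pplen=11:10, rplen=9:8, mode=7, opcode=6:0).
def decode_vliw_prefix(insn16):
    assert insn16 & 0x7f == 0b1111111, "not a VLIW prefix"
    mode  = (insn16 >> 7) & 0x1    # 1: full 16-bit blocks, 0: compact 8-bit
    rplen = (insn16 >> 8) & 0x3    # number of RegCam entries
    pplen = (insn16 >> 10) & 0x3   # number of PredCam entries
    il    = (insn16 >> 12) & 0x7   # nnn != 111
    vlset = (insn16 >> 15) & 0x1   # 1: a VL Block follows the prefix
    if mode == 0:                  # compact 8-bit format: counts doubled
        rplen *= 2
        pplen *= 2
    return dict(vlset=vlset, total_bits=80 + 16 * il,
                pplen=pplen, rplen=rplen, mode=mode)

# vlset=1, IL=2 (112-bit group), 1 PredCam entry, 3 RegCam entries, full format
info = decode_vliw_prefix((1 << 15) | (2 << 12) | (1 << 10) | (3 << 8)
                          | (1 << 7) | 0b1111111)
print(info["total_bits"], info["pplen"], info["rplen"])  # 112 1 3
```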

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four 32-bit
opcodes: the CSRRW itself, plus an LI / LUI sequence to get the 32 bit
data into the RS register of the CSRRW. To get 64-bit data into the
register in order to put it into the CSR(s), LOAD operations from memory
are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAXVL/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps, to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.

The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.

In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the start (and very short length) of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump and a normal function call may only be taken by letting
the VLIW group end, returning to "normal" standard RV mode, using RVC, 32 bit
or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option" it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation
is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
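
As an illustration of the two rules (a sketch; the packing layout and the
function name are assumptions for the example), three 16-bit elements in
a 64-bit register:

```python
# Sketch: pack three 16-bit elements into a 64-bit register, filling the
# unused top 16 bits per the rules above: integer registers sign-extend
# the used element closest to the MSB; FP registers are filled with all
# ones (matching NaN-boxing of smaller FP values in larger registers).
def pack_3x16(elements, is_fp):
    assert len(elements) == 3
    val = 0
    for idx, e in enumerate(elements):
        val |= (e & 0xffff) << (16 * idx)
    if is_fp:
        fill = 0xffff                                 # FP: all ones
    else:
        fill = 0xffff if elements[2] & 0x8000 else 0  # int: sign-extend
    return val | (fill << 48)

print(hex(pack_3x16([1, 2, 3], is_fp=False)))       # 0x300020001
print(hex(pack_3x16([1, 2, 0x8000], is_fp=False)))  # 0xffff800000020001
print(hex(pack_3x16([1, 2, 3], is_fp=True)))        # 0xffff000300020001
```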

---

Regarding an "info" register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

Answer: they're not vectorised, so not a problem.

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of the info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove the RegCam and PredCam CSRs, just use SVprefix and
the VLIW format.