# Simple-V (Parallelism Extension Proposal) Specification
* Status: DRAFTv0.2
* Last edited: 17 Oct 2018
* Ancillary resource: [[opcodes]]
With thanks to:
* Allen Baum
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* The RISC-V Founders, without whom this all would not be possible.
[[!toc ]]
# Summary and Background: Rationale
Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects including code-size reduction, expansion of
HINT space and more. The reason for
creating it is to provide a manageable way to turn a pre-existing design
into a parallel one, in a step-by-step incremental fashion, allowing
the implementor to focus on adding hardware where it is needed and necessary.
The primary target is for mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size and reduce context-switch latency.
Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated where *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. When the last "element"
has been reached, only then is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).
The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in software,
instead. Whilst much slower, it is nonetheless "compliant" with the SV
specification, and may be suited for implementation in RV32E and also
in situations where the implementor wishes to focus on certain aspects
of SV without investing unnecessary time and resources in silicon,
whilst still conforming strictly to the API. A good area to punt to
software would be, for example, the polymorphic element-width capability.
Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.
To emphasise that clearly: Simple-V (SV) is *not*:
* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension
SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the implementor**.
# Basic Operation
The principle of SV is as follows:
* CSRs indicating which registers are "tagged" as "vectorised"
(potentially parallel, depending on the microarchitecture)
must be set up
* A "Vector Length" CSR is set, indicating the span of any future
"parallel" operations.
* A **scalar** operation, just after the decode phase and before the
execution phase, checks the CSR register tables to see if any of
its registers have been marked as "vectorised"
* If so, a hardware "macro-unrolling loop" is activated, of length
VL, that effectively issues **multiple** identical instructions
using contiguous sequentially-incrementing registers.
**Whether they be executed sequentially or in parallel or a
mixture of both is entirely up to the implementor**.
In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.
There are **no** new opcodes.
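The hardware "macro-unrolling loop" above can be modelled in a few lines of software. The following is a minimal illustrative sketch, not normative pseudo-code: the names `isvec`, `sv_issue` and `scalar_add` are invented for the example, standing in for the CSR tables and the scalar ALU described later in this document.

```python
VL = 4                      # Vector Length CSR
isvec = {3: True, 7: True}  # registers x3 and x7 tagged as "vectorised"
regs = list(range(32))      # toy integer register file, regs[n] == n initially

def scalar_add(rd, rs1, rs2):
    # any standard scalar operation; completely unmodified
    regs[rd] = regs[rs1] + regs[rs2]

def sv_issue(op, rd, rs1, rs2):
    """Issue a scalar op; if any register is tagged, hold the PC and
    unroll the op VL times with contiguously incrementing register numbers."""
    if not (isvec.get(rd) or isvec.get(rs1) or isvec.get(rs2)):
        op(rd, rs1, rs2)          # plain scalar behaviour, PC moves on
        return
    for i in range(VL):           # the "macro-unrolling" hardware loop
        op(rd + (i if isvec.get(rd) else 0),
           rs1 + (i if isvec.get(rs1) else 0),
           rs2 + (i if isvec.get(rs2) else 0))

sv_issue(scalar_add, 3, 7, 20)    # one "instruction", VL element adds
# regs[3:7] is now [27, 28, 29, 30] (x20 is scalar, so it is not incremented)
```

Note how the untagged third operand (x20) behaves as a scalar repeated across all VL elements, exactly as a vector-scalar operation would.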
# CSRs
For U-Mode there are two CSR key-value stores needed to create lookup
tables which are used at the register decode phase.
* A register CSR key-value table (typically 8 32-bit CSRs, each holding
  two 16-bit entries)
* A predication CSR key-value table (again, 8 32-bit CSRs, each holding
  two 16-bit entries)
* Small M-Mode and S-Mode register and predication CSR key-value tables
  (2 32-bit CSRs of two 16-bit entries each).
* An optional "reshaping" CSR key-value table which remaps from a 1D
linear shape to 2D or 3D, including full transposition.
There are also four additional CSRs for User-Mode:
* CFG subsets the CSR tables
* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* STATE (useful for saving and restoring during context switch,
and for providing fast transitions)
There are also three additional CSRs for Supervisor-Mode:
* SMVL
* SVL
* SSTATE
And likewise for M-Mode:
* MMVL
* MVL
* MSTATE
Both Supervisor and M-Mode have their own (small) CSR register and
predication tables of only 4 entries each.
## CFG
This CSR may be used to switch between subsets of the CSR Register and
Predication Tables: it is kept to 5 bits so that a single CSRRWI instruction
can be used. A setting of all ones is reserved to indicate that SimpleV
is disabled.
| (4..3) | (2...0) |
| ------ | ------- |
| size | bank |
Bank is 3 bits in size, and indicates the starting index of the CSR
Register and Predication Table entries that are "enabled". Given that
each 32-bit CSR table row contains two 16-bit CAM entries, there are
only 8 CSRs to cover in each table, so 3 bits is sufficient.
Size is 2 bits. With the exception of when bank == 7 and size == 3,
the number of elements enabled is found by left-shifting 2 by size:
| size | elements |
| ------ | -------- |
| 0 | 2 |
| 1 | 4 |
| 2 | 8 |
| 3 | 16 |
Given that there are 2 16-bit CAM entries per CSR table row, this
may also be viewed as the number of CSR rows to enable, by raising 2 to
the power of size.
Examples:
* When bank = 0 and size = 3, SVREGCFG0 through to SVREGCFG7 are
  enabled, and SVPREDCFG0 through to SVPREDCFG7 are enabled.
* When bank = 1 and size = 3, SVREGCFG1 through to SVREGCFG7 are
  enabled, and SVPREDCFG1 through to SVPREDCFG7 are enabled.
* When bank = 3 and size = 0, SVREGCFG3 and SVPREDCFG3 are enabled.
* When bank = 3 and size = 1, SVREGCFG3-4 and SVPREDCFG3-4 are enabled.
* When bank = 7 and size = 1, SVREGCFG7 and SVPREDCFG7 are enabled
(because there are only 8 32-bit CSRs there does not exist a
SVREGCFG8 or SVPREDCFG8 to enable).
* When bank = 7 and size = 3, SimpleV is entirely disabled.
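The decode rules and examples above can be expressed as a short sketch. This is illustrative only (the helper name `cfg_decode` is invented): given the 5-bit CFG value, it computes which CSR table rows are enabled, remembering that bank == 7 with size == 3 means SimpleV is disabled.

```python
def cfg_decode(cfg):
    """Return the list of enabled CSR table rows (0..7), or None if
    SimpleV is disabled. cfg is the 5-bit CFG CSR value."""
    bank = cfg & 0b111          # bits 2..0: starting row index
    size = (cfg >> 3) & 0b11    # bits 4..3
    if bank == 7 and size == 3:
        return None             # reserved: SimpleV entirely disabled
    nrows = 1 << size           # 2**size rows (two 16-bit entries per row)
    return list(range(bank, min(bank + nrows, 8)))  # only rows 0..7 exist

print(cfg_decode((3 << 3) | 0))  # bank=0 size=3: rows 0..7 enabled
print(cfg_decode((0 << 3) | 3))  # bank=3 size=0: row 3 only
print(cfg_decode((1 << 3) | 7))  # bank=7 size=1: row 7 only (no row 8 exists)
print(cfg_decode((3 << 3) | 7))  # bank=7 size=3: None (disabled)
```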
In this way it is possible to enable and disable SimpleV with a
single instruction, and, furthermore, on context-switching the quantity
of CSRs to be saved and restored is greatly reduced.
## MAXVECTORLENGTH (MVL)
MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).
The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning out
over several registers. This keeps the implementation a little simpler.
The other important factor to note is that the actual MVL is **offset
by one**, so that it can fit into only 6 bits (for RV64) and still cover
a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
means that MVL==1. When setting the MVL CSR to 3, this actually means
that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.
## Vector Length (VL)
VSETVL is slightly different from RVV. Like RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):

    VL = rd = MIN(vlen, MVL)
    where 1 <= MVL <= XLEN
However just like MVL it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section
The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN bits.
The second change is that when VSETVL is requested to be stored
into x0, it is *ignored* silently (VSETVL x0, x5)
The third and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).
This has implication for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.
The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL does *not* get set to the register file
length when VSETVL is called, then a software-loop would be needed.
To avoid this need, VL *must* be set to exactly what is requested
(limits notwithstanding).
Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware-parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore-IV) however this must be
*entirely* transparent to the ISA.
The fourth change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (SVSTATE). This CSR behaves
exactly as standard CSRs, and contains more than just VL.
One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (so the highest VL settable this way is 32).
## STATE
This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of)
MVL, VL, CFG, the destination element offset of the current parallel
instruction being executed, and, for twin-predication, the source
element offset as well. Interestingly it may hypothetically
also be used to make the immediately-following instruction skip a
certain number of elements; however, the recommended method to do
this is predication, or the offset mode of the REMAP CSRs.
Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced and the
hardware-loop that was being executed at the time of the trap, from
user-mode (or Supervisor-mode), may be returned to and continued from
where it left off. This works because User-Mode STATE is neither
used nor altered in M-Mode or S-Mode (which is entirely why M-Mode
and S-Mode have their own STATE CSRs).
The format of the STATE CSR is as follows:
| (28..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| size | bank | destoffs | srcoffs | vl | maxvl |
When setting this CSR, the following characteristics will be enforced:
* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
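As an illustrative sketch of the field layout above (the helper names `pack_state` and `unpack_state` are invented, and the minus-one encoding of VL and MVL is the one described in the CSR pseudocode section), packing and unpacking the STATE fields could look like:

```python
def pack_state(maxvl, vl, srcoffs, destoffs, cfg):
    """Pack STATE fields into the CSR value (VL/MVL stored minus one)."""
    return ((maxvl - 1)        # bits 5..0:   MVL - 1
            | (vl - 1) << 6    # bits 11..6:  VL - 1
            | srcoffs << 12    # bits 17..12: source element offset
            | destoffs << 18   # bits 23..18: destination element offset
            | cfg << 24)       # bits 28..24: CFG (bank and size)

def unpack_state(v):
    """Recover the STATE fields, re-adding one to VL and MVL."""
    return dict(maxvl=(v & 0x3f) + 1, vl=((v >> 6) & 0x3f) + 1,
                srcoffs=(v >> 12) & 0x3f, destoffs=(v >> 18) & 0x3f,
                cfg=(v >> 24) & 0x1f)

s = pack_state(maxvl=8, vl=5, srcoffs=2, destoffs=3, cfg=0)
print(unpack_state(s))
# {'maxvl': 8, 'vl': 5, 'srcoffs': 2, 'destoffs': 3, 'cfg': 0}
```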
## MVL, VL and CSR Pseudocode
The pseudo-code for get and set of VL and MVL are as follows:
    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes, returning the new value, NOT the old CSR

    get_vl_csr(rd):
        regs[rd] = VL
Note that where setting MVL behaves as a normal CSR, unlike standard CSR
behaviour, setting VL will return the **new** value of VL **not** the old
one.
For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:
    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)
However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of representing that value.
    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_vl_csr(value, rd)
In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:
    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3              # sets MVL == **4** (not 3)
    j zerotest                 # in case loop counter a0 already 0
    loop:
       CSRRW VL, t0, a0        # vl = t0 = min(mvl, a0)
       ld a3, a1               # load 4 registers a3-6 from x
       slli t1, t0, 3          # t1 = vl * 8 (in bytes)
       ld a7, a2               # load 4 registers a7-10 from y
       add a1, a1, t1          # increment pointer to x by vl*8
       fmadd a7, a3, fa0, a7   # v1 += v0 * fa0 (y = a * x + y)
       sub a0, a0, t0          # n -= vl (t0)
       st a7, a2               # store 4 registers a7-10 to y
       add a2, a2, t1          # increment pointer to y by vl*8
    zerotest:
       bnez a0, loop           # repeat if n != 0
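Setting the vector operations themselves aside, the strip-mining arithmetic of the loop above can be sketched in a few lines (illustrative Python; `strip_mine` is an invented name): each trip sets vl = min(MVL, n) and subtracts vl from n, so with MVL == 4 a loop over n = 10 elements takes trips of 4, 4 and 2.

```python
MVL = 4                    # Maximum Vector Length (as set by CSRRWI MVL, 3)

def strip_mine(n):
    """Return the vl used on each trip round the loop, for n elements."""
    trips = []
    while n != 0:
        vl = min(MVL, n)   # CSRRW VL, t0, a0: vl = t0 = min(mvl, a0)
        trips.append(vl)   # ...vectorised loads / fmadd / stores of vl elements
        n -= vl            # sub a0, a0, t0
    return trips

print(strip_mine(10))      # [4, 4, 2]
```

This is exactly why VL **must** be set to the requested value (limits notwithstanding): the `sub` relies on knowing precisely how many elements the trip processed.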
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        MVL = set_mvl_csr(value[5:0]+1)
        VL = set_vl_csr(value[11:6]+1)
        srcoffs = value[17:12]
        destoffs = value[23:18]
        CFG = value[28:24]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18 | (CFG)<<24
        return regs[rd]
In both cases, whilst CSR read of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.
## Register CSR key-value (CAM) table
The purpose of the Register CSR table is three-fold:

* To mark integer and floating-point registers as requiring "redirection"
  if they are ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** operands with 5 bits (3 for Compressed) may
  access up to **64** registers.
* To indicate whether, after redirection through the lookup table, the
register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
normally give the register.
| RgCSR | | 15 | (14..8) | 7 | (6..5) | (4..0) |
| ----- | | - | - | - | ------ | ------- |
| 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
| 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
| .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
| 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
i/f is set to "1" to indicate that the redirection/tag entry is to be applied
to integer registers; 0 indicates that it is relevant to floating-point
registers. vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:
| vew | bitwidth |
| --- | ---------- |
| 00 | default |
| 01 | default/2 |
| 10 | default\*2 |
| 11 | 8 |
As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
        tb[idx].elwidth  = CSRvec[i].elwidth
        tb[idx].regidx   = CSRvec[i].regidx   // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed   = CSRvec[i].packed   // SIMD or not
The actual size of the CSR Register table depends on the platform
and on whether other Extensions are present (RV64G, RV32E, etc.).
For details see "Subsets" section.
16-bit CSR Register CAM entries are mapped directly into 32-bit
on any RV32-based system, however RV64 (XLEN=64) and RV128 (XLEN=128)
are slightly different: the 16-bit entries appear (and can be set)
multiple times, in an overlapping fashion. Here is the table for RV64:
| CSR#  | 63..48  | 47..32  | 31..16  | 15..0   |
| ----- | ------- | ------- | ------- | ------- |
| 0x4c0 | RgCSR3  | RgCSR2  | RgCSR1  | RgCSR0  |
| 0x4c1 | RgCSR5  | RgCSR4  | RgCSR3  | RgCSR2  |
| 0x4c2 | ...     | ...     | ...     | ...     |
| 0x4c6 | RgCSR15 | RgCSR14 | RgCSR13 | RgCSR12 |
| 0x4c7 | n/a     | n/a     | RgCSR15 | RgCSR14 |
The rules for writing to these CSRs are that any entries above the ones
being set will be automatically wiped (to zero), so to fill several entries
they must be written in a sequentially increasing manner. This functionality
was in an early draft of RVV and it means that, firstly, compilers do not have
to spend time zero-ing out CSRs unnecessarily, and secondly, that on
context-switching (and function calls) the number of CSRs that may need
saving is implicitly known.
The reason for the overlapping entries is that in the worst-case on an
RV64 system, only 4 64-bit CSR reads/writes are required for a full
context-switch (and an RV128 system, only 2 128-bit CSR reads/writes).
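Assuming the stride-2 overlapping pattern shown in the table (a sketch of the layout, not a normative definition; `rgcsr_entries` is an invented name), the mapping from a 64-bit CSR row to its four 16-bit RgCSR entries is:

```python
def rgcsr_entries(row, nentries=16):
    """16-bit RgCSR indices held by 64-bit CSR row `row`, low half-word
    first; None where the row runs past the last real entry."""
    return [i if i < nentries else None for i in range(2 * row, 2 * row + 4)]

print(rgcsr_entries(0))   # [0, 1, 2, 3]
print(rgcsr_entries(1))   # [2, 3, 4, 5]      -- overlaps row 0
print(rgcsr_entries(6))   # [12, 13, 14, 15]
print(rgcsr_entries(7))   # [14, 15, None, None]
```

A full context save therefore needs only the four non-overlapping rows 0x4c0, 0x4c2, 0x4c4 and 0x4c6, matching the worst-case figure given above.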
--
TODO: move elsewhere
# TODO: use elsewhere (retire for now)
    vew = CSRbitwidth[rs1]
    if (vew == 0):
        bytesperreg = (XLEN/8) # or FLEN as appropriate
    elif (vew == 1):
        bytesperreg = (XLEN/4) # or FLEN/2 as appropriate
    else:
        bytesperreg = bytestable[vew] # 8 or 16
    simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
    vlen = CSRvectorlen[rs1] * simdmult
    CSRvlength = MIN(MIN(vlen, MAXVECTORLENGTH), rs2)
The reason for multiplying the vector length by the number of SIMD elements
(in each individual register) is so that each SIMD element may optionally be
predicated.
An example of how to subdivide the register file when bitwidth != default
is given in the section "Bitwidth Virtual Register Reordering".
## Predication CSR
TODO: update CSR tables, now 7-bit for regidx
The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.
* regidx is the actual register that in combination with the
i/f flag, if that integer or floating-point register is referred to,
results in the lookup table being referenced to find the predication
mask to use on the operation in which that (regidx) register has
been used
* predidx (in combination with the bank bit in the future) is the
*actual* register to be used for the predication mask. Note:
in effect predidx is actually a 6-bit register address, as the bank
bit is the MSB (and is nominally set to zero for now).
* inv indicates that the predication mask bits are to be inverted
prior to use *without* actually modifying the contents of the
register itself.
* zeroing is either 1 or 0, and if set to 1, the operation must
place zeros in any element position where the predication mask is
set to zero. If zeroing is set to 0, unpredicated elements *must*
be left alone. Some microarchitectures may choose to interpret
this as skipping the operation entirely. Others which wish to
stick more closely to a SIMD architecture may choose instead to
interpret unpredicated elements as an internal "copy element"
operation (which would be necessary in SIMD microarchitectures
that perform register-renaming)
* "packed" indicates if the register is to be interpreted as SIMD
i.e. containing multiple contiguous elements of size equal to "bitwidth".
(Note: in earlier drafts this was in the Register CSR table.
However after extending to 7 bits there was not enough space.
To use "unpredicated" packed SIMD, set the predicate to x0 and
set "invert". This has the effect of setting a predicate of all 1s)
| PrCSR | 13 | 12 | 11 | 10 | (9..5) | (4..0) |
| ----- | - | - | - | - | ------- | ------- |
| 0 | bank0 | zero0 | inv0 | i/f | regidx | predkey |
| 1 | bank1 | zero1 | inv1 | i/f | regidx | predkey |
| .. | bank.. | zero.. | inv.. | i/f | regidx | predkey |
| 15 | bank15 | zero15 | inv15 | i/f | regidx | predkey |
The Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):
    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero    = CSRpred[i].zero
        tb[idx].inv     = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]);
Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers are marked as vectors (active=1, vector=1).
Note that the predication register to use (if one is enabled) is taken
from the *first* src register. The target (destination) predication
register to use (if one is enabled) is taken from the *second* src register.
If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.
In instances where no vectorisation is detected on either src registers
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                              s2 ? vreg[rs2][i] : sreg[rs2]);
There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:
[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]
A simplified version of the pseudocode for this operation is as follows:
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            ireg[rd+j] <= ireg[rs+i];
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
An earlier draft of SV modified the behaviour of LOAD/STORE. This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.
So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible. Using the template outlined in
the section "Vectorised dual-op instructions", the pseudo-code covering
scalar-scalar, scalar-vector, vector-scalar and vector-vector applies,
where SCALAR\_OPERATION is as follows, exactly as for a standard
scalar RV LOAD operation:
    srcbase = ireg[rs+i];
    return mem[srcbase + imm];
Whilst LOAD and STORE remain as-is when compared to their scalar
counterparts, the incrementing on the source register (for LOAD)
means that pointers-to-structures can be easily implemented, and
if contiguous offsets are required, those pointers (the contents
of the contiguous source registers) may simply be set up to point
to contiguous locations.
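A toy model of that LOAD behaviour illustrates the point: because the hardware loop increments the *source register number*, each contiguous source register supplies its own independent pointer. All names here (`sv_load`, the toy `mem` and `ireg`) are invented for the sketch and are not normative.

```python
mem = {100: 11, 200: 22, 300: 33, 400: 44}   # toy memory
ireg = [0] * 32                              # toy integer register file
VL = 4
ireg[5:9] = [100, 200, 300, 400]  # x5..x8 each hold an independent pointer

def sv_load(rd, rs, imm):
    """Vectorised LOAD: rs tagged as a vector, so the register *number*
    increments per element; each element is a standard scalar LOAD."""
    for i in range(VL):
        srcbase = ireg[rs + i]             # per-element pointer
        ireg[rd + i] = mem[srcbase + imm]  # unmodified scalar LOAD semantics

sv_load(10, 5, 0)
print(ireg[10:14])   # [11, 22, 33, 44]
```

Pointing x5..x8 at contiguous addresses instead would recover ordinary unit-stride behaviour, which is the "contiguous offsets" case mentioned above.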
## Compressed Stack LOAD / STORE Instructions
C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
However, to do so, the behaviour of C.LWSP/C.SWSP needs to be slightly
different: where x2 is marked as vectorised, instead of incrementing
the register on each loop (x2, x3, x4...), instead it is the *immediate*
that must be incremented. Pseudo-code follows:
    function lwsp(rd, rs):
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = x2 # effectively no redirection on x2.
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            ireg[rd+j] <= mem[ireg[rs] + ((offset+i)*4)]; # immediate increments
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, default/2, default\*2 and 8-bit
(whilst this seems limiting, the justification is covered in a later
sub-section).
The effect of setting an element bitwidth is to re-cast each entry
in the register table, and all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture each register
effectively now looks like this:
    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];
where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.
However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:
    typedef union {
        uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];
where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
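The overspill behaviour can be modelled exactly by treating the register file as one flat byte array, with an (element size, register, element index) triple reduced to a plain byte offset. This is an illustrative sketch with invented names (`el_set`, `el_get`); it shows a 12-element 8-bit vector starting at x16 spilling into x17:

```python
import struct

XLEN_BYTES = 8                         # RV64
regfile = bytearray(128 * XLEN_BYTES)  # SV 7-bit regfile: 128 registers

FMT = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}  # element size (bytes) -> format

def el_set(reg, elsize, idx, val):
    # offset may run past the end of `reg` into reg+1, reg+2, ...
    off = reg * XLEN_BYTES + idx * elsize
    struct.pack_into(FMT[elsize], regfile, off, val)

def el_get(reg, elsize, idx):
    off = reg * XLEN_BYTES + idx * elsize
    return struct.unpack_from(FMT[elsize], regfile, off)[0]

for i in range(12):             # VL=12, elwidth=8, starting at x16:
    el_set(16, 1, i, i + 1)     # elements 8..11 land inside register x17

print(el_get(16, 1, 11) == el_get(17, 1, 3))  # prints True: same byte
```

Element 11 of x16 and element 3 of x17 are the same storage byte, which is precisely the transparent "overspill" the union describes. A real implementation must additionally trap any access past the end of the 128-register file.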
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt to access beyond the "real"
register bytes is made.
Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
        ...
        ...
So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.
However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:
* Analyse the bitwidth of all source operands and work out the
maximum. Record this as "maxsrcbitwidth"
* If any given source operand requires sign-extension or zero-extension
(ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
sign-extension / zero-extension or whatever is specified in the standard
RV specification, **change** that to sign-extending from the respective
individual source operand's bitwidth from the CSR table out to
"maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
destination: memory for sb/sw etc., or an offset section of the register
file for an arithmetic operation.
In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8
    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            src1 = get_polymorphed_reg(rs1, maxsrcwid, i)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, i)
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, i, result)
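As an illustrative (non-normative) check of the register-file reinterpretation described above, the following Python sketch models the integer register file as a flat little-endian bytearray; the function names follow the pseudo-code, but the flat-memory model and little-endian layout are assumptions for illustration only.

```python
# Model: 32 integer registers, XLEN=64, as one flat little-endian byte array.
# Elements that spill past a register boundary simply continue into the
# bytes of the next contiguous register, exactly as the spec describes.
XLEN = 64
regfile = bytearray(32 * (XLEN // 8))

def get_polymorphed_reg(reg, bitwidth, offset):
    nbytes = bitwidth // 8
    base = reg * (XLEN // 8) + offset * nbytes
    return int.from_bytes(regfile[base:base + nbytes], "little")

def set_polymorphed_reg(reg, bitwidth, offset, val):
    nbytes = bitwidth // 8
    base = reg * (XLEN // 8) + offset * nbytes
    regfile[base:base + nbytes] = val.to_bytes(nbytes, "little")

# pack four 16-bit elements starting at register x5...
for i, v in enumerate([0x1111, 0x2222, 0x3333, 0x4444]):
    set_polymorphed_reg(5, 16, i, v)
# ...and reinterpret the very same bytes as two 32-bit elements
assert get_polymorphed_reg(5, 32, 0) == 0x22221111
assert get_polymorphed_reg(5, 32, 1) == 0x44443333
```

The point of the model is that no per-opcode permutation of widths is needed: reads and writes are just byte-offset arithmetic over the same storage.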
## Polymorphic elwidths on LOAD/STORE

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.
This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows (i is the loop index from 0 to VL-1):

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits
Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:
    bitwidth = bw(elwidth);             // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth         // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock;             // modulo
    return &mem[srcbase + imm + offs];  // re-cast to uint8_t*, uint16_t* etc.
Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.
The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
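The chunking rule above can be sketched in Python as follows (the function name is illustrative, not from the spec): the indexed source address register only advances once a full operation-width's worth of elements has been consumed.

```python
# For a LOAD of opwidth bits (32 for LW) with bitwidth-wide elements,
# compute which source address register element i uses, and the element
# offset within that register's addressed block.
def load_element_addressing(i, opwidth, bitwidth):
    elsperblock = opwidth // bitwidth  # 1 if bw=32, 2 if bw=16, 4 if bw=8
    regidx_offset = i // elsperblock   # which indexed source address register
    offs = i % elsperblock             # element offset within the block
    return regidx_offset, offs

# LW (32-bit) with 16-bit elements: two elements per source register
assert load_element_addressing(0, 32, 16) == (0, 0)
assert load_element_addressing(1, 32, 16) == (0, 1)
assert load_element_addressing(2, 32, 16) == (1, 0)  # next address register
```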
When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, storing uses the exact same analogous algorithm as
above; in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged).
One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.
The issue is "fixed" by clamping elsperblock to a minimum of 1 (note
that "max", not "min", performs this clamp):

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.
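A minimal sketch of the clamp, assuming nothing beyond the integer-divide behaviour described above:

```python
# Elements-per-block for a LOAD: integer division would yield zero when
# the element width exceeds the LOAD operation's own width, so the result
# is clamped to a minimum of 1.
def elsperblock(ld_op_bitwidth, element_bitwidth):
    return max(1, ld_op_bitwidth // element_bitwidth)

# LW (32-bit) with 8-bit elements: four elements per source address
assert elsperblock(32, 8) == 4
# LB (8-bit) with 16-bit elements: divide gives 0, clamped to 1
assert elsperblock(8, 16) == 1
```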
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.
Thus the full pseudocode for all LD operations may be written out
as follows:
    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if (!unsigned): # LB/LH/LW/LD: sign-extend
                val = sign_extend(val, opwidth)
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
            else break # destination is scalar: stop after first element

### add

Standard Scalar RV64 (xlen):
* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:
* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.
### addw
The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.
Standard Scalar RV64 (xlen):
* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
Polymorphic variant:
* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension occurs. Only where
the bitwidths of rs1 and rs2 differ will the lesser-width operand be
sign-extended.
Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".
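The difference between the two extension rules can be sketched in Python (the helper names are illustrative, not from the spec), assuming rs1 is tagged as 8-bit and rs2 as 16-bit:

```python
# "add" zero-extends its sources to max(rs1, rs2) bits;
# "addw" sign-extends them. 0xFF is 255 unsigned but -1 signed 8-bit.
def zero_extend(val, bits):
    return val & ((1 << bits) - 1)

def sign_extend(val, frombits, tobits):
    val &= (1 << frombits) - 1
    if val & (1 << (frombits - 1)):  # negative in frombits
        val -= 1 << frombits
    return val & ((1 << tobits) - 1)

rs1_bits, rs2_bits = 8, 16
opwidth = max(rs1_bits, rs2_bits)
a, b = 0xFF, 0x0001

# polymorphic add: 255 + 1 = 256
add_result = (zero_extend(a, opwidth) + b) & 0xFFFF
assert add_result == 0x0100

# polymorphic addw: -1 + 1 = 0
addw_result = (sign_extend(a, rs1_bits, opwidth) + b) & 0xFFFF
assert addw_result == 0x0000
```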
### addiw
Standard Scalar RV64I:
* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.
Polymorphic variant:
* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
# Predication Element Zeroing
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.
SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
## Single-predication (based on destination register)
Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip elements with a clear predicate bit
            while (!(predval & 1<<i)) i++;
            if (i == VL) break
        result = op(src1[i], src2[i]) # actual operation here
        if (predval & 1<<i):
            dest[i] = result
        elif zeroing:
            dest[i] = 0 # masked-out element is set to zero
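A minimal Python sketch of single-predication with and without zeroing (the function name is illustrative; addition stands in for any arithmetic operation):

```python
# With zeroing set, masked-out destination elements are written as zero;
# without it, they are left untouched (modelled here as None).
def predicated_op(src1, src2, predval, zeroing, VL):
    dest = [None] * VL
    for i in range(VL):
        if predval & (1 << i):
            dest[i] = src1[i] + src2[i]
        elif zeroing:
            dest[i] = 0
    return dest

s1, s2 = [10, 20, 30, 40], [1, 2, 3, 4]
# predicate 0b0101: elements 0 and 2 are active
assert predicated_op(s1, s2, 0b0101, False, 4) == [11, None, 33, None]
assert predicated_op(s1, s2, 0b0101, True, 4) == [11, 0, 33, 0]
```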
---

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), recommend:
* For the unused elements in an integer register, the used element
closest to the MSB is sign-extended on write and the unused elements
are ignored on read.
* The unused elements in a floating-point register are treated as-if
they are set to all ones on write and are ignored on read, matching the
existing standard for storing smaller FP values in larger registers.
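The floating-point recommendation matches RISC-V NaN-boxing, where a narrower FP value stored in a wider register has all upper bits set to one. A sketch (the function name is illustrative):

```python
# NaN-box a 32-bit FP bit-pattern into a 64-bit FP register: the unused
# upper 32 bits are treated as all ones on write, per the F/D extensions.
def nanbox(fp32_bits, flen=64):
    upper_ones = ((1 << flen) - 1) ^ ((1 << 32) - 1)  # bits 63..32 set
    return upper_ones | (fp32_bits & 0xFFFFFFFF)

assert nanbox(0x3F800000) == 0xFFFFFFFF3F800000  # 1.0f, NaN-boxed
```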
---
Regarding an "info" register:
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.
----
> And what about instructions like JALR?
Answer: they're not vectorised, so not a problem.
----
* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
*32* if elwidth == default
---
TODO: update elwidth to be default / 8 / 16 / 32
---
TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.