# Simple-V (Parallelism Extension Proposal) Specification
* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 jun 2019
* Ancillary resource: [[opcodes]]
* Ancillary resource: [[sv_prefix_proposal]]
* Ancillary resource: [[abridged_spec]]
* Ancillary resource: [[vblock_format]]
With thanks to:
* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.
[[!toc ]]
# Summary and Background: Rationale
Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects including code-size reduction, expansion of
HINT space and more. The reason for
creating it is to provide a manageable way to turn a pre-existing design
into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing
the implementor to focus on adding hardware where it is needed and necessary.
The primary target is for mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and reduce context-switch latency.
Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated where *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. When the last "element"
has been reached, only then is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).
The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in software,
instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited for implementation in RV32E and also in situations
where the implementor wishes to focus on certain aspects of SV, without
investing unnecessary time and resources in silicon, whilst still conforming
strictly with the API. A good area to punt to software would be the
polymorphic element width capability, for example.
Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.
To emphasise that clearly: Simple-V (SV) is *not*:
* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension
SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the implementor**.
# Basic Operation
The principle of SV is as follows:
* Standard RV instructions are "prefixed" (extended) through a 48/64
bit format (single instruction option) or a variable
length VLIW-like prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as
"vectorised". Predicates can also be added, and element widths
overridden on any src or dest register.
* A "Vector Length" CSR is set, indicating the span of any future
"parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register
that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
is activated, of length VL, that effectively issues **multiple**
identical instructions using contiguous sequentially-incrementing
register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a
mixture of both or punted to software-emulation in a trap handler
is entirely up to the implementor**.
In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.
To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISCV instructions.
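The implicit macro-unrolling loop can be sketched in a few lines of
Python. This is an illustration only, not part of the specification: the
register file, the `tagged` set and `execute_add` are invented names, and
predication, element widths and register redirection are omitted.

```python
# Illustrative sketch of SV's implicit hardware loop (names invented).
regs = [0] * 128          # SV register file (up to 128 entries)
tagged = {3}              # register x3 has been "tagged" as vectorised
VL = 4                    # Vector Length CSR

def execute_add(rd, rs1, rs2):
    if rd in tagged or rs1 in tagged or rs2 in tagged:
        # PC does not advance: VL element operations are issued instead,
        # with contiguously incrementing register numbers.
        for i in range(VL):
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
    else:
        regs[rd] = regs[rs1] + regs[rs2]   # plain scalar behaviour
```

Note that the *same* scalar ADD opcode drives both paths: only the tag
context decides whether the loop activates.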
# CSRs
There is an optional "reshaping" CSR key-value table which remaps from a 1D
linear shape to 2D or 3D, including full transposition (see the REMAP
section, below).

There are five additional CSRs, available in any privilege level:
* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* SUBVL (effectively a kind of SIMD)
* STATE (containing copies of MVL, VL and SUBVL as well as context information)
* PCVBLK (the current operation being executed within a VBLOCK Group)
For User Mode there are the following CSRs:
* uePCVBLK (a copy of the sub-execution Program Counter, that is relative
to the start of the current VBLOCK Group, set on a trap).
* ueSTATE (useful for saving and restoring during context switch,
and for providing fast transitions)
There are also two additional CSRs for Supervisor-Mode:
* sePCVBLK
* seSTATE
And likewise for M-Mode:
* mePCVBLK
* meSTATE
The u/m/s CSRs are treated and handled exactly like their (x)epc
equivalents. On entry to or exit from a privilege level, the contents of its (x)eSTATE are swapped with STATE.
Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
(on both entry and exit), allowing User Mode traps to have their own
Vectorisation Context set up, separated from and unaffected by normal
user applications. If an M Mode trap occurs in the middle of the U Mode trap, STATE is swapped with meSTATE, and restored on exit: the U Mode trap continues unaware that the M Mode trap even occurred.
Likewise, Supervisor Mode may perform context-switches, safe in the
knowledge that its Vectorisation State is unaffected by User Mode.
The access pattern for these groups of CSRs in each mode follows the
same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently
  identical to changing the S-Mode CSRs. Accessing and changing the
  U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
is prohibited.
An interesting side effect of SV STATE being separate and distinct in
S-Mode is that Vectorised saving of an entire register file to the stack
is a single instruction (through accidental provision of LOAD-MULTI
semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may
even be done with a single standalone 64 bit opcode (P64 may set up
SUBVL, VL and MVL from an immediate field, to cover the full regfile).
It can even be predicated, which opens up some very interesting
possibilities.
(x)EPCVBLK CSRs must be treated exactly like their corresponding (x)epc
equivalents. See VBLOCK section for details.
## MAXVECTORLENGTH (MVL)
MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).
The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning
out over several registers. This keeps the hardware implementation a
little simpler.
The other important factor to note is that the actual MVL is internally
stored **offset by one**, so that it can fit into only 6 bits (for RV64)
and still cover a range up to XLEN bits. Attempts to set MVL to zero will
return an exception. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.
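The offset-by-one storage rule can be sketched minimally as follows
(the helper names `store_mvl` and `load_mvl` are invented for
illustration, and are not part of the specification):

```python
# Sketch of offset-by-one MVL storage: the legal range 1..XLEN is held
# as 0..XLEN-1 so that it fits in 6 bits on RV64.
XLEN = 64

def store_mvl(mvl):
    assert 1 <= mvl <= XLEN     # setting MVL=0 raises an exception instead
    return mvl - 1              # 1..64 -> 0..63, fits in 6 bits

def load_mvl(stored):
    return stored + 1           # recover the true MVL
```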
## Vector Length (VL)
VSETVL is slightly different from RVV. As in RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):

    VL = rd = MIN(vlen, MVL)
    where 1 <= MVL <= XLEN

However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN bits.
The second and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).
This has implications for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.
The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL does *not* get set to the register file
length when VSETVL is called, then a software-loop would be needed.
To avoid this need, VL *must* be set to exactly what is requested
(limits notwithstanding).
Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware-parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore-IV) however this must be
*entirely* transparent to the ISA.
The third change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (STATE). This CSR by contrast behaves
exactly as standard CSRs, and contains more than just VL.
One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
Note that when VL is set to 1, vector operations cease (but not subvector
operations: stopping those requires setting SUBVL=1): the hardware loop
is reduced to a single element, i.e. scalar operations. This is in effect
the default, normal operating mode. However it is important to appreciate
that this does **not** result in the Register table or SUBVL being
disabled. Only when the Register table is empty (P48/64 prefix fields
notwithstanding) would SV have no effect.
## SUBVL - Sub Vector Length
This is a "group by quantity" that effectively asks each iteration
of the hardware loop to load SUBVL elements of width elwidth at a
time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
operation issued, SUBVL operations are issued.
Another way to view SUBVL is that each element in the VL length vector is
now SUBVL times elwidth bits in length and now comprises SUBVL discrete
sub operations. An inner SUBVL for-loop within a VL for-loop in effect,
with the sub-element increased every time in the innermost loop. This
is best illustrated in the (simplified) pseudocode example, later.
The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z for example may be loaded, multiplied, then stored, per
VL element iteration, rather than having to set VL to three times larger.
Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
values 0b00 thru 0b11 to represent them).
Setting this CSR to 0 must raise an exception. Setting it to a value
greater than 4 likewise.
The main effect of SUBVL is that predication bits are applied per
**group**, rather than by individual element.
This saves a not insignificant number of instructions when handling 3D
vectors, as otherwise a much longer predicate mask would have to be set
up with regularly-repeated bit patterns.
See SUBVL Pseudocode illustration for details.
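The per-group application of predicate bits can be sketched as follows
(the function and parameter names are invented for illustration; the
sketch simply returns which flat element indices would execute):

```python
# Sketch: with SUBVL, one predicate bit governs a whole group of SUBVL
# sub-elements, so a VL-long mask suffices instead of a VL*SUBVL one.
def predicated_groups(mask, VL, SUBVL):
    """Return flat element indices that execute."""
    active = []
    for i in range(VL):                 # one predicate bit per group
        if (mask >> i) & 1:
            for j in range(SUBVL):      # whole sub-group executes
                active.append(i * SUBVL + j)
    return active
```

For 3D vectors (SUBVL=3), a 3-bit mask of 0b101 thus covers six
sub-elements, where a per-element scheme would need the repeating
pattern 0b111000111.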
## STATE
This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of):
* MVL
* VL
* destoffs - the destination element offset of the current parallel
instruction being executed
* srcoffs - for twin-predication, the source element offset as well.
* SUBVL
* svdestoffs - the subvector destination element offset of the current
parallel instruction being executed
* svsrcoffs - for twin-predication, the subvector source element offset
as well.
Interestingly STATE may hypothetically also be modified to make the
immediately-following instruction to skip a certain number of elements,
by playing with destoffs and srcoffs (and the subvector offsets as well)
Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced and the
hardware-loop that was being executed at the time of the trap, from
user-mode (or Supervisor-mode), may be returned to and continued from
exactly where it left off. The reason why this works is because setting
User-Mode STATE will not change (not be used) in M-Mode or S-Mode (and
is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE
and seSTATE).
The format of the STATE CSR is as follows:
| (29..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| ------- | -------- | -------- | -------- | -------- | ------- | ------- |
| dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
When setting this CSR, the following characteristics will be enforced:
* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL** which sets a SIMD-like quantity, has only 4 values so there
are no changes needed
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
* **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
* **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
NOTE: if the following instruction is not a twin predicated instruction,
and destoffs or dsvoffs has been set to non-zero, subsequent execution
behaviour is undefined. **USE WITH CARE**.
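As a cross-check of the layout table and truncation rules above, a
pack/unpack sketch (the helper names are invented; vl, maxvl and subvl
are stored minus one, the offsets as-is):

```python
# Sketch of STATE CSR field packing per the layout table (names invented).
def pack_state(maxvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((maxvl - 1)           # (5..0)   maxvl, stored offset-by-one
            | (vl - 1) << 6       # (11..6)  vl, stored offset-by-one
            | srcoffs << 12       # (17..12) srcoffs
            | destoffs << 18      # (23..18) destoffs
            | (subvl - 1) << 24   # (25..24) subvl, stored offset-by-one
            | ssvoffs << 26       # (27..26) ssvoffs
            | dsvoffs << 28)      # (29..28) dsvoffs

def unpack_state(v):
    return dict(maxvl=(v & 0x3f) + 1,
                vl=((v >> 6) & 0x3f) + 1,
                srcoffs=(v >> 12) & 0x3f,
                destoffs=(v >> 18) & 0x3f,
                subvl=((v >> 24) & 0x3) + 1,
                ssvoffs=(v >> 26) & 0x3,
                dsvoffs=(v >> 28) & 0x3)
```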
### Hardware rules for when to increment STATE offsets
The offsets inside STATE are like the indices in a loop, except
in hardware. They are also partially (conceptually) similar to a
"sub-execution Program Counter". As such, and to allow proper context
switching and to define correct exception behaviour, the following rules
must be observed:
* When the VL CSR is set, srcoffs and destoffs are reset to zero.
* Each instruction that contains a "tagged" register shall start
execution at the *current* value of srcoffs (and destoffs in the case
of twin predication)
* Masked-out elements (in nonzeroing mode) shall cause the element
  operation to be skipped, incrementing the srcoffs (or destoffs)
* On execution of an element operation, Exceptions shall **NOT** cause
srcoffs or destoffs to increment.
* On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
VL-1 after the last element is executed), both srcoffs and destoffs
shall be reset to zero.
This latter is why srcoffs and destoffs may be stored as values from
0 to XLEN-1 in the STATE CSR, because as loop indices they refer to
elements. srcoffs and destoffs never need to be set to VL: their maximum
operating values are limited to 0 to VL-1.
The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
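The rules above can be sketched as a resumable software loop (all names
are invented for illustration; only destoffs and single predication are
modelled, as a simplification):

```python
# Sketch of the STATE offset rules: a trap leaves destoffs pointing at
# the faulting element, so re-execution resumes exactly where it left off.
def run_vector_op(op, VL, state, mask=~0):
    while state["destoffs"] < VL:
        i = state["destoffs"]
        if not (mask >> i) & 1:
            state["destoffs"] += 1      # masked-out: skip, but increment
            continue
        try:
            op(i)                        # the element operation
        except Exception:
            return False                 # trap: offsets NOT incremented
        state["destoffs"] += 1
    state["destoffs"] = 0                # loop complete: reset to zero
    return True
```

Re-invoking `run_vector_op` after a trap continues from the faulting
element, which is precisely how a page fault serviced mid-loop is
returned to.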
## MVL and VL Pseudocode
The pseudo-code for get and set of VL and MVL use the following internal
functions as follows:
    set_mvl_csr(value, rd):
        regs[rd] = STATE.MVL
        STATE.MVL = MIN(value, XLEN)

    get_mvl_csr(rd):
        regs[rd] = STATE.MVL

    set_vl_csr(value, rd):
        STATE.VL = MIN(value, STATE.MVL)
        regs[rd] = STATE.VL # yes, returning the new value, NOT the old CSR
        return STATE.VL

    get_vl_csr(rd):
        regs[rd] = STATE.VL
        return STATE.VL
Note that where setting MVL behaves as a normal CSR (returns the old
value), unlike standard CSR behaviour, setting VL will return the **new**
value of VL **not** the old one.
For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:
    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)
However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of storing that value.
    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_vl_csr(value, rd)
In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:
    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3              # sets MVL == **4** (not 3)
    j zerotest                 # in case loop counter a0 already 0
    loop:
        CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
        ld a3, a1              # load 4 registers a3-a6 from x
        slli t1, t0, 3         # t1 = vl * 8 (in bytes)
        ld a7, a2              # load 4 registers a7-a10 from y
        add a1, a1, t1         # increment pointer to x by vl*8
        fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
        sub a0, a0, t0         # n -= vl (t0)
        st a7, a2              # store 4 registers a7-a10 to y
        add a2, a2, t1         # increment pointer to y by vl*8
    zerotest:
        bnez a0, loop          # repeat if n != 0
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        STATE.srcoffs = value[17:12]
        STATE.destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
                   (STATE.destoffs)<<18
        return regs[rd]
In both cases, whilst CSR read of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.
## VL, MVL and SUBVL instruction aliases
This table contains pseudo-assembly instruction aliases. Note the
subtraction of 1 from the CSRRWI pseudo variants, to compensate for the
reduced range of the 5 bit immediate.
| alias | CSR |
| - | - |
| SETVL rd, rs | CSRRW VL, rd, rs |
| SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
| GETVL rd | CSRRW VL, rd, x0 |
| SETMVL rd, rs | CSRRW MVL, rd, rs |
| SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
| GETMVL rd | CSRRW MVL, rd, x0 |
Note: CSRRC and other bit-setting operations may still be used; they are
however not particularly useful (very obscure).
## Register key-value (CAM) table
*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VBLOCK instruction format. Note
that this table does *not* get applied to the SVPrefix P48/64 format,
only to scalar opcodes*
The purpose of the Register table is three-fold:
* To mark integer and floating-point registers as requiring "redirection"
  if they are ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** operands with 5 bits (3 for some RVC ops) may
  access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
normally give the register.
Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15)
and the Register table contains entries that only refer to registers
x1-x7 or x16-x31, such operations will *never* activate the VL hardware
loop!

If however the (16 bit) Register table does contain such an entry (x8-x15,
or x2 in the case of LWSP), that src or dest reg may be redirected
anywhere to the *full* 128 register range. Thus, RVC becomes far more
powerful and has many more opportunities to reduce code size than in
Standard RV32/RV64 executables.
16 bit format:
| RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
| ------ | | - | - | - | ------ | ------- |
| 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
| 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
| .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
| 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
8 bit format:
| RegCAM | | 7 | (6..5) | (4..0) |
| ------ | | - | ------ | ------- |
| 0 | | i/f | vew0 | regnum |
i/f is set to "1" to indicate that the redirection/tag entry is to be
applied to integer registers; 0 indicates that it is relevant to
floating-point registers.
The 8 bit format is used for a much more compact expression. "isvec"
is implicit and, similar to [[sv_prefix_proposal]], the target vector
is "regnum<<2", implicitly. Contrast this with the 16-bit format where
the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
optionally set "scalar" mode.
Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
get the actual (7 bit) register number to use, there is not enough space
in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
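The implicit redirection of the 8 bit format amounts to a one-line
computation (the function name is invented for illustration):

```python
# Sketch: the 8-bit format's implicit target, extending a 5-bit regnum
# into the 7-bit (128-entry) register space via regnum<<2.
def target_reg_8bit(regnum):
    assert 0 <= regnum < 32     # only 5 bits available in the 8 bit format
    return regnum << 2          # implicit: spans the full 128-reg file
```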
vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:
| vew | bitwidth |
| --- | ------------------- |
| 00 | default (XLEN/FLEN) |
| 01 | 8 bit |
| 10 | 16 bit |
| 11 | 32 bit |
As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < len; i++) // from VBLOCK Format
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
        tb[idx].elwidth  = CSRvec[i].elwidth
        tb[idx].regidx   = CSRvec[i].regidx   // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
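How the expanded table is then consulted at decode time may be sketched
as follows (all names, including the `RegEntry` structure and `redirect`
helper, are invented for illustration; an inactive entry leaves the
operand untouched):

```python
from dataclasses import dataclass

# Sketch (names invented): applying the expanded Register table when a
# 5-bit operand is decoded, yielding the 7-bit real register.
@dataclass
class RegEntry:
    active: bool = False
    regidx: int = 0        # 7-bit real register (redirection target)
    isvector: bool = False
    elwidth: int = 0       # 0 = default width

int_vec = [RegEntry() for _ in range(32)]

def redirect(opcode_reg):
    """Map a 5-bit operand through the table to a 7-bit register."""
    e = int_vec[opcode_reg]
    if e.active:
        return e.regidx, e.isvector, e.elwidth
    return opcode_reg, False, 0   # untouched: plain scalar behaviour
```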
## Predication Table
*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VBLOCK instruction format.
The table does **not** apply to SVPrefix opcodes*
The Predication Table is a key-value store indicating whether, if a
given destination register (integer or floating-point) is referred to
in an instruction, it is to be predicated. Like the Register table, it
is an indirect lookup that allows the RV opcodes to not need modification.
It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.
* regidx is the register that in combination with the
i/f flag, if that integer or floating-point register is referred to in a
(standard RV) instruction results in the lookup table being referenced
to find the predication mask to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use *without* actually modifying the contents of the
  register from which those bits originated.
* zeroing is either 1 or 0, and if set to 1, the operation must
place zeros in any element position where the predication mask is
set to zero. If zeroing is set to 0, unpredicated elements *must*
be left alone. Some microarchitectures may choose to interpret
this as skipping the operation entirely. Others which wish to
stick more closely to a SIMD architecture may choose instead to
interpret unpredicated elements as an internal "copy element"
operation (which would be necessary in SIMD microarchitectures
that perform register-renaming)
* ffirst is a special mode that stops sequential element processing when
a data-dependent condition occurs, whether a trap or a conditional test.
The handling of each (trap or conditional test) is slightly different:
see Instruction sections for further details
16 bit format:
| PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
| ----- | - | - | - | - | ------- | ------- |
| 0 | predidx | zero0 | inv0 | i/f | regidx | ffirst0 |
| 1 | predidx | zero1 | inv1 | i/f | regidx | ffirst1 |
| 2 | predidx | zero2 | inv2 | i/f | regidx | ffirst2 |
| 3 | predidx | zero3 | inv3 | i/f | regidx | ffirst3 |
Note: predidx=x0, zero=1, inv=1 is a RESERVED encoding. Its use must
generate an illegal instruction trap.
8 bit format:
| PrCSR | 7 | 6 | 5 | (4..0) |
| ----- | - | - | - | ------- |
| 0 | zero0 | inv0 | i/f | regnum |
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.
Thus if we map from 8 to 16 bit format, the table becomes:
| PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
| ----- | - | - | - | - | ------- | ------- |
| 0 | x9 | zero0 | inv0 | i/f | regnum | ff=0 |
| 1 | x10 | zero1 | inv1 | i/f | regnum | ff=0 |
| 2 | x11 | zero2 | inv2 | i/f | regnum | ff=0 |
| 3 | x12 | zero3 | inv3 | i/f | regnum | ff=0 |
The 16 bit Predication CSR Table is a key-value store, so
implementation-wise it will be faster to turn the table around (maintain
topologically equivalent state):
    struct pred {
        bool zero;    // zeroing
        bool inv;     // register at predidx is inverted
        bool ffirst;  // fail-on-first
        bool enabled; // use this to tell if the table-entry is active
        int predidx;  // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < len; i++) // number of Predication entries in VBLOCK
        tb = int_pred_reg if PredicateTable[i].type == 0 else fp_pred_reg;
        idx = PredicateTable[i].regidx
        tb[idx].zero    = PredicateTable[i].zero
        tb[idx].inv     = PredicateTable[i].inv
        tb[idx].ffirst  = PredicateTable[i].ffirst
        tb[idx].predidx = PredicateTable[i].predidx
        tb[idx].enabled = true
So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
ffirst is a special data-dependent predicate mode. There are two
variants: one is for faults: typically for LOAD/STORE operations,
which may encounter end of page faults during a series of operations.
The other variant is comparisons such as FEQ (or the augmented behaviour
of Branch), and any operation that returns a result of zero (whether
integer or floating-point). In the FP case, this includes negative-zero.
Note that the execution order must "appear" to be sequential for ffirst
mode to work correctly. An in-order architecture must execute the element
operations in sequence, whilst an out-of-order architecture must *commit*
the element operations in sequence (giving the appearance of in-order
execution).
Note also, that if ffirst mode is needed without predication, a special
"always-on" Predicate Table Entry may be constructed by setting
inverse-on and using x0 as the predicate register. This
will have the effect of creating a mask of all ones, allowing ffirst
to be set.
### Fail-on-first traps
Except for the first element, ffirst stops sequential element processing
when a trap occurs. The first element is treated normally (as if ffirst
is clear). Should any subsequent element operation require a trap,
instead it and subsequent indexed elements are ignored (or cancelled in
out-of-order designs), and VL is set to the *last* element that did
not take the trap.
Note that predicated-out elements (where the predicate mask bit is zero)
are clearly excluded (i.e. the trap will not occur). However, note that
the loop still had to test the predicate bit: thus on return,
VL is set to include elements that did not take the trap *and* includes
the elements that were predicated (masked) out (not tested up to the
point where the trap occurred).
If SUBVL is being used (SUBVL!=1), the first *sub-group* of elements
will cause a trap as normal (as if ffirst is not set); subsequently,
the trap must not occur in the *sub-group* of elements. SUBVL will **NOT**
be modified.
Given that predication bits apply to SUBVL groups, the same rules apply
to predicated-out (masked-out) sub-groups in calculating the value that VL
is set to.
### Fail-on-first conditional tests
ffirst stops sequential element conditional testing on the first element result
being zero. VL is set to the number of elements that were processed before
the fail-condition was encountered.
Note that just as with traps, if SUBVL!=1, the first of any of the *sub-group*
will cause the processing to end, and, even if there were elements within
the *sub-group* that passed the test, that sub-group is still (entirely)
excluded from the count (from setting VL). i.e. VL is set to the total
number of *sub-groups* that had no fail-condition up until execution was
stopped.
Note again that, just as with traps, predicated-out (masked-out) elements
are included in the count leading up to the fail-condition, even though they
were not tested.
The pseudo-code for Predication makes this clearer and simpler than it is
in words (the loop ends, VL is set to the current element index, "i").
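The sub-group counting rule may be sketched as follows (all names are
invented for illustration; `results` holds the per-element test results,
a zero denoting failure):

```python
# Sketch of fail-on-first conditional testing with sub-groups: VL becomes
# the number of whole sub-groups processed before any element of a
# predicated-in group failed; masked-out groups still count.
def ffirst_test(results, VL, SUBVL, mask=~0):
    for i in range(VL):                       # iterate per sub-group
        if (mask >> i) & 1:                   # predicated-in group
            group = results[i*SUBVL:(i+1)*SUBVL]
            if any(r == 0 for r in group):    # fail-condition: zero
                return i                      # VL = groups before fail
    return VL                                 # no failure: VL unchanged
```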
## REMAP CSR
(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)
There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.
The 32-bit REMAP CSR may reshape up to 3 registers:
| 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
Clearly, reshaping x0 is pointless, so a value of zero (referring to x0)
is used to indicate "disabled".
shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
It is anticipated that these specialist CSRs not be very often used.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
## SHAPE 1D/2D/3D vector-matrix remapping CSRs
(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)
There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
which have the same format. When each SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.
| 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
| ------- | -- | ------- | -- | ------- | -- | ------- |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.
xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:
array[xdim+1][ydim+1][zdim+1]
However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:
| permute | order | array format |
| ------- | ----- | ------------------------ |
| 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:
    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]   # starting indices
    order = [1, 0, 2]  # experiment with different permutations, here
    offs = 0           # experiment with different offsets, here
    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=" ")
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0
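Wrapping the algorithm above in a function (the name `remap` and the
return-a-list form are illustrative only) makes the effect of "order"
easy to see: with a 3x4 array, order=[1,0,2] produces a transposed
traversal:

```python
# Hypothetical wrapper around the algorithm above: returns the remapped
# element indices as a list, so "order" and "offs" effects can be observed.

def remap(xdim, ydim, zdim, order, offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    res = []
    for _ in range(xdim * ydim * zdim):
        res.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return res

# identity order walks the elements linearly...
assert remap(3, 4, 1, [0, 1, 2]) == list(range(12))
# ...whereas order=[1,0,2] yields a 3x4 transposed traversal
assert remap(3, 4, 1, [1, 0, 2]) == [0, 3, 6, 9, 1, 4, 7, 10, 2, 5, 8, 11]
```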
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register elements: where REMAP
indicates to do so, the element index is instead run through the above
algorithm to work out the **actual** element index. Given that there are
three possible SHAPE entries, up to three separate registers in any given
operation may be simultaneously remapped:
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
          ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                ireg[rs2+remap(irs2)];
        ...
# Instructions

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, although
xBitManip becomes a critical dependency for efficient manipulation of
predication masks (as a bit-field). Despite the removal of all operations
except CLIP and VSELECT.X, *all instructions from RVV Base are
topologically re-mapped and retain their complete functionality, intact*.
Note that if RV64G ever had a MV.X added as well as FCLIP, the full
functionality of RVV-Base would be obtained in SV.
Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.
With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:
* CSR instructions, whilst a case could be made for fast-polling of
a CSR into multiple registers, or for being able to copy multiple
contiguously addressed CSRs into contiguous registers, and so on,
are the fundamental core basis of SV. If parallelised, extreme
care would need to be taken. Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
left as scalar.
* LR/SC could hypothetically be parallelised however their purpose is
single (complex) atomic memory operations where the LR must be followed
up by a matching SC. A sequence of parallel LR instructions followed
by a sequence of parallel SC instructions therefore is guaranteed to
not be useful. Not least: the guarantees of a Multi-LR/SC
would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
paralleliseable anyway.
All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.
Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP Register Table.
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
          ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
          if (!int_vec[rd].isvector) break;
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where the register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index:
whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)".
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
            # actual add is here (at last)
            ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
            if (!int_vec[rd].isvector) break;
          if (int_vec[rd ].isvector) { id += 1; }
          if (int_vec[rs1].isvector) { irs1 += 1; }
          if (int_vec[rs2].isvector) { irs2 += 1; }
## Branch Instructions

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).
Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.
If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.
In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
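A runnable sketch of one plausible reading of these rules (function and
parameter names are illustrative, not from the spec): with zeroing, a
masked-out element forces the corresponding destination-predicate bit to
zero; without zeroing, bits are only ever set, never cleared:

```python
# Sketch: how a vectorised branch-compare might update the destination
# predicate with and without zeroing. One plausible reading of the text
# above; names are illustrative.

def cmp_update_dest_pred(dest_pred, src_pred, results, zeroing):
    """results[i] is the boolean outcome of the i-th element compare."""
    for i, res in enumerate(results):
        if src_pred & (1 << i):          # element is enabled
            if res:
                dest_pred |= 1 << i      # bits are set on success
        elif zeroing:
            dest_pred &= ~(1 << i)       # masked-out bit forced to zero
        # without zeroing, masked-out bits are left untouched
    return dest_pred

# dest starts at 0b1111; element 1 is masked out by src_pred
assert cmp_update_dest_pred(0b1111, 0b1101, [True, True, False], False) == 0b1111
assert cmp_update_dest_pred(0b1111, 0b1101, [True, True, False], True) == 0b1101
```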
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by swapping
src1 and src2.
In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":
    for (int i = 0; i < vl; ++i)
      ...
## C.MV Instruction

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:
[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]
A simplified version of the pseudocode for this operation is as follows:
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
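The twin-predication skipping can also be sketched in runnable form (a
simplification which assumes both rd and rs are vectors, with the register
file modelled as a plain list):

```python
# Runnable sketch of the twin-predicated move loop: masked-out source
# elements are skipped independently of masked-out destination elements.
# Assumes both rd and rs are vectors (the scalar cases are omitted).

def twin_pred_mv(regs, rd, rs, ps, pd, vl):
    i = j = 0
    while i < vl and j < vl:
        while i < vl and not (ps & (1 << i)):  # skip masked-out sources
            i += 1
        while j < vl and not (pd & (1 << j)):  # skip masked-out dests
            j += 1
        if i < vl and j < vl:
            regs[rd + j] = regs[rs + i]
        i += 1
        j += 1
    return regs

# VL=4: source elements 0 and 2 are enabled, dest elements 1 and 3
regs = [10, 11, 12, 13, 0, 0, 0, 0]
twin_pred_mv(regs, 4, 0, ps=0b0101, pd=0b1010, vl=4)
assert regs[4:] == [0, 10, 0, 12]
```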
## LOAD / STORE Instructions

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task when
converting a pre-existing scalar design to support parallelism.
So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.
In vectorised architectures there are usually at least two different modes
for LOAD/STORE:
* Read (or write for STORE) from sequential locations, where one
register specifies the address, and the one address is incremented
by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
vector elements each specify separate and distinct addresses.
To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          srcbase = ireg[rsv+i];          # multi-indirection mode
        else
          srcbase = ireg[rsv] + i*XLEN/8; # unit stride mode
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.
**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.
## Compressed LOAD / STORE Instructions
Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar mode on the src for LOAD
(and on the dest for STORE) selects "Unit Stride", whilst setting vector
mode selects "Multi-indirection".
# Element bitwidth polymorphism
Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
The effect of setting an element bitwidth is to re-cast each entry
in the register table, and, for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture each register
effectively now looks like this:
    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];
where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.
However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
causes accesses to one specific register to "spill over" into the
following registers in a sequential fashion. A much more accurate way
to reflect this is therefore:
    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];
When accessing any individual regfile[n].b entry it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software /
pseudo-code representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an access beyond the "real" register bytes
is ever attempted.
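The overspill behaviour can be demonstrated in Python (a sketch, with
the regfile modelled as a flat byte array rather than a C union): writing
ten 8-bit elements starting at x5 spills the last two into x6:

```python
import struct

# Sketch: the register file as flat bytes (RV64: 8 bytes per register).
# Writing 8-bit elements with VL=10 "overspills" from register x5 into x6.

XLEN_BYTES = 8
regfile = bytearray(128 * XLEN_BYTES)

def write_elem8(reg, offset, val):
    regfile[reg * XLEN_BYTES + offset] = val  # byte-granular view

for i in range(10):                 # VL=10, elwidth=8, rd=x5
    write_elem8(5, i, 0x40 + i)

# first 8 elements land in x5; elements 8 and 9 spill into x6
x5 = struct.unpack_from("<Q", regfile, 5 * XLEN_BYTES)[0]
x6 = struct.unpack_from("<Q", regfile, 6 * XLEN_BYTES)[0]
assert x5 == 0x4746454443424140
assert x6 == 0x4948
```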
Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size; this pseudo-code is otherwise
identical to its "non"-polymorphic version (above):
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if (elwidth == 16) {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if (elwidth == 32) {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...
So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.
However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:
* Analyse the bitwidth of all source operands and work out the
maximum. Record this as "maxsrcbitwidth"
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), then instead of the mandatory
  32-bit sign-extension / zero-extension (or whatever is specified in the
  standard RV specification), **change** that to sign-extending from the
  respective individual source operand's bitwidth (from the CSR table)
  out to "maxsrcbitwidth" (previously calculated).
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
destination: memory for sb/sw etc., or an offset section of the register
file for an arithmetic operation.
In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
      if (predval & 1<<i) # predication uses intregs
        src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
        src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(rd, destwid, id, result)
        ...
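The polymorphic add loop above can be condensed into a runnable Python
sketch (illustrative only: the regfile is modelled as a flat byte array,
reads at the source element width zero-extend naturally, and the helper
names `get_poly`/`set_poly` are not from the spec):

```python
# Condensed, runnable sketch of the polymorphic ADD loop, with the
# register file modelled as a flat byte array (RV64: 8 bytes/register).

regfile = bytearray(128 * 8)

def get_poly(reg, bits, offs):
    w = bits // 8
    return int.from_bytes(regfile[reg*8 + offs*w : reg*8 + (offs+1)*w],
                          "little")

def set_poly(reg, bits, offs, val):
    w = bits // 8
    regfile[reg*8 + offs*w : reg*8 + (offs+1)*w] = \
        (val % (1 << bits)).to_bytes(w, "little")  # truncate to elwidth

# rs1=x10 as 8-bit elements, rs2=x11 as 16-bit, rd=x12 as 16-bit, VL=2
for i in range(2):
    set_poly(10, 8, i, 0x7F)
    set_poly(11, 16, i, 0x0100)

maxsrcwid = max(8, 16)              # widen both sources to 16 bits
for i in range(2):
    a = get_poly(10, 8, i)          # zero-extends on read
    b = get_poly(11, 16, i)
    s = (a + b) % (1 << maxsrcwid)  # operate at max source width
    set_poly(12, 16, i, s)          # store at the dest elwidth

assert get_poly(12, 16, 0) == 0x017F
```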
## Polymorphic elwidth on LOAD/STORE

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.
This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:
    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits
Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:
    bitwidth = bw(elwidth);             // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth         // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock;             // modulo
    return &mem[srcbase + imm + offs];  // re-cast to uint8_t*, uint16_t* etc.
Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.
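The chunked address calculation can be sketched as follows (illustrative,
byte-addressed variant; `chunk_addr` is a hypothetical helper, and the
max(1, ...) guard ensures at least one element per block when the element
is wider than the operation):

```python
# Sketch of the element-chunking address calculation for a vectorised LW
# (opwidth=32) with 8-bit source elements: four elements per 32-bit block.

def chunk_addr(base_regs, i, imm, opwidth, elwidth_bits):
    elsperblock = max(1, opwidth // elwidth_bits)  # at least 1 elem/block
    srcbase = base_regs[i // elsperblock]          # integer divide
    offs = i % elsperblock
    return srcbase + imm + offs * (elwidth_bits // 8)

bases = [0x1000, 0x2000]
# elements 0-3 come from the first base address, 4-7 from the second
assert [chunk_addr(bases, i, 0, 32, 8) for i in range(8)] == \
       [0x1000, 0x1001, 0x1002, 0x1003, 0x2000, 0x2001, 0x2002, 0x2003]
```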
The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.
One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.
Thus the full pseudocode for all LD operations may be written out
as follows:
    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            # sign/zero-extend (or truncate) val to bw(destwid)
            set_polymorphed_reg(rd, bw(destwid), j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;

### add

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.
### addw
The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.
Standard Scalar RV64 (xlen):
* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
Polymorphic variant:
* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.
Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".
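The zero- versus sign-extension difference can be shown with a small
worked example (the helper names `zext`/`sext` are illustrative):

```python
# Worked example: widening two sources to max(rs1, rs2) bits,
# zero-extending for polymorphic add, sign-extending for the "W" variants.

def zext(val, frm, to):
    return val & ((1 << frm) - 1)        # upper bits stay zero

def sext(val, frm, to):
    val &= (1 << frm) - 1
    if val & (1 << (frm - 1)):           # sign bit set: fill upper bits
        val |= ((1 << to) - 1) ^ ((1 << frm) - 1)
    return val

rs1, rs2 = 0xFF, 0x0001                  # 8-bit and 16-bit operands
width = max(8, 16)
# polymorphic add: 0xFF zero-extends to 0x00FF, so 0xFF + 1 = 0x100
assert (zext(rs1, 8, width) + zext(rs2, 16, width)) & 0xFFFF == 0x0100
# polymorphic addw: 0xFF sign-extends to 0xFFFF (i.e. -1), so -1 + 1 = 0
assert (sext(rs1, 8, width) + sext(rs2, 16, width)) & 0xFFFF == 0x0000
```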
### addiw
Standard Scalar RV64I:
* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.
Polymorphic variant:
* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
# Predication Element Zeroing
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed through en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.
SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
## Single-predication (based on destination register)
Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).
Thus the pseudo-code loop for a predicated arithmetic operation
is modified to as follows:
    for (i = 0; i < VL; i++)
      if not zeroing: # an optimisation
        while (!(predval & 1<<i)) i++;
        if (i == VL) break
      if (predval & 1<<i) # predication uses intregs
        ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
      else if zeroing:
        ireg[rd+id] = 0
      ...
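The zeroing versus non-zeroing behaviour can be exercised with a runnable
sketch (illustrative: registers modelled as plain lists, applied to an
element-wise add):

```python
# Runnable sketch of single-predication with and without zeroing:
# masked-out destination elements are either zeroed or left untouched.

def pred_add(rd, rs1, rs2, predval, vl, zeroing):
    for i in range(vl):
        if predval & (1 << i):
            rd[i] = rs1[i] + rs2[i]
        elif zeroing:
            rd[i] = 0              # masked-out element set to zero
        # without zeroing the destination element is left untouched
    return rd

a, b = [1, 2, 3, 4], [10, 20, 30, 40]
assert pred_add([9, 9, 9, 9], a, b, 0b0101, 4, zeroing=False) == [11, 9, 33, 9]
assert pred_add([9, 9, 9, 9], a, b, 0b0101, 4, zeroing=True) == [11, 0, 33, 0]
```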
# VBLOCK Format

See ancillary resource: [[vblock_format]]
# Subsets of RV functionality
This section describes the differences when SV is implemented on top of
different subsets of RV.
## Common options
It is permitted to only implement SVprefix and not the VBLOCK instruction
format option, and vice-versa. UNIX Platforms **MUST** raise illegal
instruction on seeing an unsupported VBLOCK or SVprefix opcode, so that
traps may emulate the format.
It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
*MUST* raise illegal instruction on implementations that do not support
VL or SUBVL.
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.
## RV32 / RV32F
When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option" it is worth noting.
## RV32G
Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.
## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).
## RV32E
In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.
RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.
## RV128
RV128 has not been especially considered, here, however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.
# Under consideration
For element-grouping, if there is unused space within a register
(e.g. 3 16-bit elements in a 64-bit register), the recommendation is:
* For the unused elements in an integer register, the used element
closest to the MSB is sign-extended on write and the unused elements
are ignored on read.
* The unused elements in a floating-point register are treated as-if
they are set to all ones on write and are ignored on read, matching the
existing standard for storing smaller FP values in larger registers.
---
Info register:
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
>1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.
----
> And what about instructions like JALR?
answer: they're not vectorised, so not a problem
----
* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
*32* if elwidth == default
---
TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.
---
TODO, update to remove RegCam and PredCam CSRs, just use SVprefix and
VBLOCK format
---
Could the 8 bit Register VBLOCK format use regnum<<1 instead, only accessing regs 0 to 64?
--
Expand the range of SUBVL and its associated svsrcoffs and svdestoffs by
adding a 2nd STATE CSR (or extending STATE to 64 bits). Future version?
--
TODO evaluate strncpy and strlen
RVV version:

    strncpy:
        mv a3, a0            # Copy dst
    loop:
        setvli x0, a2, vint8 # Vectors of bytes.
        vlbff.v v1, (a1)     # Get src bytes
        vseq.vi v0, v1, 0    # Flag zero bytes
        vmfirst a4, v0       # Zero found?
        vmsif.v v0, v0       # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t # Write out bytes
        bgez a4, exit        # Done
        csrr t1, vl          # Get number of bytes fetched
        add a1, a1, t1       # Bump src pointer
        sub a2, a2, t1       # Decrement count.
        add a3, a3, t1       # Bump dst pointer
        bnez a2, loop        # Anymore?
    exit:
        ret
SV version (WIP):

    strncpy:
        mv a3, a0
        SETMVLI 8              # set max vector to 8
        RegCSR[a3] = 8bit, a3, scalar
        RegCSR[a1] = 8bit, a1, scalar
        RegCSR[t0] = 8bit, t0, vector
        PredTb[t0] = ffirst, x0, inv
    loop:
        SETVLI a2, t4          # t4 and VL now 1..8
        ldb t0, (a1)           # t0 fail first mode
        bne t0, x0, allnonzero # still ff
        # VL points to last nonzero
        GETVL t4               # from bne tests
        addi t4, t4, 1         # include zero
        SETVL t4               # set exactly to t4
        stb t0, (a3)           # store incl zero
        ret                    # end subroutine
    allnonzero:
        stb t0, (a3)           # VL legal range
        GETVL t4               # from bne tests
        add a1, a1, t4         # Bump src pointer
        sub a2, a2, t4         # Decrement count.
        add a3, a3, t4         # Bump dst pointer
        bnez a2, loop          # Anymore?
    exit:
        ret
Notes:
* Setting MVL to 8 is just an example. If enough registers are spare it may be set to XLEN which will require a bank of 8 scalar registers for a1, a3 and t0.
* obviously if that is done, t0 is not separated by 8 full registers, and would overwrite t1 thru t7. x80 would work well, as an example, instead.
* with the exception of the GETVL (a pseudo code alias for csrr), every single instruction above may use RVC.
* RVC C.BNEZ can be used because rs1' may be extended to the full 128 registers through redirection
* RVC C.LW and C.SW may be used because the W format may be overridden by the 8 bit format. All of t0, a3 and a1 are overridden to make that work.
* with the exception of the GETVL, all Vector Context may be done in VBLOCK form.
* setting predication to x0 (zero) and invert on t0 is a trick to enable just ffirst on t0
* ldb and bne are both using t0, both in ffirst mode
* ldb will end on illegal mem, reduce VL, having copied all sorts of stuff into t0
* bne t0 x0 tests up to the NEW VL for nonzero, vector t0 against scalar x0
* however, as t0 is in ffirst mode, the first fail will ALSO stop the compares, and reduce VL as well
* the branch only goes to allnonzero if all tests succeed
* if it did not, we can safely increment VL by 1 (using t4) to include the zero.
* SETVL sets *exactly* the requested amount into VL.
* the SETVL just after the allnonzero label is needed in case the ldb ffirst activates but the bne ffirst does not.
* this would cause the stb to copy up to the end of the legal memory
* of course, on the next loop the ldb would throw a trap, as a1 now points to the first illegal mem location.
RVV version (strlen):

        mv a3, a0            # Save start
    loop:
        setvli a1, x0, vint8 # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)    # Get bytes
        csrr a1, vl          # Get bytes actually read e.g. if fault
        vseq.vi v0, v1, 0    # Set v0[i] where v1[i] = 0
        add a3, a3, a1       # Bump pointer
        vmfirst a2, v0       # Find first set bit in mask, returns -1 if none
        bltz a2, loop        # Not found?
        add a0, a0, a1       # Sum start + bump
        add a3, a3, a2       # Add index of zero byte
        sub a0, a3, a0       # Subtract start address+bump
        ret