* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 25 jun 2019
* See: main [[specification]] and [[appendix]]
[[!toc ]]
# Instructions <a name="instructions" />

To illustrate how scalar operations are turned into "vector" and
"predicated" operations, simplified example pseudo-code for an integer
ADD operation is shown below. Floating-point would use the FP Register
Table instead.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code.

## SUBVL Pseudocode <a name="subvl-pseudocode"></a>

Adding support for SUBVL is a matter of adding an extra inner
for-loop, where the src and dest registers are still incremented
inside the inner part. Note that the predication is still taken from
the VL index.

So whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)".

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector) { id += 1; }
          if (int_vec[rs1].isvector) { irs1 += 1; }
          if (int_vec[rs2].isvector) { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }

NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
elwidth handling etc. all left out.

## Instruction Format

It is critical to appreciate that there are
**no new instructions added by SV, at all**.

Examples are given below where "standard" RV scalar behaviour is augmented.

## Branch Instructions

Branch operations are augmented slightly to be a little more like FP
Compares (FEQ, FNE etc.), by permitting the accumulation (and storage)
of multiple comparisons into a register (taken indirectly from the predicate
table). As such, the "ffirst" (fail-on-first) condition mode can be enabled.
See ffirst mode in the Predication Table section.

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. If the second register is also marked as predicated,
that (scalar) predicate register is used as a **destination** to store
the results of all the comparisons.

In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Pseudo-code for branch:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(src1, src2) # scalar compare
            goto branch
        return

    preg = int_pred_reg # predicate register file
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

Note also that, whereas predication normally requires the register being
used to have its own CSR register entry active in order for the
**predication** CSR register entry to also be active, for branches this
is **not** the case: src2 does **not** have to have its CSR register
entry marked as active in order for predication on src2 to be active.

### Floating-point Comparisons

Floating-point branch operations do not exist: there are only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

As RV Scalar does not have "FNE", predication inversion must be used.
Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

Also: note that ffirst (fail first mode) applies directly to this operation.

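Below is a minimal python sketch (not part of the specification; freg,
pred and VL are illustrative stand-ins for the FP register file, the
predicate value and the vector length) showing how FNE may be
synthesised by inverting a vectorised FEQ per element:

    # Hypothetical model: FNE as predicated inversion of FEQ.
    def vec_fne(freg, fs1, fs2, pred, VL):
        result = 0
        for i in range(VL):
            if (pred >> i) & 1:                # masked-out bits stay 0
                if not (freg[fs1 + i] == freg[fs2 + i]):  # inverted FEQ
                    result |= 1 << i
        return result                          # accumulated into INT rd
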
### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0, x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite
logical, though counterintuitive at first. Clearly it is **not**
recommended to redirect x0 with a CSR register entry, however as a means
to opaquely obtain a predication target it is the only sensible option
that does not involve additional special CSRs (or, worse, additional
special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
       rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
       rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
       ps = get_pred_val(FALSE, rs); # predication on src
       pd = get_pred_val(FALSE, rd); # ... AND on dest
       for (int i = 0, int j = 0; i < VL && j < VL;):
          if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
          xSTATE.srcoffs = i # save context
          xSTATE.destoffs = j # save context
          reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
          if (int_csr[rs].isvec) i++;
          if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

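As a concrete illustration, here is a small runnable python model
(a sketch under assumed names, not normative) of the twin-predicated
pattern above, specialised to MV; a scalar source with a vector
destination yields VSPLAT:

    # Sketch: srcvec/dstvec model the isvec flags; ps/pd are predicates.
    def twin_pred_mv(reg, rs, rd, srcvec, dstvec, ps, pd, VL):
        i = j = 0
        while i < VL and j < VL:
            if srcvec:
                while i < VL and not (ps >> i) & 1: i += 1  # skip src
            if dstvec:
                while j < VL and not (pd >> j) & 1: j += 1  # skip dest
            if i == VL or j == VL: break
            reg[rd + j] = reg[rs + i]          # the scalar operation
            if srcvec: i += 1
            if dstvec: j += 1
            else: break

    reg = [7] + [0] * 15
    twin_pred_mv(reg, rs=0, rd=8, srcvec=False, dstvec=True,
                 ps=1, pd=0b1111, VL=4)
    assert reg[8:12] == [7, 7, 7, 7]           # VSPLAT
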
### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change.

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src    | dest   | predication | op             |
scalar | vector | none        | VSPLAT         |
scalar | vector | destination | sparse VSPLAT  |
scalar | vector | 1-bit dest  | VINSERT        |
vector | scalar | 1-bit? src  | VEXTRACT       |
vector | vector | none        | VCOPY          |
vector | vector | src         | Vector Gather  |
vector | vector | dest        | Vector Scatter |
vector | vector | src & dest  | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY   |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with zeroing off, and inversion on the src and dest
predication for one of the two C.MV operations.

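A sketch of the resulting behaviour (python, illustrative; vd, vs1, vs2
and pred are assumed names, not from the spec):

    # VMERGE out of two back-to-back predicated MVs, zeroing off:
    # where the predicate bit is set take vs1, where clear take vs2.
    def vmerge(reg, vd, vs1, vs2, pred, VL):
        for i in range(VL):                    # first C.MV: predicate
            if (pred >> i) & 1:
                reg[vd + i] = reg[vs1 + i]
        for i in range(VL):                    # second C.MV: inverted
            if not (pred >> i) & 1:
                reg[vd + i] = reg[vs2 + i]
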
### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type
conversion operation of the appropriate size covering the source and
destination register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the load addresses are sequentially incremented by the src
register element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs, imm_offs) # LD not VLD!
       rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
       rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
       ps = get_pred_val(FALSE, rs); # predication on src
       pd = get_pred_val(FALSE, rd); # ... AND on dest
       for (int i = 0, int j = 0; i < VL && j < VL;):
          if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
          if (int_csr[rs].isvec)
             # indirect mode (multi mode): src elements are addresses
             srcbase = ireg[rsv+i];
          else
             # unit stride mode
             srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
          ireg[rdv+j] <= mem[srcbase + imm_offs];
          if (!int_csr[rs].isvec &&
              !int_csr[rd].isvec) break # scalar-scalar LD
          if (int_csr[rs].isvec) i++;
          if (int_csr[rd].isvec) j++;

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

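The LOAD-Multiple effect can be sketched as follows (python; purely
illustrative, the stack layout and helper names are assumptions, not
part of the specification):

    # Predicated C.LWSP as LOAD-Multiple: successive 32-bit stack
    # slots are popped into only those destination registers whose
    # predicate bits are set.
    def c_lwsp_multi(reg, mem, sp, rd_base, pred, VL):
        slot = 0
        for j in range(VL):
            if (pred >> j) & 1:
                reg[rd_base + j] = mem[sp + 4 * slot]
                slot += 1              # stack offset advances per hit
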
## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar mode
on the src for LOAD (and dest for STORE) selects "Unit Stride",
and setting vector mode selects "Multi-indirection".

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, each register
effectively now looks like this:

    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

Implementors must ensure that over-runs of the register file throw
an exception.

The pseudo-code is as follows, to demonstrate how the sign-extending
and width-extending works:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0: return xlen
        if elwidth == 1: return 8
        if elwidth == 2: return 16
        // elwidth == 3:
        return 32

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    int i, id=0, irs1=0, irs2=0;
    maxsrcwid = get_max_elwidth(rs1, rs2)  # source element width(s)
    destwid = bw(int_csr[rd].elwidth)      # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, id, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, again exactly as with
standard (scalar) RV operations, the floating-point flag is raised
(FCSR). And, again, just as with scalar operations, it is software's
responsibility to check this flag. Given that the FCSR flags are
"accrued", the fact that multiple element operations could have
occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

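Working the same example through in python (element values are
arbitrary, chosen only to show the truncation):

    # RS1 elwidth=16, RS2 elwidth=8, RD elwidth=64.
    RS1, RS2 = 0x1234, 0x21
    opwid = max(16, 8)               # operation width: 16 bits
    shamt = RS2 & (opwid - 1)        # RS2 truncated: 0x21 & 0xF = 1
    result = (RS1 << shamt) & ((1 << 64) - 1)  # 64-bit destination
    assert result == 0x2468          # an untruncated shift amount of
                                     # 0x21 would give the wrong answer
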
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.

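The rule can be demonstrated with a short python model (a sketch,
unsigned case only; the function name is illustrative):

    # Polymorphic MULHU: multiply at the maximum of the two source
    # widths, then return the top half by shifting down by that width.
    def mulhu_poly(a, b, w1, w2):
        opwid = max(w1, w2)
        prod = (a & ((1 << w1) - 1)) * (b & ((1 << w2) - 1))
        return prod >> opwid

    # 16-bit x 8-bit: 24-bit product, shifted down by max(16, 8) = 16
    assert mulhu_poly(0xFFFF, 0xFF, 16, 8) == (0xFFFF * 0xFF) >> 16
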
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply. Where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

The pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth) # elements per block
        srcbase = ireg[rs + i / elsperblock];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, destwid))
            else:
                val = sign_extend(val, min(opwidth, destwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

# Predication Element Zeroing

The decision to add the *option* to zero unpredicated (masked-out)
elements was based on whether it would be useful, rather than on
how the microarchitecture is implemented (or optimised). Therefore,
both zeroing and non-zeroing are mandatory.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.

Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and is covered below).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation: skip masked-out elements
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector) { id += 1; }
                if (int_vec[rs1].isvector) { irs1 += 1; }
                if (int_vec[rs2].isvector) { irs2 += 1; }
                i += 1
            if i == VL:
                return
        if (predval & 1<<i)
            src1 = ...
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, id, result)
            if int_vec[rd].ffirst and result == 0:
                VL = i # result was zero: end loop early, return VL
                return
            if (!int_vec[rd].isvector) return
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, id, result)
        if (int_vec[rd ].isvector) { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL): return

## Twin-predication (based on source and destination register)

In twin-predication, the source is independently zero-predicated from
the destination. This means that the source may be zero-predicated *or*
the destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, and a predicate bit is *not* set, a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both the
source predicate AND the destination predicate are set to 1 does data
propagate; where either is set to 0, a zero element will ultimately
end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

Here is (simplified) pseudo-code for a twin zero-predicated MV operation:

    function op_mv(rd, rs) # MV, not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                # zerosrc: a clear src predicate bit passes a zero through
                ireg[rd+j] <= (ps & 1<<i) ? ireg[rs+i] : 0
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
            else if ((pd & 1<<j)) break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

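To tie the above together, here is a small runnable python model of the
twin zero-predicated MV (illustrative, not normative; it assumes the
vector-vector case, so both indices advance every iteration), followed
by a usage example showing zerosrc passing zeroes through:

    # Sketch of the twin zero-predicated MV loop above.
    def twin_mv(reg, rs, rd, ps, pd, zerosrc, zerodst, VL):
        i = j = 0
        while i < VL and j < VL:
            if not zerosrc:
                while i < VL and not (ps >> i) & 1: i += 1  # skip src
            if not zerodst:
                while j < VL and not (pd >> j) & 1: j += 1  # skip dest
            if i == VL or j == VL: break
            if (pd >> j) & 1:
                reg[rd + j] = reg[rs + i] if (ps >> i) & 1 else 0
            elif zerodst:
                reg[rd + j] = 0
            i += 1; j += 1                     # vector-vector case

    reg = [1, 2, 3, 4] + [9] * 12
    twin_mv(reg, rs=0, rd=8, ps=0b0101, pd=0b1111,
            zerosrc=True, zerodst=False, VL=4)
    assert reg[8:12] == [1, 0, 3, 0]  # clear ps bits pass zeroes through
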
# Exceptions
TODO: expand.