# Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal

Key insight: Simple-V is intended as an abstraction layer to provide
a consistent "API" to parallelisation of existing *and future* operations.
*Actual* internal hardware-level parallelism is *not* required, such
that Simple-V may be viewed as providing a "compact" or "consolidated"
means of issuing multiple near-identical arithmetic instructions to an
instruction queue (FIFO), pending execution.

*Actual* parallelism, if added independently of Simple-V in the form
of out-of-order restructuring (including parallel ALU lanes), VLIW
implementations, SIMD, or anything else, would then benefit *if*
Simple-V were added on top.
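
A toy software model (purely illustrative, with invented names) captures the
"compact issue" idea: one instruction whose destination is marked as a vector
of length VL is expanded at issue time into VL near-identical scalar
operations placed on an ordinary instruction queue.

```python
# Toy model (illustrative only): expand one "vector-marked" ADD into a
# queue of scalar micro-ops, as a Simple-V front-end conceptually might.
from collections import deque

def issue_add(queue, rd, rs1, rs2, vector_len):
    """Push vector_len near-identical scalar ADDs onto the issue queue."""
    for i in range(vector_len):
        queue.append(("ADD", rd + i, rs1 + i, rs2 + i))

q = deque()
issue_add(q, rd=8, rs1=16, rs2=24, vector_len=4)
first = q.popleft()   # the back-end may drain this with 1 ALU or many
```

Whether the queue is drained by a single ALU, by multiple parallel lanes, or
by anything in between is invisible at the ISA level.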

[[!toc ]]

# Introduction

This proposal exists so as to be able to satisfy several disparate
requirements: power-conscious, area-conscious, and performance-conscious
designs all pull an ISA and its implementation in different conflicting
directions, as do the specific intended uses for any given implementation.

Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (AndesStar v3 and Cray respectively)
  so need work to adapt to the RISC-V ethos and paradigm
* Sufficiently large so as to make adoption (and exploration for
  analysis and review purposes) prohibitively expensive
* Both contain partial duplication of pre-existing RISC-V instructions
  (an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
  at the instruction level.
* Both require that their respective parallelism paradigm be implemented
  along-side and integral to their respective functionality *or not at all*.
* Both independently have methods for introducing parallelism that
  could, if separated, benefit
  *other areas of RISC-V, not just DSP or Floating-point respectively*.

There are also key differences between Vectorisation and SIMD (full
details outlined in the Appendix), the key points being:

* SIMD has an extremely seductive, compelling ease-of-implementation argument:
  each operation is passed to the ALU, which is where the parallelism
  lies. There is *negligible* (if any) impact on the rest of the core
  (with life instead being made hell for compiler writers and applications
  writers due to extreme ISA proliferation).
* By contrast, Vectorisation has quite some complexity (in exchange for
  considerable flexibility, reduction in opcode proliferation and much more).
* Vectorisation typically includes much more comprehensive memory load
  and store schemes (unit stride, constant-stride and indexed), which
  in turn have ramifications: virtual memory misses (TLB cache misses)
  and even multiple page-faults... all caused by a *single instruction*.
* By contrast, SIMD can use "standard" memory load/stores (32-bit aligned
  to pages), and these load/stores have absolutely nothing to do with the
  SIMD / ALU engine, no matter how wide the operand.

Overall it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area (and if that can be offered even on a per-operation basis that
would provide even more flexibility).

Additionally it makes sense to *split out* the parallelism inherent within
each of P and V, and to see if each of P and V then, in *combination* with
a "best-of-both" parallelism extension, could be added *on top* of
this proposal, to topologically provide the exact same functionality of
each of P and V. Each of P and V then can focus on providing the best
operations possible for their respective target areas, without being
hugely concerned about the actual parallelism.

Furthermore, an additional goal of this proposal is to reduce the number
of opcodes utilised by each of P and V as they currently stand, leveraging
existing RISC-V opcodes where possible, and also potentially allowing
P and V to make use of Compressed Instructions as a result.

# Analysis and discussion of Vector vs SIMD

There are six combined areas between the two proposals that help with
parallelism (increased performance, reduced power / area) without
over-burdening the ISA with a huge proliferation of
instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops.
* Single-instruction LOAD/STORE.
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the conclusion comes out clearly in favour of (effectively)
variable-length SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme
cases 16 or 32 simultaneous operations, the setup, teardown and corner-cases
of SIMD are extremely burdensome except for applications whose requirements
*specifically* match the *precise and exact* depth of the SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

To explain this further: for increased workloads over time, as the
performance requirements increase for new target markets, implementors
choose to extend the SIMD width (so as to again avoid mixing parallelism
into the instruction issue phases: the primary "simplicity" benefit of
SIMD in the first place), with the result that the entire opcode space
effectively doubles with each new SIMD width that's added to the ISA.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.
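
The corner-case burden can be illustrated with a small counting model (the
widths are assumed purely for illustration): a 4-wide SIMD engine processing
7 elements needs one full-width pass plus a 3-element scalar tail, whereas a
variable-length vector simply issues a shorter final pass.

```python
# Counting model (assumed widths, illustrative only): fixed-width SIMD
# leaves a scalar "tail" whenever N is not a multiple of the engine width,
# whereas a variable-length vector shortens its final pass instead.

def simd_passes(n, width=4):
    """Fixed SIMD: (full-width passes, leftover tail elements)."""
    return n // width, n % width

def vector_passes(n, maxvl=4):
    """Variable-length vector: each pass processes min(n, maxvl) elements."""
    passes = 0
    while n > 0:
        n -= min(n, maxvl)
        passes += 1
    return passes
```

The tail (and the matching head/alignment code) is exactly the setup and
teardown burden referred to above; the variable-length form has none.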

## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
have to then have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized width ALU engine, anyway.

Not least: in places where an ISA was previously constrained (due, for
whatever reason, to limitations of the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.

## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversion) but is also
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration. Each new type results in
an increased O(N^2) conversion space and, as anyone who has examined
python's source code (which has built-in polymorphic type-conversion)
knows, the task is more complex than it first seems.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite specific use-cases left to
be part of an actual instruction (DSP or FP).
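
The quadratic growth is easy to make concrete: every ordered pair of distinct
types needs its own conversion rule, so N types give N*(N-1) rules. A quick
count for the bit-widths mentioned above:

```python
# Counting the implicit-conversion space for the types mentioned in the
# text: four integer widths plus three floating-point widths (illustrative).
from itertools import product

types = [("int", w) for w in (8, 16, 32, 64)] + \
        [("float", w) for w in (16, 32, 64)]
conversions = [(a, b) for a, b in product(types, types) if a != b]
```

Seven types already give 42 ordered conversion pairs; each newly added type
adds a further 2*N rules to define, verify, and implement.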

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated either indefinitely or a fixed number of times.

Its specific advantage over explicit loops is that the pipeline in a DSP
can potentially be kept completely full *even in an in-order single-issue
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in
order to keep ALU pipelines 100% occupied.

By bringing that capability in, this proposal could offer a way to increase
pipeline activity even in simpler implementations in the one key area
which really matters: the inner loop.
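
A minimal sketch of the mechanism (the hardware details are assumed for
illustration): a marked block of instructions is repeated a given number of
times with no branch instruction, and hence no branch penalty, appearing in
the instruction stream.

```python
# Minimal sketch (hypothetical mechanism) of a zero-overhead loop: the
# hardware repeats a marked block of instructions `count` times with no
# branch appearing in the stream, keeping the pipeline full.

def zol_execute(block, count, state):
    """Repeat `block` (a list of callables mutating `state`) `count` times."""
    for _ in range(count):
        for insn in block:
            insn(state)
    return state

state = {"acc": 0}
mac = lambda s: s.update(acc=s["acc"] + 3)   # stands in for one MAC step
zol_execute([mac], count=5, state=state)
```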

However, when looking at much more comprehensive schemes such as ZOLC
("A portable specification of zero-overhead loop control hardware
applied to embedded processors"), optimising only the single
inner loop seems inadequate, tending to suggest that ZOLC may be
better off being proposed as an entirely separate Extension.

## Single-instruction LOAD/STORE

In traditional Vector Architectures there are instructions which
result in multiple register-memory transfer operations resulting
from a single instruction. They're complicated to implement in hardware,
yet the benefit is a hugely consistent regularisation of memory accesses
that can be highly optimised with respect to both actual memory and any
L1, L2 or other caches. In Hwacha EECS-2015-263 the consequences of
getting this architecturally wrong are made explicitly clear:
L2 cache-thrashing at the very least.

Complications arise when Virtual Memory is involved: TLB cache misses
need to be dealt with, as do page faults. Some of the tradeoffs are
discussed in <http://people.eecs.berkeley.edu/~krste/thesis.pdf>, Section
4.6, and an article by Jeff Bush when faced with some of these issues
is particularly enlightening:
<https://jbush001.github.io/2015/11/03/lost-in-translation.html>

Interestingly, none of this complexity is faced in SIMD architectures...
but then they do not get the opportunity to optimise for highly-streamlined
memory accesses either.

With the "bang-per-buck" ratio being so high and the direct improvement
in L1 Instruction Cache usage, as well as the opportunity to optimise
L1 and L2 cache usage, the case for including Vector LOAD/STORE is
compelling.
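
The three addressing modes reduce to simple address arithmetic (the function
below uses assumed names for illustration, not the proposal's encodings).
Note that a single instruction in any of these modes touches up to `vl`
distinct addresses, any one of which may miss in the TLB or fault.

```python
# The three classic vector-load addressing modes, written out as the
# address arithmetic they imply (illustrative names, not encodings).

def load_addresses(base, vl, elsize, mode, stride=None, index_vec=None):
    if mode == "unit":                      # consecutive elements
        return [base + i * elsize for i in range(vl)]
    if mode == "strided":                   # constant byte stride
        return [base + i * stride for i in range(vl)]
    if mode == "indexed":                   # gather via an index vector
        return [base + index_vec[i] for i in range(vl)]
    raise ValueError(mode)
```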

## Mask and Tagging (Predication)

Tagging (aka Masks aka Predication) is a pseudo-method of implementing
simplistic branching in a parallel fashion, by allowing execution on
elements of a vector to be switched on or off depending on the results
of prior operations in the same array position.

The reason for considering this is simple: by *definition* it
is not possible to perform individual parallel branches in a SIMD
(Single-Instruction, **Multiple**-Data) context. Branches (modifying
of the Program Counter) will result in *all* parallel data having
a different instruction executed on it: that's just the definition of
SIMD, and it is simply unavoidable.

So these are the ways in which conditional execution may be implemented:

* explicit compare and branch: BNE x, y -> offs would jump offs
  instructions if x was not equal to y
* explicit store of tag condition: CMP x, y -> tagbit
* implicit (condition-code) ADD results in a carry, carry bit implicitly
  (or sometimes explicitly) goes into a "tag" (mask) register

The first of these is a "normal" branch method, which is flat-out impossible
to parallelise without look-ahead and effectively rewriting instructions.
This would defeat the purpose of RISC.

The latter two are where parallelism becomes easy to do without complexity:
every operation is modified to be "conditionally executed" (in an explicit
way directly in the instruction format *or* implicitly).

RVV (Vector-Extension) proposes to have *explicit* storing of the compare
in a tag/mask register, and to *explicitly* have every vector operation
*require* that its operation be "predicated" on the bits within an
explicitly-named tag/mask register.

SIMD (P-Extension) has not yet published precise documentation on what its
schema is to be: there is however verbal indication at the time of writing
that:

> The "compare" instructions in the DSP/SIMD ISA proposed by Andes will
> be executed using the same compare ALU logic for the base ISA with some
> minor modifications to handle smaller data types. The function will not
> be duplicated.

This is an *implicit* form of predication as the base RV ISA does not have
condition-codes or predication. By adding a CSR it becomes possible
to also tag certain registers as "predicated if referenced as a destination".
Example:

    // in future operations from now on, if r0 is the destination use r5 as
    // the PREDICATION register
    SET_IMPLICIT_CSRPREDICATE r0, r5
    // store the compares in r5 as the PREDICATION register
    CMPEQ8 r5, r1, r2
    // r0 is used here. ah ha! that means it's predicated using r5!
    ADD8 r0, r1, r3

With enough registers (and in RISC-V there are enough registers) some fairly
complex predication can be set up and yet still execute without significant
stalling, even in a simple non-superscalar architecture.
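
The assembly example above can be modelled in software. Register and CSR
behaviour here is assumed purely for illustration: the CSR maps destination
r0 to predicate register r5, CMPEQ8 fills r5 with per-element compare bits,
and a subsequent ADD8 targeting r0 is implicitly masked by those bits.

```python
# Illustrative model of implicit CSR predication (invented semantics).
csr_pred = {}                  # dest regno -> predicate regno
regs = {i: 0 for i in range(8)}

def set_implicit_csrpredicate(rd, rp):
    csr_pred[rd] = rp

def cmpeq8(rp, rs1_vals, rs2_vals):
    """Store one compare bit per element into register rp."""
    bits = 0
    for i, (a, b) in enumerate(zip(rs1_vals, rs2_vals)):
        bits |= (a == b) << i
    regs[rp] = bits

def add8(rd_vals, rd, rs1_vals, rs2_vals):
    """Element-wise add, masked by rd's implicit predicate (if any)."""
    mask = regs[csr_pred[rd]] if rd in csr_pred else ~0
    return [a + b if (mask >> i) & 1 else d
            for i, (d, a, b) in enumerate(zip(rd_vals, rs1_vals, rs2_vals))]

set_implicit_csrpredicate(0, 5)
cmpeq8(5, [1, 2, 3, 4], [1, 0, 3, 0])           # mask becomes 0b0101
result = add8([9, 9, 9, 9], 0, [1, 2, 3, 4], [10, 10, 10, 10])
```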

(For details on how Branch Instructions would be retro-fitted to indirectly
predicated equivalents, see Appendix.)

## Conclusions

In the above sections the six different ways in which parallel instruction
execution has closely and loosely inter-related implications for the ISA and
for implementors were outlined. The pluses and minuses came out as
follows:

* Fixed vs variable parallelism: <b>variable</b>
* Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
* Implicit vs explicit type-conversion: <b>explicit</b>
* Implicit vs explicit inner loops: <b>implicit but best done separately</b>
* Single-instruction Vector LOAD/STORE: <b>Complex but highly beneficial</b>
* Tag or no-tag: <b>Complex but highly beneficial</b>

In particular:

* variable-length vectors came out on top because of the high setup, teardown
  and corner-cases associated with the fixed width of SIMD.
* Implicit bit-width helps to extend the ISA to escape from
  former limitations and restrictions (in a backwards-compatible fashion),
  whilst also leaving implementors free to simplify implementations
  by using actual explicit internal parallelism.
* Implicit (zero-overhead) loops provide a means to keep pipelines
  potentially 100% occupied in a single-issue in-order implementation
  i.e. *without* requiring a super-scalar or out-of-order architecture,
  but doing a proper, full job (ZOLC) is an entirely different matter.

Constructing a SIMD/Simple-Vector proposal based around these six
requirements would therefore seem to be a logical thing to do.

# Instruction Format

The instruction format for Simple-V does not actually have *any* explicit
compare operations, *any* arithmetic, floating point or memory instructions.
Instead it *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations and LOAD/STORE
depending on implicit CSR configurations for both vector length and
bitwidth. *This includes Compressed instructions* as well as future ones.

* For analysis of RVV see [[v_comparative_analysis]] which begins to
  outline topologically-equivalent mappings of instructions
* Also see Appendix "Retro-fitting Predication into branch-explicit ISA"
  for format of Branch opcodes.

**TODO**: *analyse and decide whether the implicit nature of predication
as proposed is or is not a lot of hassle, and if explicit prefixes are
a better idea instead. Parallelism therefore effectively may end up
as always being 64-bit opcodes (32 for the prefix, 32 for the instruction)
with some opportunities to use Compressed bringing it down to 48.
Also to consider is whether one or both of the last two remaining Compressed
instruction codes in Quadrant 1 could be used as a parallelism prefix,
bringing parallelised opcodes down to 32-bit (when combined with C)
and having the benefit of being explicit.*

## Branch Instruction:

This is the overloaded table for Integer-base Branch operations. Opcode
(bits 6..0) is set in all cases to 1100011.

[[!table data="""
31 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 8 | 7 | 6 ... 0 |
imm[12|10:5]| rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
7 | 5 | 5 | 3 | 4 | 1 | 7 |
reserved | src2 | src1 | BPR | predicate rs3 || BRANCH |
reserved | src2 | src1 | 000 | predicate rs3 || BEQ |
reserved | src2 | src1 | 001 | predicate rs3 || BNE |
reserved | src2 | src1 | 010 | predicate rs3 || rsvd |
reserved | src2 | src1 | 011 | predicate rs3 || rsvd |
reserved | src2 | src1 | 100 | predicate rs3 || BLT |
reserved | src2 | src1 | 101 | predicate rs3 || BGE |
reserved | src2 | src1 | 110 | predicate rs3 || BLTU |
reserved | src2 | src1 | 111 | predicate rs3 || BGEU |
"""]]

This is the overloaded table for Floating-point Predication operations.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
The target simply needs to change to be a predication bitfield.

As with Standard RVF/D/Q, Opcode (bits 6..0) is set in all cases to 1010011.
Likewise Single-precision (fmt, bits 26..25) is still set to 00.
Double-precision is still set to 01, whilst Quad-precision
appears not to have a definition in V2.3-Draft (but should be unaffected).

It is however noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms that
becomes more of an impact as an explicit (scalar) instruction is needed
to invert the predicate. An additional encoding funct3=011 is therefore
proposed to cater for this.

[[!table data="""
31 .. 27| 26 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 7 | 6 ... 0 |
funct5 | fmt | rs2 | rs1 | funct3 | rd | opcode |
5 | 2 | 5 | 5 | 3 | 5 | 7 |
10100 | 00/01/11 | src2 | src1 | 010 | pred rs3 | FEQ |
10100 | 00/01/11 | src2 | src1 | *011* | pred rs3 | FNE |
10100 | 00/01/11 | src2 | src1 | 001 | pred rs3 | FLT |
10100 | 00/01/11 | src2 | src1 | 000 | pred rs3 | FLE |
"""]]

Note (**TBD**): floating-point exceptions will need to be extended
to cater for multiple exceptions (and statuses of the same). The
usual approach is to have an array of status codes and bit-fields,
and one exception, rather than throw separate exceptions for each
Vector element.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    if I/F == INT: # integer type cmp
      pred_enabled = int_pred_enabled # TODO: exception if not set!
      preg = int_pred_reg[rd]
    else:
      pred_enabled = fp_pred_enabled # TODO: exception if not set!
      preg = fp_pred_reg[rd]

    s1 = CSRvectorlen[src1] > 1;
    s2 = CSRvectorlen[src2] > 1;
    for (int i=0; i<vl; ++i)
       preg[i] = cmp(s1 ? reg[src1+i] : reg[src1],
                     s2 ? reg[src2+i] : reg[src2]);

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length * (number of SIMD elements) bits
  in Predicate Register rs3 as opposed to just Vector-Length bits.
* Predicated Branches do not actually have an adjustment to the Program
  Counter, so bits 25 through 30 are not needed in any case.
* There are plenty of reserved opcodes for which bits 25 through 30 could
  be put to good use if there is a suitable use-case.
* FEQ and FNE (and BEQ and BNE) are included in order to save one
  instruction having to invert the resultant predicate bitfield.
  FLT and FLE may be inverted to FGT and FGE if needed by swapping
  src1 and src2 (likewise the integer counterparts).
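
The last note (synthesising FGT/FGE by swapping operands rather than adding
opcodes) can be sanity-checked with a small element-wise model:

```python
# Element-wise FLT predicate; FGT is obtained for free by swapping the
# operands, since a[i] > b[i] is exactly b[i] < a[i] (illustrative model).

def flt(vl, a, b):
    """Predicate bits for a[i] < b[i], one bit per element."""
    return [int(a[i] < b[i]) for i in range(vl)]

a, b = [1.0, 5.0, 2.0], [2.0, 2.0, 2.0]
fgt_via_swap = flt(3, b, a)      # b[i] < a[i]  ==  a[i] > b[i]
```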

## Compressed Branch Instruction:

[[!table data="""
15..13 | 12...10 | 9..7 | 6..5 | 4..2 | 1..0 | name |
funct3 | imm | rs10 | imm | rs20 | op | |
3 | 3 | 3 | 2 | 3 | 2 | |
C.BPR | pred rs3 | src1 | I/F B | src2 | C1 | |
110 | pred rs3 | src1 | I/F 0 | src2 | C1 | P.EQ |
111 | pred rs3 | src1 | I/F 0 | src2 | C1 | P.NE |
110 | pred rs3 | src1 | I/F 1 | src2 | C1 | P.LT |
111 | pred rs3 | src1 | I/F 1 | src2 | C1 | P.LE |
"""]]

Notes:

* Bits 5, 13, 14 and 15 make up the comparator type.
* In both floating-point and integer cases there are four predication
  comparators: EQ/NEQ/LT/LE (with GT and GE being synthesised by inverting
  src1 and src2).

## LOAD / STORE Instructions

For full analysis of topological adaptation of RVV LOAD/STORE
see [[v_comparative_analysis]]. All three types (LD, LD.S and LD.X)
may be implicitly overloaded into the one base RV LOAD instruction.

Revised LOAD:

[[!table data="""
31 | 30 | 29 25 | 24 20 | 19 15 | 14 12 | 11 7 | 6 0 |
imm[11:0] |||| rs1 | funct3 | rd | opcode |
1 | 1 | 5 | 5 | 5 | 3 | 5 | 7 |
? | s | rs2 | imm[4:0] | base | width | dest | LOAD |
"""]]

The exact same corresponding adaptation is also carried out on the single,
double and quad precision floating-point LOAD-FP and STORE-FP operations,
which fit the exact same instruction format. Thus all three types
(unit, stride and indexed) may be fitted into FLW, FLD and FLQ,
as well as FSW, FSD and FSQ.

Notes:

* LOAD remains functionally (topologically) identical to RVV LOAD
  (for both integer and floating-point variants).
* Predication CSR-marking register is not explicitly shown in instruction, it's
  implicit based on the CSR predicate state for the rd (destination) register
* rs2, the source, may *also be marked as a vector*, which implicitly
  is taken to indicate "Indexed Load" (LD.X)
* Bit 30 indicates "element stride" or "constant-stride" (LD or LD.S)
* Bit 31 is reserved (ideas under consideration: auto-increment)
* **TODO**: include CSR SIMD bitwidth in the pseudo-code below.
* **TODO**: clarify where width maps to elsize

Pseudo-code (excludes CSR SIMD bitwidth):

    if (unit-strided) stride = elsize;
    else stride = areg[as2]; // constant-strided

    pred_enabled = int_pred_enabled
    preg = int_pred_reg[rd]

    for (int i=0; i<vl; ++i)
      if (pred_enabled[rd] && [!]preg[i])
        for (int j=0; j<seglen+1; j++)
        {
          if (CSRvectorised[rs2])
             offs = vreg[rs2][i]
          else
             offs = i*(seglen+1)*stride;
          vreg[rd+j][i] = mem[sreg[base] + offs + j*stride];
        }

Taking CSR (SIMD) bitwidth into account involves using the vector
length and register encoding according to the "Bitwidth Virtual Register
Reordering" scheme shown in the Appendix (see function "regoffs").

A similar instruction exists for STORE, with identical topological
translation of all features. **TODO**

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are of the same format; for STORE, bits 2-4 are
a src register instead of dest:

[[!table data="""
15 13 | 12 10 | 9 7 | 6 5 | 4 2 | 1 0 |
funct3 | imm | rs10 | imm | rd0 | op |
3 | 3 | 3 | 2 | 3 | 2 |
C.LW | offset[5:3] | base | offset[2|6] | dest | C0 |
"""]]

Unfortunately it is not possible to fit the full functionality
of vectorised LOAD / STORE into C.LD / C.ST: the "X" variants (Indexed)
require another operand (rs2) in addition to the operand width
(which is also missing), offset, base, and src/dest.

However a close approximation may be achieved by taking the top bit
of the offset in each of the five types of LD (and ST), reducing the
offset to 4 bits and utilising the 5th bit to indicate whether "stride"
is to be enabled. In this way it is at least possible to introduce
that functionality.

(**TODO**: *assess whether the loss of one bit from offset is worth having
"stride" capability.*)
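
The suggested re-packing of the 5-bit offset field can be sketched as
follows (an assumption for illustration, not a settled encoding): the top
bit selects "stride" mode, leaving a 4-bit offset.

```python
# Sketch of the proposed compressed-offset re-packing: bit 4 of the
# 5-bit field enables "stride" mode, bits 3..0 remain the offset
# (illustrative assumption, not a fixed encoding).

def unpack_c_offset(field5):
    assert 0 <= field5 < 32
    stride_enable = bool(field5 >> 4)
    offset4 = field5 & 0xF
    return stride_enable, offset4
```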

We also assume (including for the "stride" variant) that the "width"
parameter, which is missing, is derived and implicit, just as it is
with the standard Compressed LOAD/STORE instructions. For C.LW, C.LD
and C.LQ, the width is implicitly 4, 8 and 16 respectively, whilst for
C.FLW and C.FLD the width is implicitly 4 and 8 respectively.

Interestingly we note that the Vectorised Simple-V variant of
LOAD/STORE (Compressed and otherwise), due to it effectively using the
standard register file(s), is the direct functional equivalent of
standard load-multiple and store-multiple instructions found in other
processors.

Section 12.3 of the riscv-isa manual V2.3-draft notes (page 76): "For
virtual memory systems some data accesses could be resident
in physical memory and some not". The interesting question then arises:
how does RVV deal with the exact same scenario?
Expired U.S. Patent 5895501 (Filing Date Sep 3 1996) describes a method
of detecting early page / segmentation faults and adjusting the TLB
in advance, accordingly: other strategies are explored in the Appendix
Section "Virtual Memory Page Faults".

# Note on implementation of parallelism

One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
it is proposed that implementors be free to choose whether to implement
the Vector (or variable-width SIMD) parallelism as sequential operations
with a single ALU, fully parallel (if practical) with multiple ALUs, or
a hybrid combination of both.

In Broadcom's Videocore-IV, they chose hybrid, and called it "Virtual
Parallelism". They achieve a 16-way SIMD at an **instruction** level
by providing a combination of a 4-way parallel ALU *and* an externally
transparent loop that feeds 4 sequential sets of data into each of the
4 ALUs.
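
A toy model of this hybrid approach, using the 16-way / 4-lane numbers from
the Videocore-IV description above (the code itself is purely illustrative):

```python
# "Virtual Parallelism" toy model: a 16-element operation executed on
# 4 physical lanes over 4 sequential beats; narrower or wider hardware
# changes only `lanes`, not the instruction-level behaviour.

def virtual_parallel(op, a, b, lanes=4):
    out, beats = [], 0
    for start in range(0, len(a), lanes):
        out.extend(op(x, y) for x, y in zip(a[start:start + lanes],
                                            b[start:start + lanes]))
        beats += 1
    return out, beats

res, beats = virtual_parallel(lambda x, y: x + y,
                              list(range(16)), [1] * 16)
```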

Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.

It is absolutely critical to note that it is proposed that such choices MUST
be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
parallelism within the implementation, the end-user **should not care**
and in this way the performance benefits are gained but the ISA remains
straightforward. All that happens at the end of an instruction run is: some
parallel units (if there are any) would remain offline, completely
transparently to the ISA, the program, and the compiler.

The "SIMD considered harmful" trap of having huge complexity and extra
instructions to deal with corner-cases is thus avoided, and implementors
get to choose precisely where to focus and target the benefits of their
implementation efforts, without "extra baggage".

# CSRs <a name="csrs"></a>

There are a number of CSRs needed, which are used at the instruction
decode phase to re-interpret standard RV opcodes (a practice that has
precedent in the setting of MISA to enable / disable extensions).

* Integer Register N is Vector of length M: r(N) -> r(N..N+M-1)
* Integer Register N is of implicit bitwidth M (M=default,8,16,32,64)
* Floating-point Register N is Vector of length M: r(N) -> r(N..N+M-1)
* Floating-point Register N is of implicit bitwidth M (M=default,8,16,32,64)
* Integer Register N is a Predication Register (note: a key-value store)
* Vector Length CSR (VSETVL, VGETVL)

Notes:

* for the purposes of LOAD / STORE, Integer Registers which are
  marked as a Vector will result in a Vector LOAD / STORE.
* Vector Lengths are *not* the same as vsetl but are an integral part
  of vsetl.
* Actual vector length is *multiplied* by how many blocks of length
  "bitwidth" may fit into an XLEN-sized register file.
* Predication is a key-value store due to the implicit referencing,
  as opposed to having the predicate register explicitly in the instruction.

## Predication CSR

The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. The first entry is whether predication
is enabled. The second entry is whether the register index refers to a
floating-point or an integer register. The third entry is the index
of that register which is to be predicated (if referred to). The fourth entry
is the integer register that is treated as a bitfield, indexable by the
vector element index.

| RegNo | 11 | 10 | (9..5) | (4..0) |
| ----- | -- | -- | ------ | ------- |
| r0 | pren0 | i/f | regidx | predidx |
| r1 | pren1 | i/f | regidx | predidx |
| .. | pren.. | i/f | regidx | predidx |
| r15 | pren15 | i/f | regidx | predidx |

The Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    fp_pred_enabled[32];
    int_pred_enabled[32];
    for (i = 0; i < 16; i++)
      if CSRpred[i].pren:
        idx = CSRpred[i].regidx
        predidx = CSRpred[i].predidx
        if CSRpred[i].type == 0: # integer
          int_pred_enabled[idx] = 1
          int_pred_reg[idx] = predidx
        else:
          fp_pred_enabled[idx] = 1
          fp_pred_reg[idx] = predidx

So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store:

    if type(iop) == INT:
      pred_enabled = int_pred_enabled
      preg = int_pred_reg[rd]
    else:
      pred_enabled = fp_pred_enabled
      preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
      if (pred_enabled[rd] && [!]preg[i])
         (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
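
The indirect lookup above, restated as runnable Python (integer side only;
register-file behaviour is assumed for illustration): the CSR key-value
table is inverted into per-register arrays once, then every operation
consults those arrays by destination register number.

```python
# Runnable sketch of indirect predication via an inverted CSR table.
int_pred_enabled = [0] * 32   # per-register "is predicated" flag
int_pred_reg = [0] * 32       # per-register predicate-register index

def load_csr_table(entries):
    """entries: list of (pren, is_fp, regidx, predidx) tuples."""
    for pren, is_fp, regidx, predidx in entries:
        if pren and not is_fp:
            int_pred_enabled[regidx] = 1
            int_pred_reg[regidx] = predidx

def pred_add(regs, rd, rs1, rs2, vl):
    """Element-wise add, masked by rd's implicitly-looked-up predicate."""
    mask = regs[int_pred_reg[rd]] if int_pred_enabled[rd] else ~0
    for i in range(vl):
        if (mask >> i) & 1:
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]

regs = list(range(32))
load_csr_table([(1, 0, 8, 5)])    # ops writing r8 are masked by r5's bits
regs[5] = 0b0011                  # only elements 0 and 1 enabled
pred_add(regs, rd=8, rs1=16, rs2=24, vl=4)
```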

## MAXVECTORDEPTH

MAXVECTORDEPTH is the same concept as MVL in RVV. However in Simple-V,
given that its primary (base, unextended) purpose is for 3D, Video and
other purposes (not requiring supercomputing capability), it makes sense
to limit MAXVECTORDEPTH to the regfile bitwidth (32 for RV32, 64 for RV64
and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning out
over several registers. This keeps the implementation a little simpler.
Note that RVV on top of Simple-V may choose to over-ride this decision.

## Vector-length CSRs

Vector lengths are interpreted as meaning "any instruction referring to
r(N) generates implicit identical instructions referring to registers
r(N) through r(N+M-1), where M is the Vector Length". Vector Lengths may
be set to use up to 16 registers in the register file.

One separate CSR table is needed for each of the integer and floating-point
register files:

| RegNo | (3..0) |
| ----- | ------ |
| r0 | vlen0 |
| r1 | vlen1 |
| .. | vlen.. |
| r31 | vlen31 |

An array of 32 4-bit CSRs is needed (4 bits per register) to indicate
whether a register was, if referred to in any standard instructions,
implicitly to be treated as a vector. A vector length of 1 indicates
that it is to be treated as a scalar. Vector lengths of 0 are reserved.

Internally, implementations may choose to use the non-zero vector length
to set a bit-field per register, to be used in the instruction decode phase.
In this way any standard (current or future) operation involving
register operands may detect if the operation is to be vector-vector,
vector-scalar or scalar-scalar (standard) simply through a single
bit test.

Note that when using the "vsetl rs1, rs2" instruction (caveat: when the
bitwidth is specifically not set) it becomes:

    CSRvlength = MIN(MIN(CSRvectorlen[rs1], MAXVECTORDEPTH), rs2)

This is in contrast to RVV:

    CSRvlength = MIN(MIN(rs1, MAXVECTORDEPTH), rs2)
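
Side by side as executable formulas (MAXVECTORDEPTH assumed to be 64, i.e.
RV64, per the MAXVECTORDEPTH section above):

```python
# The two vsetl calculations; the only difference is whether the
# per-register CSR vector-length (Simple-V) or rs1 itself (RVV) feeds MIN.
MAXVECTORDEPTH = 64   # assumed RV64 limit (see MAXVECTORDEPTH section)

def simplev_vsetl(csr_vectorlen_rs1, rs2):
    return min(min(csr_vectorlen_rs1, MAXVECTORDEPTH), rs2)

def rvv_vsetl(rs1, rs2):
    return min(min(rs1, MAXVECTORDEPTH), rs2)
```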

## Element (SIMD) bitwidth CSRs

Element bitwidths may be specified with a per-register CSR, and indicate
how a register (integer or floating-point) is to be subdivided.