X-Git-Url: https://git.libre-soc.org/?a=blobdiff_plain;f=simple_v_extension.mdwn;h=0642ce926d702469769bae2210773b1035398a95;hb=0aaf14ff15c8e21add212e97680868980bedc761;hp=6599f930bf7f6aa597e8405b61cc967fecb381b7;hpb=54878b3c98f187d978adfcf1b96cae0b1485b255;p=libreriscv.git

diff --git a/simple_v_extension.mdwn b/simple_v_extension.mdwn
index 6599f930b..0642ce926 100644
--- a/simple_v_extension.mdwn
+++ b/simple_v_extension.mdwn
@@ -1,16 +1,23 @@
 # Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal
 
+**Note: this document is out of date and records early ideas and discussions**
+
 Key insight: Simple-V is intended as an abstraction layer to provide
 a consistent "API" to parallelisation of existing *and future* operations.
 *Actual* internal hardware-level parallelism is *not* required, such
 that Simple-V may be viewed as providing a "compact" or "consolidated"
 means of issuing multiple near-identical arithmetic instructions to an
-instruction queue (FILO), pending execution.
+instruction queue (FIFO), pending execution.
 
 *Actual* parallelism, if added independently of Simple-V in the form
 of Out-of-order restructuring (including parallel ALU lanes) or VLIW
-implementations, or SIMD, or anything else, would then benefit *if*
-Simple-V was added on top.
+implementations, or SIMD, or anything else, would then benefit from
+the uniformity of a consistent API.
+
+**No arithmetic operations are added or required to be added.** SV is purely
+a parallelism API and consequently is suitable for use even with RV32E.
+
+* Talk slides: 
+* Specification: now moved to its own page: [[specification]]
 
 [[!toc ]]
 
@@ -21,7 +28,7 @@ requirements: power-conscious, area-conscious, and performance-conscious
 designs all pull an ISA and its implementation in different conflicting
 directions, as do the specific intended uses for any given implementation.
 
-Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
+The existing P (SIMD) proposal and the V (Vector) proposals,
 whilst each extremely powerful in their own right and clearly desirable,
 are also:
 
@@ -31,15 +38,36 @@ are also:
   analysis and review purposes) prohibitively expensive
 * Both contain partial duplication of pre-existing RISC-V instructions
   (an undesirable characteristic)
-* Both have independent and disparate methods for introducing parallelism
-  at the instruction level.
+* Both have independent, incompatible and disparate methods for introducing
+  parallelism at the instruction level
 * Both require that their respective parallelism paradigm be implemented
   along-side and integral to their respective functionality *or not at all*.
 * Both independently have methods for introducing parallelism that
   could, if separated, benefit
   *other areas of RISC-V not just DSP or Floating-point respectively*.
 
-Therefore it makes a huge amount of sense to have a means and method
+There are also key differences between Vectorisation and SIMD (full
+details outlined in the Appendix), the key points being:
+
+* SIMD has an extremely seductive ease-of-implementation argument:
+  each operation is passed to the ALU, which is where the parallelism
+  lies. There is *negligible* (if any) impact on the rest of the core
+  (with life instead being made hell for compiler writers and applications
+  writers due to extreme ISA proliferation).
+* By contrast, Vectorisation has quite some complexity (in exchange for
+  considerable flexibility, reduction in opcode proliferation and much more).
+* Vectorisation typically includes much more comprehensive memory load + and store schemes (unit stride, constant-stride and indexed), which + in turn have ramifications: virtual memory misses (TLB cache misses) + and even multiple page-faults... all caused by a *single instruction*, + yet with a clear benefit that the regularisation of LOAD/STOREs can + be optimised for minimal impact on caches and maximised throughput. +* By contrast, SIMD can use "standard" memory load/stores (32-bit aligned + to pages), and these load/stores have absolutely nothing to do with the + SIMD / ALU engine, no matter how wide the operand. Simplicity but with + more impact on instruction and data caches. + +Overall it makes a huge amount of sense to have a means and method of introducing instruction parallelism in a flexible way that provides implementors with the option to choose exactly where they wish to offer performance improvements and where they wish to optimise for power @@ -59,23 +87,18 @@ of opcodes utilised by each of P and V as they currently stand, leveraging existing RISC-V opcodes where possible, and also potentially allowing P and V to make use of Compressed Instructions as a result. -**TODO**: propose overflow registers be actually one of the integer regs -(flowing to multiple regs). - -**TODO**: propose "mask" (predication) registers likewise. combination with -standard RV instructions and overflow registers extremely powerful, see -Aspex ASP. - # Analysis and discussion of Vector vs SIMD -There are five combined areas between the two proposals that help with -parallelism without over-burdening the ISA with a huge proliferation of +There are six combined areas between the two proposals that help with +parallelism (increased performance, reduced power / area) without +over-burdening the ISA with a huge proliferation of instructions: * Fixed vs variable parallelism (fixed or variable "M" in SIMD) * Implicit vs fixed instruction bit-width (integral to instruction or not) * Implicit vs explicit type-conversion (compounded on bit-width) * Implicit vs explicit inner loops. +* Single-instruction LOAD/STORE. * Masks / tagging (selecting/preventing certain indexed elements from execution) The pros and cons of each are discussed and analysed below. @@ -110,7 +133,8 @@ reducing power consumption for the same. SIMD again has a severe disadvantage here, over Vector: huge proliferation of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and have to then have operations *for each and between each*. It gets very -messy, very quickly. +messy, very quickly: *six* separate dimensions giving an O(N^6) instruction +proliferation profile. The V-Extension on the other hand proposes to set the bit-width of future instructions on a per-register basis, such that subsequent instructions @@ -122,7 +146,7 @@ burdensome to implementations, given that instruction decode already has to direct the operation to a correctly-sized width ALU engine, anyway. Not least: in places where an ISA was previously constrained (due for -whatever reason, including limitations of the available operand spcace), +whatever reason, including limitations of the available operand space), implicit bit-width allows the meaning of certain operations to be type-overloaded *without* pollution or alteration of frozen and immutable instructions, in a fully backwards-compatible fashion. 
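+
+To make the implicit-bitwidth point concrete, here is a minimal, purely
+illustrative C sketch (not part of the proposal) of how a decode stage
+might extract element i from the standard register file once a CSR marks
+a register as containing packed elements. The vew encoding and
+"bytestable" follow the table given later in this document; all names
+are invented, and element widths greater than XLEN are omitted.
+
+    #include <stdint.h>
+
+    #define XLEN 64
+
+    /* element width in bytes, indexed by the 3-bit vew CSR field */
+    static const int bytestable[8] = { XLEN / 8, 1, 2, 4, 8, 16, 0, 0 };
+
+    /* fetch element i of the vector starting at register rnum */
+    uint64_t get_element(const uint64_t regs[32], int rnum, int i, int vew)
+    {
+        int bytes = bytestable[vew];
+        if (bytes == 0 || bytes * 8 > XLEN)
+            return 0;                      /* rsvd / >XLEN: not sketched */
+        int perreg  = (XLEN / 8) / bytes;  /* elements packed per reg    */
+        int regidx  = i / perreg;          /* how far along the vector   */
+        int elidx   = i % perreg;          /* element within that reg    */
+        uint64_t v  = regs[rnum + regidx] >> (elidx * bytes * 8);
+        if (bytes * 8 < 64)
+            v &= (1ULL << (bytes * 8)) - 1;
+        return v;                          /* same opcode, new meaning   */
+    }
+
+The point of the sketch is that the *opcode* is untouched: only the
+decode-stage interpretation of the register file changes.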
@@ -173,6 +197,33 @@ applied to embedded processors" (ZOLC), optimising
 only the single inner loop seems inadequate, tending to suggest that ZOLC
 may be better off being proposed as an entirely separate Extension.
 
+## Single-instruction LOAD/STORE
+
+In traditional Vector Architectures there are single instructions which
+give rise to multiple register-memory transfer operations. They are
+complicated to implement in hardware, yet the benefits are a huge,
+consistent regularisation of memory accesses that can be highly optimised
+with respect to both actual memory and any L1, L2 or other caches. In
+Hwacha EECS-2015-263 the consequences of getting this architecturally
+wrong are made explicitly clear: L2 cache-thrashing at the very least.
+
+Complications arise when Virtual Memory is involved: TLB cache misses
+need to be dealt with, as do page faults. Some of the tradeoffs are
+discussed in Krste Asanović's thesis
+<http://people.eecs.berkeley.edu/~krste/thesis.pdf>, Section 4.6, and
+an article by Jeff Bush describing how he faced some of these issues
+is particularly enlightening.
+
+Interestingly, none of this complexity is faced in SIMD architectures...
+but then they do not get the opportunity to optimise for highly-streamlined
+memory accesses either.
+
+With the "bang-per-buck" ratio being so high and the indirect improvement
+in L1 Instruction Cache usage (reduced instruction count), as well as
+the opportunity to optimise L1 and L2 cache usage, the case for including
+Vector LOAD/STORE is compelling.
+
 ## Mask and Tagging (Predication)
 
 Tagging (aka Masks aka Predication) is a pseudo-method of implementing
@@ -192,8 +243,8 @@ So these are the ways in which conditional execution may be implemented:
 
 * explicit compare and branch: BNE x, y -> offs would jump offs
   instructions if x was not equal to y
 * explicit store of tag condition: CMP x, y -> tagbit
-* implicit (condition-code) ADD results in a carry, carry bit implicitly
-  (or sometimes explicitly) goes into a "tag" (mask) register
+* implicit (condition-code): ADD results in a carry, and the carry bit
+  implicitly (or sometimes explicitly) goes into a "tag" (mask) register
 
 The first of these is a "normal" branch method, which is flat-out impossible
 to parallelise without look-ahead and effectively rewriting instructions.
@@ -248,6 +299,7 @@ follows:
 * Implicit (indirect) vs fixed (integral) instruction bit-width: indirect
 * Implicit vs explicit type-conversion: explicit
 * Implicit vs explicit inner loops: implicit but best done separately
+* Single-instruction Vector LOAD/STORE: Complex but highly beneficial
 * Tag or no-tag: Complex but highly beneficial
 
 In particular:
@@ -263,464 +315,329 @@ In particular:
   i.e. *without* requiring a super-scalar or out-of-order architecture,
   but doing a proper, full job (ZOLC) is an entirely different matter.
 
-Constructing a SIMD/Simple-Vector proposal based around four of these five
+Constructing a SIMD/Simple-Vector proposal based around four of these six
 requirements would therefore seem to be a logical thing to do.
 
-# Instruction Format
+# Note on implementation of parallelism
 
-The instruction format for Simple-V does not actually have *any* compare
-operations, *any* arithmetic, floating point or memory instructions.
-Instead it *overloads* pre-existing branch operations into predicated
-variants, and implicitly overloads arithmetic operations and LOAD/STORE
-depending on implicit CSR configurations for both vector length and
-bitwidth. This includes Compressed instructions.
+One extremely important aspect of this proposal is to respect and support
+implementors' desire to focus on power, area or performance. In that regard,
+it is proposed that implementors be free to choose whether to implement
+the Vector (or variable-width SIMD) parallelism as sequential operations
+with a single ALU, fully parallel (if practical) with multiple ALUs, or
+a hybrid combination of both.
 
-* For analysis of RVV see [[v_comparative_analysis]] which begins to
-  outline topologically-equivalent mappings of instructions
-* Also see Appendix "Retro-fitting Predication into branch-explicit ISA"
-  for format of Branch opcodes.
+Broadcom's Videocore-IV chose the hybrid approach, calling it "Virtual
+Parallelism". It achieves 16-way SIMD at an **instruction** level
+by providing a combination of a 4-way parallel ALU *and* an externally
+transparent loop that feeds 4 sequential sets of data into each of the
+4 ALUs.
 
-**TODO**: *analyse and decide whether the implicit nature of predication
-as proposed is or is not a lot of hassle, and if explicit prefixes are
-a better idea instead. Parallelism therefore effectively may end up
-as always being 64-bit opcodes (32 for the prefix, 32 for the instruction)
-with some opportunities for to use Compressed bringing it down to 48.
-Also to consider is whether one or both of the last two remaining Compressed
-instruction codes in Quadrant 1 could be used as a parallelism prefix,
-bringing parallelised opcodes down to 32-bit and having the benefit of
-being explicit.*
+Also in the same core, it is worth noting that particularly uncommon
+but essential operations (Reciprocal-Square-Root for example) are
+*not* part of the 4-way parallel ALU but instead have a *single* ALU.
+Under the proposed Vector (variable-width SIMD) scheme, implementors would
+be free to do precisely that: i.e. free to choose *on a per operation
+basis* whether and how much "Virtual Parallelism" to deploy.
 
-## Branch Instruction:
+It is absolutely critical to note that it is proposed that such choices MUST
+be **entirely transparent** to the end-user and the compiler. Whilst
+a Vector (variable-width SIMD) may not precisely match the width of the
+parallelism within the implementation, the end-user **should not care**,
+and in this way the performance benefits are gained but the ISA remains
+straightforward. All that happens at the end of an instruction run is: some
+parallel units (if there are any) would remain offline, completely
+transparently to the ISA, the program, and the compiler.
 
-[[!table data="""
-31 | 30 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 8 | 7 | 6 ... 
0 | -imm[12] | imm[10:5]| rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode | -1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 | -I/F | reserved | src2 | src1 | BPR | predicate rs3 || BRANCH | -0 | reserved | src2 | src1 | 000 | predicate rs3 || BEQ | -0 | reserved | src2 | src1 | 001 | predicate rs3 || BNE | -0 | reserved | src2 | src1 | 010 | predicate rs3 || rsvd | -0 | reserved | src2 | src1 | 011 | predicate rs3 || rsvd | -0 | reserved | src2 | src1 | 100 | predicate rs3 || BLE | -0 | reserved | src2 | src1 | 101 | predicate rs3 || BGE | -0 | reserved | src2 | src1 | 110 | predicate rs3 || BLTU | -0 | reserved | src2 | src1 | 111 | predicate rs3 || BGEU | -1 | reserved | src2 | src1 | 000 | predicate rs3 || FEQ | -1 | reserved | src2 | src1 | 001 | predicate rs3 || FNE | -1 | reserved | src2 | src1 | 010 | predicate rs3 || rsvd | -1 | reserved | src2 | src1 | 011 | predicate rs3 || rsvd | -1 | reserved | src2 | src1 | 100 | predicate rs3 || FLT | -1 | reserved | src2 | src1 | 101 | predicate rs3 || FLE | -1 | reserved | src2 | src1 | 110 | predicate rs3 || rsvd | -1 | reserved | src2 | src1 | 111 | predicate rs3 || rsvd | -"""]] +To make that clear: should an implementor choose a particularly wide +SIMD-style ALU, each parallel unit *must* have predication so that +the parallel SIMD ALU may emulate variable-length parallel operations. +Thus the "SIMD considered harmful" trap of having huge complexity and extra +instructions to deal with corner-cases is thus avoided, and implementors +get to choose precisely where to focus and target the benefits of their +implementation efforts, without "extra baggage". -In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given -for predicated compare operations of function "cmp": +In addition, implementors will be free to choose whether to provide an +absolute bare minimum level of compliance with the "API" (software-traps +when vectorisation is detected), all the way up to full supercomputing +level all-hardware parallelism. Options are covered in the Appendix. - for (int i=0; i 1; - s2 = CSRvectorlen[src2] > 1; - for (int i=0; i What does an ADD of two different-sized vectors do in simple-V? -Notes: +* if the two source operands are not the same, throw an exception. +* if the destination operand is also a vector, and the source is longer + than the destination, throw an exception. -* LOAD remains functionally (topologically) identical to RVV LOAD - (for both integer and floating-point variants). -* Predication CSR-marking register is not explicitly shown in instruction, it's - implicit based on the CSR predicate state for the rd (destination) register -* rs2, the source, may *also be marked as a vector*, which implicitly - is taken to indicate "Indexed Load" (LD.X) -* Bit 30 indicates "element stride" or "constant-stride" (LD or LD.S) -* Bit 31 is reserved (ideas under consideration: auto-increment) -* **TODO**: include CSR SIMD bitwidth in the pseudo-code below. -* **TODO**: clarify where width maps to elsize +> And what about instructions like JALR?  +> What does jumping to a vector do? -Pseudo-code (excludes CSR SIMD bitwidth): +* Throw an exception. Whether that actually results in spawning threads + as part of the trap-handling remains to be seen. - if (unit-strided) stride = elsize; - else stride = areg[as2]; // constant-strided +# Under consideration - pred_enabled = int_pred_enabled - preg = int_pred_reg[rd] +From the Chennai 2018 slides the following issues were raised. +Efforts to analyse and answer these questions are below. 
+ +* Should future extra bank be included now? +* How many Register and Predication CSRs should there be? + (and how many in RV32E) +* How many in M-Mode (for doing context-switch)? +* Should use of registers be allowed to "wrap" (x30 x31 x1 x2)? +* Can CLIP be done as a CSR (mode, like elwidth) +* SIMD saturation (etc.) also set as a mode? +* Include src1/src2 predication on Comparison Ops? + (same arrangement as C.MV, with same flexibility/power) +* 8/16-bit ops is it worthwhile adding a "start offset"? + (a bit like misaligned addressing... for registers) + or just use predication to skip start? - for (int i=0; i - -There are a number of CSRs needed, which are used at the instruction -decode phase to re-interpret standard RV opcodes (a practice that has -precedent in the setting of MISA to enable / disable extensions). - -* Integer Register N is Vector of length M: r(N) -> r(N..N+M-1) -* Integer Register N is of implicit bitwidth M (M=default,8,16,32,64) -* Floating-point Register N is Vector of length M: r(N) -> r(N..N+M-1) -* Floating-point Register N is of implicit bitwidth M (M=default,8,16,32,64) -* Integer Register N is a Predication Register (note: a key-value store) -* Vector Length CSR (VSETVL, VGETVL) - -Notes: - -* for the purposes of LOAD / STORE, Integer Registers which are - marked as a Vector will result in a Vector LOAD / STORE. -* Vector Lengths are *not* the same as vsetl but are an integral part - of vsetl. -* Actual vector length is *multipled* by how many blocks of length - "bitwidth" may fit into an XLEN-sized register file. -* Predication is a key-value store due to the implicit referencing, - as opposed to having the predicate register explicitly in the instruction. - -## Predication CSR - -The Predication CSR is a key-value store indicating whether, if a given -destination register (integer or floating-point) is referred to in an -instruction, it is to be predicated. The first entry is whether predication -is enabled. The second entry is whether the register index refers to a -floating-point or an integer register. The third entry is the index -of that register which is to be predicated (if referred to). The fourth entry -is the integer register that is treated as a bitfield, indexable by the -vector element index. - -| RegNo | 6 | 5 | (4..0) | (4..0) | -| ----- | - | - | ------- | ------- | -| r0 | pren0 | i/f | regidx | predidx | -| r1 | pren1 | i/f | regidx | predidx | -| .. | pren.. | i/f | regidx | predidx | -| r15 | pren15 | i/f | regidx | predidx | - -The Predication CSR Table is a key-value store, so implementation-wise -it will be faster to turn the table around (maintain topologically -equivalent state): - - fp_pred_enabled[32]; - int_pred_enabled[32]; - for (i = 0; i < 16; i++) - if CSRpred[i].pren: - idx = CSRpred[i].regidx - predidx = CSRpred[i].predidx - if CSRpred[i].type == 0: # integer - int_pred_enabled[idx] = 1 - int_pred_reg[idx] = predidx - else: - fp_pred_enabled[idx] = 1 - fp_pred_reg[idx] = predidx - -So when an operation is to be predicated, it is the internal state that -is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following -pseudo-code for operations is given, where p is the explicit (direct) -reference to the predication register to be used: - - for (int i=0; i What does an ADD of two different-sized vectors do in simple-V? 
+* Implementors indicate chosen bitwidth support in Vector-bitwidth CSR + (caveat: anything not specified drops through to software-emulation / traps) +* TODO -* if the two source operands are not the same, throw an exception. -* if the destination operand is also a vector, and the source is longer - than the destination, throw an exception. +# Appendix -> And what about instructions like JALR?  -> What does jumping to a vector do? +## V-Extension to Simple-V Comparative Analysis -* Throw an exception. Whether that actually results in spawning threads - as part of the trap-handling remains to be seen. +This section has been moved to its own page [[v_comparative_analysis]] + +## P-Ext ISA + +This section has been moved to its own page [[p_comparative_analysis]] -# Comparison of "Traditional" SIMD, Alt-RVP, Simple-V and RVV Proposals +## Comparison of "Traditional" SIMD, Alt-RVP, Simple-V and RVV Proposals This section compares the various parallelism proposals as they stand, including traditional SIMD, in terms of features, ease of implementation, complexity, flexibility, and die area. -## [[alt_rvp]] +### [[harmonised_rvv_rvp]] + +This is an interesting proposal under development to retro-fit the AndesStar +P-Ext into V-Ext. + +### [[alt_rvp]] Primary benefit of Alt-RVP is the simplicity with which parallelism may be introduced (effective multiplication of regfiles and associated ALUs). @@ -743,16 +660,16 @@ may be introduced (effective multiplication of regfiles and associated ALUs). * minus: Access to registers across multiple lanes is challenging. "Solution" is to drop data into memory and immediately back in again (like MMX). -## Simple-V +### Simple-V Primary benefit of Simple-V is the OO abstraction of parallel principles from actual (internal) parallel hardware. It's an API in effect that's designed to be slotted in to an existing implementation (just after instruction decode) with minimum disruption and effort. -* minus: the complexity of having to use register renames, OoO, VLIW, - register file cacheing, all of which has been done before but is a - pain +* minus: the complexity (if full parallelism is to be exploited) + of having to use register renames, OoO, VLIW, register file cacheing, + all of which has been done before but is a pain * plus: transparent re-use of existing opcodes as-is just indirectly saying "this register's now a vector" which * plus: means that future instructions also get to be inherently @@ -782,7 +699,7 @@ instruction decode) with minimum disruption and effort. would be "no worse" than existing register renaming, OoO, VLIW and register file cacheing schemes. -## RVV (as it stands, Draft 0.4 Section 17, RISC-V ISA V2.3-Draft) +### RVV (as it stands, Draft 0.4 Section 17, RISC-V ISA V2.3-Draft) RVV is extremely well-designed and has some amazing features, including 2D reorganisation of memory through LOAD/STORE "strides". @@ -811,7 +728,7 @@ RVV is extremely well-designed and has some amazing features, including to be in high-performance specialist supercomputing (where it will be absolutely superb). -## Traditional SIMD +### Traditional SIMD The only really good things about SIMD are how easy it is to implement and get good performance. Unfortunately that makes it quite seductive... @@ -848,20 +765,20 @@ get good performance. Unfortunately that makes it quite seductive... * minor-saving-grace: some implementations *may* have predication masks that allow control over individual elements within the SIMD block. 
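+
+As a deliberately simplified illustration of the corner-case problem
+listed above, compare a fixed 4-wide SIMD loop against a
+vector-length-based loop. This is pseudo-C: simd_add4, setvl and vadd
+are invented stand-ins, with setvl behaving like RVV's VSETVL
+(returning MIN(requested, MVL)).
+
+    /* invented stand-ins: a real system would use intrinsics or asm */
+    void simd_add4(int *a, const int *b);
+    int  setvl(int remaining);
+    void vadd(int *a, const int *b, int vl);
+
+    void add_simd4(int *a, const int *b, int n)
+    {
+        int i;
+        for (i = 0; i + 4 <= n; i += 4)
+            simd_add4(&a[i], &b[i]);   /* hot loop: fixed 4-wide       */
+        for (; i < n; i++)             /* tail: separate clean-up code */
+            a[i] += b[i];
+    }
+
+    void add_vector(int *a, const int *b, int n)
+    {
+        for (int i = 0; i < n; ) {
+            int vl = setvl(n - i);     /* hardware picks MIN(n-i, MVL) */
+            vadd(&a[i], &b[i], vl);    /* tail is handled implicitly   */
+            i += vl;
+        }
+    }
+
+Every new SIMD width multiplies the number of such hot-loop/tail
+combinations (and opcodes); the vector-length form is unchanged no
+matter how wide the underlying ALU actually is.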
-# Comparison *to* Traditional SIMD: Alt-RVP, Simple-V and RVV Proposals +## Comparison *to* Traditional SIMD: Alt-RVP, Simple-V and RVV Proposals This section compares the various parallelism proposals as they stand, *against* traditional SIMD as opposed to *alongside* SIMD. In other words, the question is asked "How can each of the proposals effectively implement (or replace) SIMD, and how effective would they be"? -## [[alt_rvp]] +### [[alt_rvp]] * Alt-RVP would not actually replace SIMD but would augment it: just as with a SIMD architecture where the ALU becomes responsible for the parallelism, Alt-RVP ALUs would likewise be so responsible... with *additional* (lane-based) parallelism on top. -* Thus at least some of the downsides of SIMD ISA O(N^3) proliferation by +* Thus at least some of the downsides of SIMD ISA O(N^5) proliferation by at least one dimension are avoided (architectural upgrades introducing 128-bit then 256-bit then 512-bit variants of the exact same 64-bit SIMD block) @@ -876,7 +793,7 @@ the question is asked "How can each of the proposals effectively implement "swapping" instructions were then introduced, some of the disadvantages of SIMD could be mitigated. -## RVV +### RVV * RVV is designed to replace SIMD with a better paradigm: arbitrary-length parallelism. @@ -892,7 +809,7 @@ the question is asked "How can each of the proposals effectively implement implementation overhead of RVV were acceptable (compared to normal SIMD/DSP-style single-issue in-order simplicity). -## Simple-V +### Simple-V * Simple-V borrows hugely from RVV as it is intended to be easy to topologically transplant every single instruction from RVV (as @@ -937,55 +854,37 @@ the question is asked "How can each of the proposals effectively implement operations, all the while keeping a consistent ISA-level "API" irrespective of implementor design choices (or indeed actual implementations). -# Impementing V on top of Simple-V - -* Number of Offset CSRs extends from 2 -* Extra register file: vector-file -* Setup of Vector length and bitwidth CSRs now can specify vector-file - as well as integer or float file. -* Extend CSR tables (bitwidth) with extra bits -* TODO +### Example Instruction translation: -# Implementing P (renamed to DSP) on top of Simple-V +Instructions "ADD r7 r4 r4" would result in three instructions being +generated and placed into the FIFO. r7 and r4 are marked as "vectorised": -* Implementors indicate chosen bitwidth support in Vector-bitwidth CSR - (caveat: anything not specified drops through to software-emulation / traps) -* TODO +* ADD r7 r4 r4 +* ADD r8 r5 r5 +* ADD r9 r6 r6 -# Appendix - -## V-Extension to Simple-V Comparative Analysis +Instructions "ADD r7 r4 r1" would result in three instructions being +generated and placed into the FIFO. r7 and r1 are marked as "vectorised" +whilst r4 is not: -This section has been moved to its own page [[v_comparative_analysis]] - -## P-Ext ISA - -This section has been moved to its own page [[p_comparative_analysis]] +* ADD r7 r4 r1 +* ADD r8 r4 r2 +* ADD r9 r4 r3 ## Example of vector / vector, vector / scalar, scalar / scalar => vector add - register CSRvectorlen[XLEN][4]; # not quite decided yet about this one... 
- register CSRpredicate[XLEN][4]; # 2^4 is max vector length - register CSRreg_is_vectorised[XLEN]; # just for fun support scalars as well - register x[32][XLEN]; - - function op_add(rd, rs1, rs2, predr) - { -    /* note that this is ADD, not PADD */ -    int i, id, irs1, irs2; -    # checks CSRvectorlen[rd] == CSRvectorlen[rs] etc. ignored -    # also destination makes no sense as a scalar but what the hell... -    for (i = 0, id=0, irs1=0, irs2=0; i @@ -1013,10 +912,10 @@ There is, in the standard Conditional Branch instruction, more than adequate space to interpret it in a similar fashion: [[!table data=""" - 31 |30 ..... 25 |24 ... 20 | 19 ... 15 | 14 ...... 12 | 11 ....... 8 | 7 | 6 ....... 0 | -imm[12] | imm[10:5] | rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode | - 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 | - offset[12,10:5] || src2 | src1 | BEQ | offset[11,4:1] || BRANCH | +31 |30 ..... 25 |24..20|19..15| 14...12| 11.....8 | 7 | 6....0 | +imm[12] | imm[10:5] |rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode | + 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 | + offset[12,10:5] || src2 | src1 | BEQ | offset[11,4:1] || BRANCH | """]] This would become: @@ -1036,19 +935,19 @@ not only to add in a second source register, but also use some of the bits as a predication target as well. [[!table data=""" -15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 ................. 2 | 1 .. 0 | - funct3 | imm | rs10 | imm | op | - 3 | 3 | 3 | 5 | 2 | - C.BEQZ | offset[8,4:3] | src | offset[7:6,2:1,5] | C1 | +15..13 | 12 ....... 10 | 9...7 | 6 ......... 2 | 1 .. 0 | +funct3 | imm | rs10 | imm | op | +3 | 3 | 3 | 5 | 2 | +C.BEQZ | offset[8,4:3] | src | offset[7:6,2:1,5] | C1 | """]] Now uses the CS format: [[!table data=""" -15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 .. 5 | 4......... 2 | 1 .. 0 | - funct3 | imm | rs10 | imm | | op | - 3 | 3 | 3 | 2 | 3 | 2 | - C.BEQZ | predicate rs3 | src1 | I/F B | src2 | C1 | +15..13 | 12 . 10 | 9 .. 7 | 6 .. 5 | 4..2 | 1 .. 0 | +funct3 | imm | rs10 | imm | | op | +3 | 3 | 3 | 2 | 3 | 2 | +C.BEQZ | pred rs3 | src1 | I/F B | src2 | C1 | """]] Bit 6 would be decoded as "operation refers to Integer or Float" including @@ -1151,16 +1050,16 @@ still be respected*, making Simple-V in effect the "consistent public API". vew may be one of the following (giving a table "bytestable", used below): -| vew | bitwidth | -| --- | -------- | -| 000 | default | -| 001 | 8 | -| 010 | 16 | -| 011 | 32 | -| 100 | 64 | -| 101 | 128 | -| 110 | rsvd | -| 111 | rsvd | +| vew | bitwidth | bytestable | +| --- | -------- | ---------- | +| 000 | default | XLEN/8 | +| 001 | 8 | 1 | +| 010 | 16 | 2 | +| 011 | 32 | 4 | +| 100 | 64 | 8 | +| 101 | 128 | 16 | +| 110 | rsvd | rsvd | +| 111 | rsvd | rsvd | Pseudocode for vector length taking CSR SIMD-bitwidth into account: @@ -1181,15 +1080,6 @@ To index an element in a register rnum where the vector element index is i: byteidx * 8, # low byteidx * 8 + (vew-1), # high -### Example Instruction translation: - -Instructions "ADD r2 r4 r4" would result in three instructions being -generated and placed into the FILO: - -* ADD r2 r4 r4 -* ADD r2 r5 r5 -* ADD r2 r6 r6 - ### Insights SIMD register file splitting still to consider. For RV64, benefits of doubling @@ -1287,7 +1177,7 @@ So the question boils down to: Whilst the above may seem to be severe minuses, there are some strong pluses: -* Significant reduction of V's opcode space: over 85%. +* Significant reduction of V's opcode space: over 95%. * Smaller reduction of P's opcode space: around 10%. 
* The potential to use Compressed instructions in both Vector and SIMD due to the overloading of register meaning (implicit vectorisation, @@ -1316,6 +1206,514 @@ The nice thing about a vector architecture is that you *know* that to optimise L1/L2 cache-line usage (avoid thrashing), strangely enough by *introducing* deliberate latency into the execution phase. +## Overflow registers in combination with predication + +**TODO**: propose overflow registers be actually one of the integer regs +(flowing to multiple regs). + +**TODO**: propose "mask" (predication) registers likewise. combination with +standard RV instructions and overflow registers extremely powerful, see +Aspex ASP. + +When integer overflow is stored in an easily-accessible bit (or another +register), parallelisation turns this into a group of bits which can +potentially be interacted with in predication, in interesting and powerful +ways. For example, by taking the integer-overflow result as a predication +field and shifting it by one, a predicated vectorised "add one" can emulate +"carry" on arbitrary (unlimited) length addition. + +However despite RVV having made room for floating-point exceptions, neither +RVV nor base RV have taken integer-overflow (carry) into account, which +makes proposing it quite challenging given that the relevant (Base) RV +sections are frozen. Consequently it makes sense to forgo this feature. + +## Context Switch Example + +An unusual side-effect of Simple-V mapping onto the standard register files +is that LOAD-multiple and STORE-multiple are accidentally available, as long +as it is acceptable that the register(s) to be loaded/stored are contiguous +(per instruction). An additional accidental benefit is that Compressed LD/ST +may also be used. + +To illustrate how this works, here is some example code from FreeRTOS +(GPLv2 licensed, portasm.S): + + /* Macro for saving task context */ + .macro portSAVE_CONTEXT + .global pxCurrentTCB + /* make room in stack */ + addi sp, sp, -REGBYTES * 32 + + /* Save Context */ + STORE x1, 0x0(sp) + STORE x2, 1 * REGBYTES(sp) + STORE x3, 2 * REGBYTES(sp) + ... + ... + STORE x30, 29 * REGBYTES(sp) + STORE x31, 30 * REGBYTES(sp) + + /* Store current stackpointer in task control block (TCB) */ + LOAD t0, pxCurrentTCB //pointer + STORE sp, 0x0(t0) + .endm + + /* Saves current error program counter (EPC) as task program counter */ + .macro portSAVE_EPC + csrr t0, mepc + STORE t0, 31 * REGBYTES(sp) + .endm + + /* Saves current return adress (RA) as task program counter */ + .macro portSAVE_RA + STORE ra, 31 * REGBYTES(sp) + .endm + + /* Macro for restoring task context */ + .macro portRESTORE_CONTEXT + + .global pxCurrentTCB + /* Load stack pointer from the current TCB */ + LOAD sp, pxCurrentTCB + LOAD sp, 0x0(sp) + + /* Load task program counter */ + LOAD t0, 31 * REGBYTES(sp) + csrw mepc, t0 + + /* Run in machine mode */ + li t0, MSTATUS_PRV1 + csrs mstatus, t0 + + /* Restore registers, + Skip global pointer because that does not change */ + LOAD x1, 0x0(sp) + LOAD x4, 3 * REGBYTES(sp) + LOAD x5, 4 * REGBYTES(sp) + ... + ... 
+ LOAD x30, 29 * REGBYTES(sp) + LOAD x31, 30 * REGBYTES(sp) + + addi sp, sp, REGBYTES * 32 + mret + .endm + +The important bits are the Load / Save context, which may be replaced +with firstly setting up the Vectors and secondly using a *single* STORE +(or LOAD) including using C.ST or C.LD, to indicate that the entire +bank of registers is to be loaded/saved: + + /* a few things are assumed here: (a) that when switching to + M-Mode an entirely different set of CSRs is used from that + which is used in U-Mode and (b) that the M-Mode x1 and x4 + vectors are also not used anywhere else in M-Mode, consequently + only need to be set up just the once. + */ + .macroVectorSetup + MVECTORCSRx1 = 31, defaultlen + MVECTORCSRx4 = 28, defaultlen + + /* Save Context */ + SETVL x0, x0, 31 /* x0 ignored silently */ + STORE x1, 0x0(sp) // x1 marked as 31-long vector of default bitwidth + + /* Restore registers, + Skip global pointer because that does not change */ + LOAD x1, 0x0(sp) + SETVL x0, x0, 28 /* x0 ignored silently */ + LOAD x4, 3 * REGBYTES(sp) // x4 marked as 28-long default bitwidth + +Note that although it may just be a bug in portasm.S, x2 and x3 appear not +to be being restored. If however this is a bug and they *do* need to be +restored, then the SETVL call may be moved to *outside* the Save / Restore +Context assembly code, into the macroVectorSetup, as long as vectors are +never used anywhere else (i.e. VL is never altered by M-Mode). + +In effect the entire bank of repeated LOAD / STORE instructions is replaced +by one single (compressed if it is available) instruction. + +## Virtual Memory page-faults on LOAD/STORE + + +### Notes from conversations + +> I was going through the C.LOAD / C.STORE section 12.3 of V2.3-Draft +> riscv-isa-manual in order to work out how to re-map RVV onto the standard +> ISA, and came across an interesting comments at the bottom of pages 75 +> and 76: + +> " A common mechanism used in other ISAs to further reduce save/restore +> code size is load- multiple and store-multiple instructions. " + +> Fascinatingly, due to Simple-V proposing to use the *standard* register +> file, both C.LOAD / C.STORE *and* LOAD / STORE would in effect be exactly +> that: load-multiple and store-multiple instructions. Which brings us +> on to this comment: + +> "For virtual memory systems, some data accesses could be resident in +> physical memory and +> some could not, which requires a new restart mechanism for partially +> executed instructions." + +> Which then of course brings us to the interesting question: how does RVV +> cope with the scenario when, particularly with LD.X (Indexed / indirect +> loads), part-way through the loading a page fault occurs? + +> Has this been noted or discussed before? + +For applications-class platforms, the RVV exception model is +element-precise (that is, if an exception occurs on element j of a +vector instruction, elements 0..j-1 have completed execution and elements +j+1..vl-1 have not executed). + +Certain classes of embedded platforms where exceptions are always fatal +might choose to offer resumable/swappable interrupts but not precise +exceptions. + + +> Is RVV designed in any way to be re-entrant? + +Yes. + + +> What would the implications be for instructions that were in a FIFO at +> the time, in out-of-order and VLIW implementations, where partial decode +> had taken place? + +The usual bag of tricks for maintaining precise exceptions applies to +vector machines as well. 
Register renaming makes the job easier, and +it's relatively cheaper for vectors, since the control cost is amortized +over longer registers. + + +> Would it be reasonable at least to say *bypass* (and freeze) the +> instruction FIFO (drop down to a single-issue execution model temporarily) +> for the purposes of executing the instructions in the interrupt (whilst +> setting up the VM page), then re-continue the instruction with all +> state intact? + +This approach has been done successfully, but it's desirable to be +able to swap out the vector unit state to support context switches on +exceptions that result in long-latency I/O. + + +> Or would it be better to switch to an entirely separate secondary +> hyperthread context? + +> Does anyone have any ideas or know if there is any academic literature +> on solutions to this problem? + +The Vector VAX offered imprecise but restartable and swappable exceptions: +http://mprc.pku.edu.cn/~liuxianhua/chn/corpus/Notes/articles/isca/1990/VAX%20vector%20architecture.pdf + +Sec. 4.6 of Krste's dissertation assesses some of +the tradeoffs and references a bunch of related work: +http://people.eecs.berkeley.edu/~krste/thesis.pdf + + +---- + +Started reading section 4.6 of Krste's thesis, noted the "IEE85 F.P +exceptions" and thought, "hmmm that could go into a CSR, must re-read +the section on FP state CSRs in RVV 0.4-Draft again" then i suddenly +thought, "ah ha! what if the memory exceptions were, instead of having +an immediate exception thrown, were simply stored in a type of predication +bit-field with a flag "error this element failed"? + +Then, *after* the vector load (or store, or even operation) was +performed, you could *then* raise an exception, at which point it +would be possible (yes in software... I know....) to go "hmmm, these +indexed operations didn't work, let's get them into memory by triggering +page-loads", then *re-run the entire instruction* but this time with a +"memory-predication CSR" that stops the already-performed operations +(whether they be loads, stores or an arithmetic / FP operation) from +being carried out a second time. + +This theoretically could end up being done multiple times in an SMP +environment, and also for LD.X there would be the remote outside annoying +possibility that the indexed memory address could end up being modified. + +The advantage would be that the order of execution need not be +sequential, which potentially could have some big advantages. +Am still thinking through the implications as any dependent operations +(particularly ones already decoded and moved into the execution FIFO) +would still be there (and stalled). hmmm. + +---- + + > > # assume internal parallelism of 8 and MAXVECTORLEN of 8 + > > VSETL r0, 8 + > > FADD x1, x2, x3 + > + > > x3[0]: ok + > > x3[1]: exception + > > x3[2]: ok + > > ... + > > ... + > > x3[7]: ok + > + > > what happens to result elements 2-7?  those may be *big* results + > > (RV128) + > > or in the RVV-Extended may be arbitrary bit-widths far greater. + > + >  (you replied:) + > + > Thrown away. + +discussion then led to the question of OoO architectures + +> The costs of the imprecise-exception model are greater than the benefit. +> Software doesn't want to cope with it.  It's hard to debug.  You can't +> migrate state between different microarchitectures--unless you force all +> implementations to support the same imprecise-exception model, which would +> greatly limit implementation flexibility.  
(Less important, but still +> relevant, is that the imprecise model increases the size of the context +> structure, as the microarchitectural guts have to be spilled to memory.) + +## Zero/Non-zero Predication + +>> >  it just occurred to me that there's another reason why the data +>> > should be left instead of zeroed.  if the standard register file is +>> > used, such that vectorised operations are translated to mean "please +>> > insert multiple register-contiguous operations into the instruction +>> > FIFO" and predication is used to *skip* some of those, then if the +>> > next "vector" operation uses the (standard) registers that were masked +>> > *out* of the previous operation it may proceed without blocking. +>> > +>> >  if however zeroing is made mandatory then that optimisation becomes +>> > flat-out impossible to deploy. +>> > +>> >  whilst i haven't fully thought through the full implications, i +>> > suspect RVV might also be able to benefit by being able to fit more +>> > overlapping operations into the available SRAM by doing something +>> > similar. +> +> +> Luke, this is called density time masking. It doesn’t apply to only your +> model with the “standard register file” is used. it applies to any +> architecture that attempts to speed up by skipping computation and writeback +> of masked elements. +> +> That said, the writing of zeros need not be explicit. It is possible to add +> a “zero bit” per element that, when set, forces a zero to be read from the +> vector (although the underlying storage may have old data). In this case, +> there may be a way to implement DTM as well. + + +## Implementation detail for scalar-only op detection + +Note 1: this idea is a pipeline-bypass concept, which may *or may not* be +worthwhile. + +Note 2: this is just one possible implementation. Another implementation +may choose to treat *all* operations as vectorised (including treating +scalars as vectors of length 1), choosing to add an extra pipeline stage +dedicated to *all* instructions. + +This section *specifically* covers the implementor's freedom to choose +that they wish to minimise disruption to an existing design by detecting +"scalar-only operations", bypassing the vectorisation phase (which may +or may not require an additional pipeline stage) + +[[scalardetect.png]] + +>> For scalar ops an implementation may choose to compare 2-3 bits through an +>> AND gate: are src & dest scalar? Yep, ok send straight to ALU  (or instr +>> FIFO). + +> Those bits cannot be known until after the registers are decoded from the +> instruction and a lookup in the "vector length table" has completed. +> Considering that one of the reasons RISC-V keeps registers in invariant +> positions across all instructions is to simplify register decoding, I expect +> that inserting an SRAM read would lengthen the critical path in most +> implementations. + +reply: + +> briefly: the trick i mentioned about ANDing bits together to check if +> an op was fully-scalar or not was to be read out of a single 32-bit +> 3R1W SRAM (64-bit if FPU exists). the 32/64-bit SRAM contains 1 bit per +> register indicating "is register vectorised yes no". 3R because you need +> to check src1, src2 and dest simultaneously. the entries are *generated* +> from the CSRs and are an optimisation that on slower embedded systems +> would likely not be needed. + +> is there anything unreasonable that anyone can foresee about that? +> what are the down-sides? 
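+
+Below is a minimal sketch of the 3R "is it scalar?" check described in
+the exchange above, assuming a 32-bit bitmap with one bit per integer
+register, regenerated whenever the CSR table changes. The names are
+illustrative only; in hardware this would be a 3-read-port SRAM plus a
+handful of gates, not C.
+
+    #include <stdbool.h>
+    #include <stdint.h>
+
+    /* bit n set => integer register x[n] is currently marked vectorised;
+       regenerated from the CSR key-value table on every CSR update */
+    static uint32_t int_vec_bitmap;
+
+    /* read 3 bits (dest, src1, src2) and OR them together: if the result
+       is zero the operation is pure-scalar and may bypass the
+       vectorisation phase (or its extra pipeline stage) entirely */
+    bool is_scalar_only(unsigned rd, unsigned rs1, unsigned rs2)
+    {
+        uint32_t hit = ((int_vec_bitmap >> rd)  |
+                        (int_vec_bitmap >> rs1) |
+                        (int_vec_bitmap >> rs2)) & 1;
+        return hit == 0;
+    }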
+ +## C.MV predicated src, predicated dest + +> Can this be usefully defined in such a way that it is +> equivalent to vector gather-scatter on each source, followed by a +> non-predicated vector-compare, followed by vector gather-scatter on the +> result? + +## element width conversion: restrict or remove? + +summary: don't restrict / remove. it's fine. + +> > it has virtually no cost/overhead as long as you specify +> > that inputs can only upconvert, and operations are always done at the +> > largest size, and downconversion only happens at the output. +> +> okaaay.  so that's a really good piece of implementation advice. +> algorithms do require data size conversion, so at some point you need to +> introduce the feature of upconverting and downconverting. +> +> > for int and uint, this is dead simple and fits well within the RVV pipeline +> > without any critical path, pipeline depth, or area implications. + + + +## Under review / discussion: remove CSR vector length, use VSETVL + +**DECISION: 11jun2018 - CSR vector length removed, VSETVL determines +length on all regs**. This section kept for historical reasons. + +So the issue is as follows: + +* CSRs are used to set the "span" of a vector (how many of the standard + register file to contiguously use) +* VSETVL in RVV works as follows: it sets the vector length (copy of which + is placed in a dest register), and if the "required" length is longer + than the *available* length, the dest reg is set to the MIN of those + two. +* **HOWEVER**... in SV, *EVERY* vector register has its own separate + length and thus there is no way (at the time that VSETVL is called) to + know what to set the vector length *to*. +* At first glance it seems that it would be perfectly fine to just limit + the vector operation to the length specified in the destination + register's CSR, at the time that each instruction is issued... + except that that cannot possibly be guaranteed to match + with the value *already loaded into the target register from VSETVL*. + +Therefore a different approach is needed. + +Possible options include: + +* Removing the CSR "Vector Length" and always using the value from + VSETVL. "VSETVL destreg, counterreg, #lenimmed" will set VL *and* + destreg equal to MIN(counterreg, lenimmed), with register-based + variant "VSETVL destreg, counterreg, lenreg" doing the same. +* Keeping the CSR "Vector Length" and having the lenreg version have + a "twist": "if lengreg is vectorised, read the length from the CSR" +* Other (TBD) + +The first option (of the ones brainstormed so far) is a lot simpler. +It does however mean that the length set in VSETVL will apply across-the-board +to all src1, src2 and dest vectorised registers until it is otherwise changed +(by another VSETVL call). This is probably desirable behaviour. + +## Implementation Paradigms + +TODO: assess various implementation paradigms. These are listed roughly +in order of simplicity (minimum compliance, for ultra-light-weight +embedded systems or to reduce design complexity and the burden of +design implementation and compliance, in non-critical areas), right the +way to high-performance systems. + +* Full (or partial) software-emulated (via traps): full support for CSRs + required, however when a register is used that is detected (in hardware) + to be vectorised, an exception is thrown. 
+* Single-issue In-order, reduced pipeline depth (traditional SIMD / DSP)
+* In-order 5+ stage pipelines with instruction FIFOs and mild register-renaming
+* Out-of-order with instruction FIFOs and aggressive register-renaming
+* VLIW
+
+Also to be taken into consideration:
+
+* "Virtual" vectorisation: single-issue loop, no internal ALU parallelism
+* Comprehensive vectorisation: FIFOs and internal parallelism
+* Hybrid Parallelism
+
+### Full or partial software-emulation
+
+The absolute, absolute minimal implementation is to provide the full
+set of CSRs and detection logic for when any of the source or destination
+registers are vectorised. On detection, a trap is thrown, whether it's
+a branch, LOAD, STORE, or an arithmetic operation.
+
+Implementors are entirely free to choose whether to allow absolutely every
+single operation to be software-emulated, or whether to provide some emulation
+and some hardware support. In particular, for an RV32E implementation
+where fast context-switching is a requirement (see "Context Switch Example"),
+it makes no sense to allow Vectorised-LOAD/STORE to be implemented as an
+exception, as every context-switch will result in double-traps.
+
+# TODO Research
+
+> For great floating point DSPs check TI’s C3x, C4X, and C6xx DSPs
+
+Idea: basic simple butterfly swap on a few element indices, primarily targeted
+at SIMD / DSP. High-byte low-byte swapping, high-word low-word swapping,
+perhaps allow reindexing of permutations up to 4 elements? 8? Reason:
+such operations are less costly than a full indexed-shuffle, which requires
+a separate instruction cycle.
+
+Predication "all zeros" needs to be "leave alone". Detection of
+ADD r1, rs1, rs0 cases results in a nop on predication index 0, whereas
+ADD r0, rs1, rs2 is actually a desirable copy from r2 into r0.
+Destruction of destination indices requires a copy of the entire vector
+in advance to avoid.
+
+TBD: floating-point compare and other exception handling
+
+------
+
+Multi-LR/SC
+
+Please don't try to use the L1 itself.
+
+Use the Load and Store buffers which capture instruction state prior
+to being accessed in the L1 (and prior to data arriving in the case of
+Store buffer).
+
+Also, use the L1 Miss buffers as these already HAVE to be snooped by
+coherence traffic. These are used to monitor that all participating
+cache lines remain interference free, and amalgamate same into a CPU
+signal accessible via branch or predicate.
+
+The Load buffers manage inbound traffic.
+The Store buffers manage outbound traffic.
+
+Done properly, the participating cache lines can exceed the associativity
+of the L1 cache without architectural harm (may incur additional latency).
+
+> > > so, let's say instead of another LR *cancelling* the load
+> > > reservation, the SMP core / hardware thread *blocks* for
+> > > up to 63 further instructions, waiting for the reservation
+> > > to clear.
+> >
+> > Can you explain what you mean by this paragraph?
+>
+> best put in sequential events, probably.
+>
+> LR <-- 64-instruction countdown starts here
+> ... 63
+> ... 62
+> LR same address <--- notes that core1 is on 61,
+>                      so pauses for **UP TO** 61 cycles
+> ... 32
+> SC <- core1 didn't reach zero, therefore valid, therefore
+>       core2 is now **UNBLOCKED**, is granted the
+>       load-reservation (and begins its **own** 64-cycle
+>       LR instruction countdown)
+> ... 63
+> ... 62
+> ...
+> ... 
+> SC <- also valid

Looks to me that you could effect the same functionality by simply
holding onto the cache line in core 1, preventing core 2 from
getting past the LR.

On the other hand, the freeze is similar to how the MP CRAYs did
ATOMIC stuff.

 # References
 
 * SIMD considered harmful 
@@ -1341,4 +1739,17 @@ by *introducing* deliberate latency into the execution phase.
 * Fast context save/restore proposal 
 * Register File Bank Cacheing 
 * Expired Patent on Vector Virtual Memory solutions
-
+* Discussion on RVV "re-entrant" capabilities allowing operations to be
+  restarted if an exception occurs (VM page-table miss)
+
+* Dot Product Vector 
+* RVV slides 2017 
+* Wavefront skipping using BRAMS 
+* Streaming Pipelines 
+* Barcelona SIMD Presentation 
+* 
+* Full Description (last page) of RVV instructions
+
+* PULP Low-energy Cluster Vector Processor