X-Git-Url: https://git.libre-soc.org/?a=blobdiff_plain;f=openpower%2Fsv%2Foverview.mdwn;h=9f37c0198a2187f20dfd4eccf9afc034714c135b;hb=35bbedd4f0a77290c27730201112c2222636ee6a;hp=066e005b630ae7e5b31486f56efe0a6007d4fa81;hpb=4be8001cf3a7b774ce27348f783cbc8f77052004;p=libreriscv.git diff --git a/openpower/sv/overview.mdwn b/openpower/sv/overview.mdwn index 066e005b6..9f37c0198 100644 --- a/openpower/sv/overview.mdwn +++ b/openpower/sv/overview.mdwn @@ -1,51 +1,82 @@ # SV Overview +**SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review. + This document provides an overview and introduction as to why SV (a -Cray-style Vector augmentation to OpenPOWER) exists, and how it works. +[[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works. + +**Sponsored by NLnet under the Privacy and Enhanced Trust Programme** Links: +* This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview) +* [FOSDEM2021 SimpleV for OpenPOWER](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorisation/) +* FOSDEM2021 presentation * [[discussion]] and [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556) feel free to add comments, questions. * [[SV|sv]] * [[sv/svp64]] +* [x86 REP instruction](https://c9x.me/x86/html/file_module_x86_id_279.html): + a useful way to quickly understand that the core of the SV concept + is not new. +* [Article about register tagging](http://science.lpnu.ua/sites/default/files/journal-paper/2019/jul/17084/volum3number1text-9-16_1.pdf) showing + that tagging is not a new idea either. Register tags + are also used in the Mill Architecture. -Contents: [[!toc]] # Introduction: SIMD and Cray Vectors SIMD, the primary method for easy parallelism of the -past 30 years in Computer Architectures, is [known to be -harmful](https://www.sigarch.org/simd-instructions-considered-harmful/). +past 30 years in Computer Architectures, is +[known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/). SIMD provides a seductive simplicity that is easy to implement in -hardware. With each doubling in width it promises increases in raw performance without the complexity of either multi-issue or out-of-order execution. +hardware. With each doubling in width it promises increases in raw +performance without the complexity of either multi-issue or out-of-order +execution. Unfortunately, even with predication added, SIMD only becomes more and more problematic with each power of two SIMD width increase introduced through an ISA revision. The opcode proliferation, at O(N^6), inexorably spirals out of control in the ISA, detrimentally impacting the hardware, -the software, the compilers and the testing and compliance. +the software, the compilers and the testing and compliance. Here are +the typical dimensions that result in such massive proliferation: + +* Operation (add, mul) +* bitwidth (8, 16, 32, 64, 128) +* Conversion between bitwidths (FP16-FP32-64) +* Signed/unsigned +* HI/LO swizzle (Audio L/R channels) + - HI/LO selection on src 1 + - selection on src 2 + - selection on dest + - Example: AndesSTAR Audio DSP +* Saturation (Clamping at max range) + +These typically are multiplied up to produce explicit opcodes numbering +in the thousands on, for example the ARC Video/DSP cores. 
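To make the scale of the problem concrete, the short C sketch below (an editor's illustration only - the function and the fixed 16-byte width are hypothetical, not taken from any particular ISA) shows the other notorious cost of fixed-width SIMD: every routine needs a scalar "cleanup" tail for the leftover elements, and the whole thing has to be rewritten each time an ISA revision doubles the width.

```
#include <stddef.h>
#include <stdint.h>

/* editor's sketch: fixed 16-wide SIMD add with scalar tail cleanup.
 * the inner j-loop stands in for a single 16-lane SIMD instruction. */
void add_u8_simd16(uint8_t *dst, const uint8_t *a, const uint8_t *b,
                   size_t n)
{
    size_t i = 0;
    for (; i + 16 <= n; i += 16)        /* main body: whole 16-byte groups */
        for (size_t j = 0; j < 16; j++)
            dst[i + j] = (uint8_t)(a[i + j] + b[i + j]);
    for (; i < n; i++)                  /* tail: scalar cleanup code */
        dst[i] = (uint8_t)(a[i] + b[i]);
}
```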
Cray-style variable-length Vectors on the other hand result in stunningly elegant and small loops, exceptionally high data throughput -per instruction (by one *or greater* orders of magnitude than SIMD), with no alarmingly high setup and cleanup code, where -at the hardware level the microarchitecture may execute from one element -right the way through to tens of thousands at a time, yet the executable -remains exactly the same and the ISA remains clear, true to the RISC -paradigm, and clean. Unlike in SIMD, powers of two limitations are not -involved in the ISA or in the assembly code. +per instruction (by one *or greater* orders of magnitude than SIMD), with +no alarmingly high setup and cleanup code, where at the hardware level +the microarchitecture may execute from one element right the way through +to tens of thousands at a time, yet the executable remains exactly the +same and the ISA remains clear, true to the RISC paradigm, and clean. +Unlike in SIMD, powers of two limitations are not involved in the ISA +or in the assembly code. SimpleV takes the Cray style Vector principle and applies it in the -abstract to a Scalar ISA, in the process allowing register file size -increases using "tagging" (similar to how x86 originally extended +abstract to a Scalar ISA in the same way that x86 used to do its "REP" instruction. In the process, "context" is applied, allowing amongst other things +a register file size +increase using "tagging" (similar to how x86 originally extended registers from 32 to 64 bit). ## SV -The fundamentals are: +The fundamentals are (just like x86 "REP"): * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC) * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1 @@ -76,11 +107,13 @@ basis further refinements can be added which build up towards an extremely powerful Vector augmentation system, with very little in the way of additional opcodes required: simply external "context". -x86 was originally only 80 instructions: prior to AVX512 over 1,300 additional instructions have been added, almost all of them SIMD. +x86 was originally only 80 instructions: prior to AVX512 over 1,300 +additional instructions have been added, almost all of them SIMD. RISC-V RVV as of version 0.9 is over 188 instructions (more than the -rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that functionality is added to -OpenPOWER v3 0B, by SimpleV augmentation, with around 5 to 8 instructions. +rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that +functionality is added to OpenPOWER v3 0B, by SimpleV augmentation, +with around 5 to 8 instructions. Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150 instructions, with IEEE754 FP adding approximately 80 more. VSX, being @@ -109,8 +142,9 @@ by SimpleV: (struct-based LD/ST from RVV for example) * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit. -These limitations, which stem inherently from the adaptation process of starting from a Scalar ISA, are not insurmountable. Over time, they may well be -addressed in future revisions of SV. +These limitations, which stem inherently from the adaptation process of +starting from a Scalar ISA, are not insurmountable. Over time, they may +well be addressed in future revisions of SV. The rest of this document builds on the above simple loop to add: @@ -144,7 +178,9 @@ this is where our "simple" loop gets its first complexity. 
if (RA.isvec) { irs1 += 1; } if (RB.isvec) { irs2 += 1; } -This could have been written out as eight separate cases: one each for when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases, when optimally combined, result in the pseudocode above. +This could have been written out as eight separate cases: one each for +when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases, +when optimally combined, result in the pseudocode above. With some walkthroughs it is clear that the loop exits immediately after the first scalar destination result is written, and that when the @@ -165,15 +201,31 @@ there is no separate Vector register file*: it's all the same instruction, on the standard register file, just with a loop. Scalar happens to set that loop size to one. -The important insight from the above is that, strictly speaking, Simple-V is not really a Vectorisation scheme at all: it is more of a hardware ISA "Compression scheme", allowing as it does for what would normally require multiple sequential instructions to be replaced with just one. This is where the rule that Program Order must be preserved in Sub-PC execution derives from. However in other ways, which will emerge below, the "tagging" concept presents an opportunity to include features definitely not common outside of Vector ISAs, and in that regard it's definitely a xlass of Vectorisation. +The important insight from the above is that, strictly speaking, Simple-V +is not really a Vectorisation scheme at all: it is more of a hardware +ISA "Compression scheme", allowing as it does for what would normally +require multiple sequential instructions to be replaced with just one. +This is where the rule that Program Order must be preserved in Sub-PC +execution derives from. However in other ways, which will emerge below, +the "tagging" concept presents an opportunity to include features +definitely not common outside of Vector ISAs, and in that regard it's +definitely a class of Vectorisation. ## Register "tagging" -As an aside: in [[sv/svp64]] the encoding which allows SV to both extend the range beyond r0-r31 and to determine whether it is a scalar or vector is encoded in two to three bits, depending on the instruction. +As an aside: in [[sv/svp64]] the encoding which allows SV to both extend +the range beyond r0-r31 and to determine whether it is a scalar or vector +is encoded in two to three bits, depending on the instruction. -The reason for using so few bits is because there are up to *four* registers to mark in this way (`fma`, `isel`) which starts to be of concern when there are only 24 available bits to specify the entire SV Vectorisation Context. In fact, for a small subset of instructions it is just not possible to tag every single register. Under these rare circumstances a tag has to be shared between two registers. +The reason for using so few bits is because there are up to *four* +registers to mark in this way (`fma`, `isel`) which starts to be of +concern when there are only 24 available bits to specify the entire SV +Vectorisation Context. In fact, for a small subset of instructions it +is just not possible to tag every single register. Under these rare +circumstances a tag has to be shared between two registers. 
-Below is the pseudocode which expresses the relationship which is usually applied to *every* register: +Below is the pseudocode which expresses the relationship which is usually +applied to *every* register: if extra3_mode: spec = EXTRA3 # bit 2 s/v, 0-1 extends range @@ -186,9 +238,19 @@ Below is the pseudocode which expresses the relationship which is usually applie RA.isvec = False return (spec[0:1] << 5) | RA -Here we can see that the scalar registers are extended in the top bits, whilst vectors are shifted up by 2 bits, and then extended in the LSBs. Condition Registers have a slightly different scheme, along the same principle, which takes into account the fact that each CR may be bit-level addressed by Condition Register operations. +Here we can see that the scalar registers are extended in the top bits, +whilst vectors are shifted up by 2 bits, and then extended in the LSBs. +Condition Registers have a slightly different scheme, along the same +principle, which takes into account the fact that each CR may be bit-level +addressed by Condition Register operations. -Readers familiar with OpenPOWER will know of Rc=1 operations that create an associated post-result "test", placing this test into an implicit Condition Register. The original researchers who created the POWER ISA chose CR0 for Integer, and CR1 for Floating Point. These *also become Vectorised* - implicitly - if the associated destination register is also Vectorised. This allows for some very interesting savings on instruction count due to the very same CR Vectors being predication masks. +Readers familiar with OpenPOWER will know of Rc=1 operations that create +an associated post-result "test", placing this test into an implicit +Condition Register. The original researchers who created the POWER ISA +chose CR0 for Integer, and CR1 for Floating Point. These *also become +Vectorised* - implicitly - if the associated destination register is +also Vectorised. This allows for some very interesting savings on +instruction count due to the very same CR Vectors being predication masks. # Adding single predication @@ -228,11 +290,17 @@ as if this were a straight OpenPOWER v3.0B non-augmented instruction. Single Predication therefore provides several modes traditionally seen in Vector ISAs: -* VINSERT: the predicate may be set as a single bit, the sources are scalar and the destination a vector. -* VSPLAT (result broadcasting) is provided by making the sources scalar and the destination a vector, and having no predicate set or having multiple bits set. -* VSELECT is provided by setting up (at least one of) the sources as a vector, using a single bit in olthe predicate, and the destination as a scalar. +* VINSERT: the predicate may be set as a single bit, the sources are + scalar and the destination a vector. +* VSPLAT (result broadcasting) is provided by making the sources scalar + and the destination a vector, and having no predicate set or having + multiple bits set. +* VSELECT is provided by setting up (at least one of) the sources as a + vector, using a single bit in the predicate, and the destination as + a scalar. -All of this capability and coverage without even adding one single actual Vector opcode, let alone 180, 600 or 1,300! +All of this capability and coverage without even adding one single actual +Vector opcode, let alone 180, 600 or 1,300! # Predicate "zeroing" mode @@ -303,18 +371,73 @@ structure, where all types uint16_t etc. 
are in little-endian order: reg_t int_regfile[128]; // SV extends to 128 regs -Setting `actual_bytes[3]` in any given `reg_t` to 0x01 would mean that: +This means that Vector elements start from locations specified by 64 bit +"register" but that from that location onwards the elements *overlap +subsequent registers*. + +Here is another way to view the same concept, bearing in mind that it +is assumed a LE memory order: + + uint8_t reg_sram[8*128]; + uint8_t *actual_bytes = ®_sram[RA*8]; + if elwidth == 8: + uint8_t *b = (uint8_t*)actual_bytes; + b[idx] = result; + if elwidth == 16: + uint16_t *s = (uint16_t*)actual_bytes; + s[idx] = result; + if elwidth == 32: + uint32_t *i = (uint32_t*)actual_bytes; + i[idx] = result; + if elwidth == default: + uint64_t *l = (uint64_t*)actual_bytes; + l[idx] = result; + +Starting with all zeros, setting `actual_bytes[3]` in any given `reg_t` +to 0x01 would mean that: * b[0..2] = 0x00 and b[3] = 0x01 * s[0] = 0x0000 and s[1] = 0x0001 * i[0] = 0x00010000 * l[0] = 0x0000000000010000 -Then, our simple loop, instead of accessing the array of regfile entries -with a computed index, would access the appropriate element of the -appropriate type. Thus we have a series of overlapping conceptual arrays -that each start at what is traditionally thought of as "a register". -It then helps if we have a couple of routines: +In tabular form, starting an elwidth=8 loop from r0 and extending for +16 elements would begin at r0 and extend over the entirety of r1: + + | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 | + | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | + r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] | + r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] | + +Starting an elwidth=16 loop from r0 and extending for +7 elements would begin at r0 and extend partly over r1. Note that +b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and +b1 represents the top byte: + + | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 | + | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | + r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 | + r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified | + +Likewise for elwidth=32, and a loop extending for 3 elements. b0 through +b3 represent the bytes (numbered lowest for LSB and highest for MSB) within +each element word: + + | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 | + | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | + r0 | w[0].b0 b1 b2 b3 | w[1].b0 b1 b2 b3 | + r1 | w[2].b0 b1 b2 b3 | unmodified unmodified | + +64-bit (default) elements access the full registers. In each case the +register number (`RT`, `RA`) indicates the *starting* point for the storage +and retrieval of the elements. + +Our simple loop, instead of accessing the array of regfile entries +with a computed index `iregs[RT+i]`, would access the appropriate element +of the appropriate width, such as `iregs[RT].s[i]` in order to access +16 bit elements starting from RT. Thus we have a series of overlapping +conceptual arrays that each start at what is traditionally thought of as +"a register". It then helps if we have a couple of routines: get_polymorphed_reg(reg, bitwidth, offset): reg_t res = 0; @@ -350,7 +473,7 @@ element width. 
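As a compilable (but hedged) C sketch of those two routines - an editor's illustration, not the reference implementation; the union members and helper names simply mirror the `reg_t` definition and pseudocode above, and a little-endian host is assumed to match the LE ordering already mentioned - element access looks roughly like this:

```
#include <stdint.h>

/* editor's sketch: overlapping element "views" of the SV register file */
typedef union {
    uint8_t  b[8];             /* elwidth=8  view           */
    uint16_t s[4];             /* elwidth=16 view           */
    uint32_t i[2];             /* elwidth=32 view           */
    uint64_t l[1];             /* elwidth=64 (default) view */
    uint8_t  actual_bytes[8];  /* raw byte-level view       */
} reg_t;

reg_t int_regfile[128];        /* SV extends to 128 regs    */

/* read element `offset` of width `bitwidth` starting at register `reg`;
 * elements deliberately spill over into the subsequent registers */
uint64_t get_polymorphed_reg(int reg, int bitwidth, int offset)
{
    uint8_t *bytes = int_regfile[reg].actual_bytes;
    switch (bitwidth) {
    case 8:  return ((uint8_t  *)bytes)[offset];
    case 16: return ((uint16_t *)bytes)[offset];
    case 32: return ((uint32_t *)bytes)[offset];
    default: return ((uint64_t *)bytes)[offset];
    }
}

/* write an element: only `bitwidth` bits are modified, the remaining
 * bytes of the underlying registers are left unaltered */
void set_polymorphed_reg(int reg, int bitwidth, int offset, uint64_t val)
{
    uint8_t *bytes = int_regfile[reg].actual_bytes;
    switch (bitwidth) {
    case 8:  ((uint8_t  *)bytes)[offset] = (uint8_t) val; break;
    case 16: ((uint16_t *)bytes)[offset] = (uint16_t)val; break;
    case 32: ((uint32_t *)bytes)[offset] = (uint32_t)val; break;
    default: ((uint64_t *)bytes)[offset] = val;           break;
    }
}
```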
Our first simple loop thus becomes:
 
     src1 = get_polymorphed_reg(RA, srcwid, i)
     src2 = get_polymorphed_reg(RB, srcwid, i)
     result = src1 + src2 # actual add here
-    set_polymorphed_reg(rd, destwid, i, result)
+    set_polymorphed_reg(RT, destwid, i, result)
 
 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
 register will contain three 16 bit addition results, and the upper 16
@@ -358,10 +481,9 @@ bits will be *unaltered*.
 
 Note that things such as zero/sign-extension (and predication) have
 been left out to illustrate the elwidth concept. Also note that it turns
-out to be important to perform the operation at the maximum bitwidth -
-`max(srcwid, destwid)` - such that any truncation, rounding errors or
+out to be important to perform the operation internally at effectively
+an *infinite* bitwidth, such that any truncation, rounding errors or
 other artefacts may all be ironed out. This turns out to be important
-when applying Saturation for Audio DSP workloads.
+when applying Saturation for Audio DSP workloads, particularly for
+multiply and for IEEE754 FP rounding. By "infinite" this is conceptual
+only: in reality, the application of the different truncations and
+width-extensions sets a fixed, deterministic, practical limit on the
+internal precision needed, on a per-operation basis.
 
 Other than that, element width overrides, which can be applied to *either* source or destination or both, are pretty straightforward, conceptually.
@@ -376,52 +498,114 @@ of the destination.
 
 The only situation where a full overwrite occurs
 is on "default" behaviour. It is therefore extremely important to consider
 the register file as a byte-level store, not a 64-bit-level store.
 
-## Why LE regfile?
-
-The concept of having a regfile where the byte ordering of the underlying SRAM seems utter nonsense. Surely, a hardware implementation gets to choose the order, right? The bytes come in, all registers are 64 bit and it's just wiring, right?
+## Why a LE regfile?
+
+The concept of having a regfile where the byte ordering of the underlying
+SRAM matters seems utter nonsense. Surely, a hardware implementation gets
+to choose the order, right? It's only in memory that LE/BE matters, right?
+The bytes come in, all registers are 64 bit and it's just wiring, right?
+
+Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
+style Vector one. The assumption in that last question was, however, "all
+registers are 64 bit". SV allows SIMD-style packing of vectors into the
+64 bit registers, where one instruction and the next may interpret that
+very same register as containing elements of completely different widths.
+
+Consequently it becomes critically important to decide a byte-order.
+That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
+at all: it was such hell to implement support for BE interpretations of
+CRs and LD/ST in LibreSOC, based on a terse spec that provides insufficient
+clarity and assumes significant working knowledge of OpenPOWER, with
+arbitrary insertions of 7-index here and 3-bitindex there, that the
+decision to pick LE was extremely easy.
+
+Without such a decision, if two words are packed as elements into a 64
+bit register, what does this mean? Should they be inverted so that the
+lower indexed element goes into the HI or the LO word? Should the 8
+bytes of each register be inverted? Should the bytes in each element
+be inverted? 
Should the element indexing loop order be broken onto
+discontiguous chunks such as 32107654 rather than 01234567, and if so
+at what granularity of discontinuity? These are all equally valid and
+legitimate interpretations of what constitutes "BE" and they all cause
+merry mayhem.
+
+The decision was therefore made: the C typedef union is the canonical
+definition, and its members are defined as being in LE order. From there,
+implementations may choose whatever internal HDL wire order they like
+as long as the results produced conform to the elwidth pseudocode.
+
+*Note: it turns out that both x86 SIMD and NEON SIMD follow this
+convention, namely that both are implicitly LE, even though their ISA
+Manuals may not explicitly spell this out*
 
-The assumption in that question was, "all registers are 64 bit". SV allows SIMD-style packing of vectors into the 64 bit registers, and consequently it becomes critically important to decide a byte-order. That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary at all: it was such hell to implement CRs and LD/ST in LibreSOC, with arbitrary insertions of 7-index here and 3-bitindex there that the decision to pick LE was extremrly easy.
-
-Without such a decision, if two words are packed as elements into a 64 bit register, what does this mean? Should they be inverted so that the lower indexed element does into the HI or the LO word? should the 8 bytes of each register be inverted? Should the bytes in each element be inverted? The decision was therefore made: the c typedef union is, in a LE context, the definitive canonical definition, and implementations may choose whatever internal HDL wire order they like as long as the results conform to the elwidth pseudocode.
 
 ## Source and Destination overrides
 
-A minor fly in the ointment: what happens if the source and destination are over-ridden to different widths? For example, FP16 arithmetic is not accurate enough and may introduce rounding errors when up-converted to FP32 output. The rule is therefore set:
+A minor fly in the ointment: what happens if the source and destination
+are over-ridden to different widths? For example, FP16 arithmetic is
+not accurate enough and may introduce rounding errors when up-converted
+to FP32 output. The rule is therefore set:
 
-    The operation MUST take place at the larger of the two widths
+    The operation MUST take place effectively at infinite precision:
+    actual precision determined by the operation and the operand widths
 
 In pseudocode this is:
 
     for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
-       opwidth = max(srcwid, destwid)
+       opwidth = max(srcwid, destwid) # usually
        result = op_add(src1, src2, opwidth) # at max width
       set_polymorphed_reg(rd, destwid, i, result)
 
-It will turn out that under some conditions the combination of the extension of the source registers followed by truncation of the result gets rid of bits that didn't matter, and the operation might as well have taken place at the narrower width and could save resources that way. Examples include Logical OR where the source extension would place zeros in the upper bits, the result will be truncated and throw those zeros away.
+In reality the source and destination widths determine the actual required
+precision in a given ALU. 
The reason for setting "effectively" infinite precision +is illustrated for example by Saturated-multiply, where if the internal precision was insufficient it would not be possible to correctly determine the maximum clip range had been exceeded. + +Thus it will turn out that under some conditions the combination of the +extension of the source registers followed by truncation of the result +gets rid of bits that didn't matter, and the operation might as well have +taken place at the narrower width and could save resources that way. +Examples include Logical OR where the source extension would place +zeros in the upper bits, the result will be truncated and throw those +zeros away. -Counterexamples include the previously mentioned FP16 arithmetic, where for operations such as division of large numbers by very small ones it should be clear that internal accuracy will play a major role in influencing the result. Hence the rule that the calculation takes place at the maximum bitwidth, and truncation follows afterwards. +Counterexamples include the previously mentioned FP16 arithmetic, +where for operations such as division of large numbers by very small +ones it should be clear that internal accuracy will play a major role +in influencing the result. Hence the rule that the calculation takes +place at the maximum bitwidth, and truncation follows afterwards. ## Signed arithmetic -What happens when the operation involves signed arithmetic? Here the implementor has to use common sense, and make sure behaviour is accurately documented. If the result of the unmodified operation is sign-extended because one of the inputs is signed, then the input source operands must be first read at their overridden bitwidth and *then* sign-extended: +What happens when the operation involves signed arithmetic? Here the +implementor has to use common sense, and make sure behaviour is accurately +documented. If the result of the unmodified operation is sign-extended +because one of the inputs is signed, then the input source operands must +be first read at their overridden bitwidth and *then* sign-extended: for i = 0 to VL-1: src1 = get_polymorphed_reg(RA, srcwid, i) src2 = get_polymorphed_reg(RB, srcwid, i) opwidth = max(srcwid, destwid) # srces known to be less than result width - src1 = sign_extend(src1, srcwid, destwid) - src2 = sign_extend(src2, srcwid, destwid) + src1 = sign_extend(src1, srcwid, opwidth) + src2 = sign_extend(src2, srcwid, opwidth) result = op_signed(src1, src2, opwidth) # at max width set_polymorphed_reg(rd, destwid, i, result) - + The key here is that the cues are taken from the underlying operation. ## Saturation -Audio DSPs need to be able to clip sound when the "volume" is adjusted, but if it is too loud and the signal wraps, distortion occurs. The solution is to clip (saturate) the audio and allow this to be detected. In practical terms this is a post-result analysis however it needs to take place at the largest bitwidth i.e. before a result is element width truncated. Only then can the arithmetic saturation condition be detected: +Audio DSPs need to be able to clip sound when the "volume" is adjusted, +but if it is too loud and the signal wraps, distortion occurs. The +solution is to clip (saturate) the audio and allow this to be detected. +In practical terms this is a post-result analysis however it needs to +take place at the largest bitwidth i.e. before a result is element width +truncated. 
Only then can the arithmetic saturation condition be detected:
 
     for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
@@ -430,17 +614,31 @@ Audio DSPs need to be able to clip sound when the "volume" is adjusted, but if i
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth) # at max width
        # now saturate (unsigned)
-       sat = max(result, (1<
 
# Data-dependent fail-first
 
-This is a minor variant on the CR-based predicate-result mode. Where pred-result continues with independent element testing (any of which may be parallelised), data-dependent fail-first *stops* at the first failure:
+This is a minor variant on the CR-based predicate-result mode. Where
+pred-result continues with independent element testing (any of which may
+be parallelised), data-dependent fail-first *stops* at the first failure:
 
     if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
     for i in range(VL):
@@ -670,23 +897,94 @@ This is a minor variant on the CR-based predicate-result mode. Where pred-resul
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew
 
-This is particularly useful, again, for FP operations that might overflow, where it is desirable to end the loop early, but also desirable to complete at least those operations that were okay (passed the test) without also having to slow down execution by adding extra instructions that tested for the possibility of that failure, in advance of doing the actual calculation.
-
-The only minor downside here though is the change to VL, which in some implementations may cause pipeline stalls. This was one of the reasons why CR-based pred-result analysis was added, because that at least is entirely paralleliseable.
+This is particularly useful, again, for FP operations that might overflow,
+where it is desirable to end the loop early, but also desirable to
+complete at least those operations that were okay (passed the test)
+without also having to slow down execution by adding extra instructions
+that tested for the possibility of that failure, in advance of doing
+the actual calculation.
+
+The only minor downside here though is the change to VL, which in some
+implementations may cause pipeline stalls. This was one of the reasons
+why CR-based pred-result analysis was added, because that at least is
+entirely paralleliseable.
+
+# Vertical-First Mode
+
+This is a relatively new addition to SVP64, under development as of
+July 2021. Where Horizontal-First is the standard Cray-style for-loop,
+Vertical-First typically executes just the **one** scalar element
+in each Vectorised operation. That element is selected by srcstep
+and dststep, *neither of which are changed as a side-effect of execution*.
+This is illustrated below, where a branch creates the loop. To create
+such loops, a new instruction, `svstep`, must be explicitly called,
+with Rc=1:
+
+```
+loop:
+    sv.addi r0.v, r8.v, 5 # GPR(0+dststep) = GPR(8+srcstep) + 5
+    sv.addi r0.v, r8, 5   # GPR(0+dststep) = GPR(8        ) + 5
+    sv.addi r0, r8.v, 5   # GPR(0        ) = GPR(8+srcstep) + 5
+    svstep.               # srcstep++, dststep++, CR0.eq = srcstep==VL
+    beq loop
+```
+
+Three examples are illustrated of different types of Scalar-Vector
+operations. Note that in its simplest form **only one** element is
+executed per instruction, **not** multiple elements per instruction.
+(The more advanced version of Vertical-First mode may execute multiple
+elements per instruction, however the number executed **must** remain
+a fixed quantity.) 
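To make the contrast concrete, the hedged C sketch below (an editor's illustration of the behaviour just described, not normative pseudocode, and assuming all three operands are Vectorised) shows the same add in Horizontal-First and Vertical-First modes, plus what `svstep.` does to the persistent srcstep/dststep counters:

```
#include <stdint.h>

/* editor's sketch of the state assumed by the two modes */
typedef struct { uint64_t srcstep, dststep, VL; } svstate_t;

/* Horizontal-First: one instruction runs the whole element loop */
void sv_add_horizontal(uint64_t *regs, int RT, int RA, int RB, svstate_t *sv)
{
    for (uint64_t i = 0; i < sv->VL; i++)
        regs[RT + i] = regs[RA + i] + regs[RB + i];
}

/* Vertical-First: one instruction touches only the element selected by
 * srcstep/dststep, and does NOT advance those counters as a side-effect */
void sv_add_vertical(uint64_t *regs, int RT, int RA, int RB, svstate_t *sv)
{
    regs[RT + sv->dststep] = regs[RA + sv->srcstep] + regs[RB + sv->srcstep];
}

/* svstep. (Rc=1): advance the counters; the return value stands in for
 * CR0.eq, which the branch at the end of the loop then tests */
int svstep_rc1(svstate_t *sv)
{
    sv->srcstep++;
    sv->dststep++;
    return sv->srcstep == sv->VL;
}
```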
+ +Now that such explicit loops can increment inexorably towards VL, +of course we now need a way to test if srcstep or dststep have reached +VL. This is achieved in one of two ways: [[sv/svstep]] has an Rc=1 mode +where CR0 will be updated if VL is reached. A standard v3.0B Branch +Conditional may rely on that. Alternatively, the number of elements +may be transferred into CTR, as is standard practice in Power ISA. +Here, SVP64 [[sv/branches]] have a mode which allows CTR to be decremented +by the number of vertical elements executed. # Instruction format -Whilst this overview shows the internals, it does not go into detail on the actual instruction format itself. There are a couple of reasons for this: firstly, it's under development, and secondly, it needs to be proposed to the OpenPOWER Foundation ISA WG for consideration and review. - -That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up. The `setvl` instruction is pretty much as would be expected from a Cray style VL instruction: the only differences being that, firstly, the MAXVL (Maximum Vector Length) has to be specified, because that determines - precisely - how many of the *scalar* registers are to be used for a given Vector. Secondly: within the limit of MAXVL, VL is required to be set to the requested value. By contrast, RVV systems permit the hardware to set arbitrary values of VL. - -The other key question is of course: what's the actual instruction format, and what's in it? Bearing in mind that this requires OPF review, the current draft is at the [[sv/svp64]] page, and includes space for all the different modes, the predicates, element width overrides, SUBVL and the register extensions, in 24 bits. This just about fits into an OpenPOWER v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space. The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction - aligns perfectly with SV. +Whilst this overview shows the internals, it does not go into detail +on the actual instruction format itself. There are a couple of reasons +for this: firstly, it's under development, and secondly, it needs to be +proposed to the OpenPOWER Foundation ISA WG for consideration and review. + +That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up. +The `setvl` instruction is pretty much as would be expected from a +Cray style VL instruction: the only differences being that, firstly, +the MAXVL (Maximum Vector Length) has to be specified, because that +determines - precisely - how many of the *scalar* registers are to be +used for a given Vector. Secondly: within the limit of MAXVL, VL is +required to be set to the requested value. By contrast, RVV systems +permit the hardware to set arbitrary values of VL. + +The other key question is of course: what's the actual instruction format, +and what's in it? Bearing in mind that this requires OPF review, the +current draft is at the [[sv/svp64]] page, and includes space for all the +different modes, the predicates, element width overrides, SUBVL and the +register extensions, in 24 bits. This just about fits into an OpenPOWER +v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space. +The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction - +aligns perfectly with SV. Further reading is at the main [[SV|sv]] page. 
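Finally, as a hedged sketch of the `setvl` behaviour described above (an editor's interpretation in C, not the normative definition - that lives at [[sv/setvl]]; the capping of over-long requests at MAXVL is an assumption made here for strip-mined loops):

```
#include <stdint.h>

/* editor's sketch of the SVSTATE fields relevant to setvl */
typedef struct { uint64_t VL, MAXVL; } svstate_t;

/* maxvl is an immediate: the compiler promises it has set aside that many
 * scalar registers.  Within that limit VL is set to exactly the requested
 * value - unlike RVV, the hardware may not silently choose a smaller VL. */
uint64_t setvl(uint64_t requested, uint64_t maxvl, svstate_t *svstate)
{
    uint64_t VL = requested < maxvl ? requested : maxvl;  /* assumed cap */
    svstate->VL    = VL;
    svstate->MAXVL = maxvl;
    return VL;  /* assumption: the instruction also makes VL available in RT */
}
```

A strip-mined loop would call this repeatedly, subtracting the returned VL from the number of elements still to be processed on each iteration.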
# Conclusion -Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that, with conceptual sub-loops, a Scalar ISA can be turned into a Vector one, by embedding Scalar instructions - unmodified - into a Vector "context" using "Prefixing". With careful thought, this technique reaches 90% par with good Vector ISAs, increasing to 95% with the addition of a mere handful of additional context-vectoriseable scalar instructions ([[sv/mv.x]] amongst them). - -What is particularly cool about the SV concept is that custom extensions and research need not be concerned about inventing new Vector instructions and how to get them to interact with the Scalar ISA: they are effectively one and the same. Any new instruction added at the Scalar level is inherently and automatically Vectorised, following some simple rules. +Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that, +with conceptual sub-loops, a Scalar ISA can be turned into a Vector one, +by embedding Scalar instructions - unmodified - into a Vector "context" +using "Prefixing". With careful thought, this technique reaches 90% +par with good Vector ISAs, increasing to 95% with the addition of a +mere handful of additional context-vectoriseable scalar instructions +([[sv/mv.x]] amongst them). + +What is particularly cool about the SV concept is that custom extensions +and research need not be concerned about inventing new Vector instructions +and how to get them to interact with the Scalar ISA: they are effectively +one and the same. Any new instruction added at the Scalar level is +inherently and automatically Vectorised, following some simple rules.