[[!tag standards]]

# Implementation Log

* ternlogi
* grev
* GF2^M

# bitmanipulation

**DRAFT STATUS**

pseudocode:

this extension amalgamates bitmanipulation primitives from many sources,
including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
Vectorisation and SIMD are removed: these are straight scalar (element)
operations making them suitable for embedded applications. Vectorisation
Context is provided by [[openpower/sv]].

When combined with SV, scalar variants of bitmanip operations found in VSX
are added so that VSX may be retired as "legacy" in the far future (10 to 20
years). Also, VSX is hundreds of opcodes, requires 128 bit pathways, and is
wholly unsuited to low power or embedded scenarios.

ternlogv is experimental and is the only operation that may be considered a
"Packed SIMD". It is added as a variant of the already well-justified
ternlog operation (done in AVX512 as an immediate only) "because it looks
fun". As it is based on the LUT4 concept it will allow accelerated emulation
of FPGAs. Other vendors of ISAs are buying FPGA companies to achieve similar
objectives.

general-purpose Galois Field 2^M operations are added so as to avoid huge
custom opcode proliferation across many areas of Computer Science. however
for convenience and also to avoid setup costs, some of the more common
operations (clmul, crc32) are also added. The expectation is that these
operations would all be covered by the same pipeline.

note that there are brownfield spaces below that could incorporate some of
the set-before-first and other scalar operations listed in
[[sv/vector_ops]], and the [[sv/av_opcodes]] as well as [[sv/setvl]]

Useful resource:

*
*

# summary

two major opcodes are needed

ternlog has its own major opcode

| 29.30  |31| name      |
| ------ |--| --------- |
|   0 0  |Rc| ternlogi  |
|   0 1  |sz| ternlogv  |
|   1 iv |  | grevlogi  |

2nd major opcode for other bitmanip: minor opcode allocation

| 28.30  |31| name        |
| ------ |--| ----------- |
|  -00   |0 | xpermi      |
|  -00   |1 | grevlog     |
|  -01   |  | crternlog   |
|  010   |Rc| bitmask     |
|  011   |  | gf/cl madd* |
|  110   |Rc| 1/2-op      |
|  111   |  | bmrevi      |

1-op and variants

| dest | src1 | subop | op       |
| ---- | ---- | ----- | -------- |
| RT   | RA   | ..    | bmatflip |

2-op and variants

| dest | src1 | src2 | subop  | op            |
| ---- | ---- | ---- | ------ | ------------- |
| RT   | RA   | RB   | or     | bmatflip      |
| RT   | RA   | RB   | xor    | bmatflip      |
| RT   | RA   | RB   |        | grev          |
| RT   | RA   | RB   |        | clmul*        |
| RT   | RA   | RB   |        | gorc          |
| RT   | RA   | RB   | shuf   | shuffle       |
| RT   | RA   | RB   | unshuf | shuffle       |
| RT   | RA   | RB   | width  | xperm         |
| RT   | RA   | RB   | type   | minmax        |
| RT   | RA   | RB   |        | av abs avgadd |
| RT   | RA   | RB   | type   | vmask ops     |
| RT   | RA   | RB   |        |               |

3 ops

* grevlog
* GF mul-add
* bitmask-reverse

TODO: convert all instructions to use RT and not RS

| 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|name     |
| -- | -- | --- | --- | --- |-----|----- | -----|--|-------- |
| NN | BT | BA  | BB  | BC  |m0-2 | imm  | 10   |m3|crternlog|

| 0.5|6.10|11.15|16.20 |21..25 | 26....30  |31| name       |
| -- | -- | --- | ---- | ----- | --------- |--| ---------- |
| NN | RT | RA  |itype/| im0-4 | im5-7  00 |0 | xpermi     |
| NN | RT | RA  | RB   | im0-4 | im5-7  00 |1 | grevlog    |
| NN |    |     |      |       | .....  01 |0 | crternlog  |
| NN | RT | RA  | RB   | RC    | mode  010 |Rc| bitmask*   |
| NN | RS | RA  | RB   | RC    | 00    011 |0 | gfbmadd    |
| NN | RS | RA  | RB   | RC    | 00    011 |1 | gfbmaddsub |
| NN | RS | RA  | RB   | RC    | 01    011 |0 | clmadd     |
| NN | RS | RA  | RB   | RC    | 01    011 |1 | clmaddsub  |
| NN | RS | RA  | RB   | RC    | 10    011 |0 | gfpmadd    |
| NN | RS | RA  | RB   | RC    | 10    011 |1 | gfpmaddsub |
| NN | RS | RA  | RB   | RC    | 11    011 |  | rsvd       |
| NN | RT | RA  | RB   | sh0-4 | sh5 1 111 |Rc| bmrevi     |

ops (note that av avg and abs as well as vec scalar mask are included here)

TODO: convert from RA, RB, and RC to correct field names of RT, RA, and RB,
and double check that instructions didn't need 3 inputs.

| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name      |
| -- | -- | --- | --- | -- | ----- | -------- |--| --------- |
| NN | RS | me  | sh  | SH | ME 0  | nn00 110 |Rc| bmopsi    |
| NN | RS | RB  | sh  | SH | /  1  | nn00 110 |Rc| bmopsi    |
| NN | RT | RA  | RB  |    |       | 1100 110 |Rc| srsvd     |
| NN | RT | RA  | RB  | 1  | 00    | 0001 110 |Rc| cldiv     |
| NN | RT | RA  | RB  | 1  | 01    | 0001 110 |Rc| clmod     |
| NN | RT | RA  | RB  | 1  | 10    | 0001 110 |Rc|           |
| NN | RT | RB  | RB  | 1  | 11    | 0001 110 |Rc| clinv     |
| NN | RA | RB  | RC  | 0  | 00    | 0001 110 |Rc| vec sbfm  |
| NN | RA | RB  | RC  | 0  | 01    | 0001 110 |Rc| vec sofm  |
| NN | RA | RB  | RC  | 0  | 10    | 0001 110 |Rc| vec sifm  |
| NN | RA | RB  | RC  | 0  | 11    | 0001 110 |Rc| vec cprop |
| NN | RT | RA  | RB  | 1  | itype | 0101 110 |Rc| xperm     |
| NN | RA | RB  | RC  | 0  | itype | 0101 110 |Rc| minmax    |
| NN | RA | RB  | RC  | 1  | 00    | 0101 110 |Rc| av avgadd |
| NN | RA | RB  | RC  | 1  | 01    | 0101 110 |Rc| av abs    |
| NN | RA | RB  |     | 1  | 10    | 0101 110 |Rc| rsvd      |
| NN | RA | RB  |     | 1  | 11    | 0101 110 |Rc| rsvd      |
| NN | RA | RB  | RC  | 0  | 00    | 0010 110 |Rc| gorc      |
| NN | RA | RB  | sh  | SH | 00    | 1010 110 |Rc| gorci     |
| NN | RA | RB  | RC  | 0  | 00    | 0110 110 |Rc| gorcw     |
| NN | RA | RB  | sh  | 0  | 00    | 1110 110 |Rc| gorcwi    |
| NN | RA | RB  | RC  | 1  | 00    | 1110 110 |Rc| bmator    |
| NN | RA | RB  | RC  | 0  | 01    | 0010 110 |Rc| grev      |
| NN | RA | RB  | RC  | 1  | 01    | 0010 110 |Rc| clmul     |
| NN | RA | RB  | sh  | SH | 01    | 1010 110 |Rc| grevi     |
| NN | RA | RB  | RC  | 0  | 01    | 0110 110 |Rc| grevw     |
| NN | RA | RB  | sh  | 0  | 01    | 1110 110 |Rc| grevwi    |
| NN | RA | RB  | RC  | 1  | 01    | 1110 110 |Rc| bmatxor   |
| NN | RA | RB  | RC  |    | 10    | --10 110 |Rc| rsvd      |
| NN | RA | RB  | RC  | 0  | 11    | 1110 110 |Rc| clmulr    |
| NN | RA | RB  | RC  | 1  | 11    | 1110 110 |Rc| clmulh    |
| NN |    |     |     |    |       | --11 110 |Rc| setvl     |

# ternlog bitops

Similar to FPGA LUTs: for every bit perform a lookup into a table using an
8bit immediate, or in another register.

Like the x86 AVX512F
[vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
instructions.

## ternlogi

| 0.5|6.10|11.15|16.20| 21..28|29.30|31|
| -- | -- | --- | --- | ----- | --- |--|
| NN | RT | RA  | RB  | im0-7 | 00  |Rc|

    lut3(imm, a, b, c):
        idx = c << 2 | b << 1 | a
        return imm[idx] # idx by LSB0 order

    for i in range(64):
        RT[i] = lut3(imm, RB[i], RA[i], RT[i])
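As a cross-check of the pseudocode above, here is a hedged Python sketch of
the ternlogi semantics (a software model only, not the formal specification;
the function name and the `xlen` parameter are illustrative). Each bit
position forms a 3-bit index from RT, RA and RB which selects one bit of the
8-bit immediate.

```python
# Minimal software model of ternlogi: bit i of RT/RA/RB forms the LUT3 index
# (RT is both source and destination, hence it appears as an input here).

def ternlogi(rt, ra, rb, imm8, xlen=64):
    res = 0
    for i in range(xlen):
        idx = (((rt >> i) & 1) << 2) | (((ra >> i) & 1) << 1) | ((rb >> i) & 1)
        res |= ((imm8 >> idx) & 1) << i
    return res

# imm8 = 0b11101000 computes a per-bit majority vote of the three operands
assert ternlogi(0b1100, 0b1010, 0b1001, 0b11101000) == 0b1000
```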
## ternlogv

also, another possible variant involving swizzle-like selection and masking,
this only requires 3 64 bit registers (RA, RS, RB) and only 16 LUT3s.

Note however that unless XLEN matches sz, this instruction is a
Read-Modify-Write: RS must be read as a second operand and all unmodified
bits preserved. SVP64 may provide a limited alternative destination for RS
from RS-as-source, but again all unmodified bits must still be copied.

| 0.5|6.10|11.15|16.20|21.28 | 29.30 |31|
| -- | -- | --- | --- | ---- | ----- |--|
| NN | RS | RA  | RB  |idx0-3| 01    |sz|

    SZ = (1+sz) * 8 # 8 or 16
    raoff = MIN(XLEN, idx0 * SZ)
    rboff = MIN(XLEN, idx1 * SZ)
    rcoff = MIN(XLEN, idx2 * SZ)
    rsoff = MIN(XLEN, idx3 * SZ)
    imm = RB[0:8]
    for i in range(MIN(XLEN, SZ)):
        ra = RA[raoff+i]
        rb = RA[rboff+i]
        rc = RA[rcoff+i]
        res = lut3(imm, ra, rb, rc)
        RS[rsoff+i] = res

## ternlogcr

another mode selection would be CRs not Ints.

| 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
| -- | -- | --- | --- | --- |-----|----- | -----|--|
| NN | BT | BA  | BB  | BC  |m0-2 | imm  | 10   |m3|

    mask = m0-2,m3
    for i in range(4):
        if not mask[i] continue
        crregs[BT][i] = lut3(imm,
                             crregs[BA][i],
                             crregs[BB][i],
                             crregs[BC][i])

# int min/max

signed and unsigned min/max for integer. this is sort-of partly
synthesiseable in [[sv/svp64]] with pred-result as long as the dest reg is
one of the sources, but not both signed and unsigned. when the dest is also
one of the srces and the mv fails due to the CR bittest failing this will
only overwrite the dest where the src is greater (or less).

signed/unsigned min/max gives more flexibility.

```
uint_xlen_t min(uint_xlen_t rs1, uint_xlen_t rs2)
{ return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
}
uint_xlen_t max(uint_xlen_t rs1, uint_xlen_t rs2)
{ return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
}
uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
{ return rs1 < rs2 ? rs1 : rs2;
}
uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
{ return rs1 > rs2 ? rs1 : rs2;
}
```

## cmix

based on RV bitmanip, covered by ternlog bitops

```
uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
    return (RA & RB) | (RC & ~RB);
}
```

# bitmask set

based on RV bitmanip singlebit set, instruction format similar to shift
[[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
rldicl, but only the immediate version). however bitmask-invert is not, and
set/clr are not covered, although they can use the same Shift ALU.

the bmext (RB) version is not the same as rldicl because bmext is a right
shift by RC, where rldicl is a left rotate. for the immediate version this
does not matter, so a bmexti is not required. for bmrev, however, there is
no direct equivalent and consequently a bmrevi is required.

bmset (register for mask amount) is particularly useful for creating
predicate masks where the length is a dynamic runtime quantity.
bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask" in a
single instruction without needing to initialise or depend on any other
registers.

| 0.5|6.10|11.15|16.20|21.25| 26..30   |31| name  |
| -- | -- | --- | --- | --- | -------- |--| ----- |
| NN | RS | RA  | RB  | RC  | mode 010 |Rc| bm*   |

Immediate-variant is an overwrite form:

| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
| NN | RS | RB  | sh  | SH | itype | 1000 110 |Rc| bm*i |

```
def MASK(x, y):
    if x < y:
        x = x+1
        mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
        mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
    elif x == y:
        return 1 << x
    else:
        x = x+1
        mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
        mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
    return mask_a ^ mask_b

uint_xlen_t bmset(RS, RB, sh)
{
    int shamt = RB & (XLEN - 1);
    mask = (2<<sh)-1;
    return RS | (mask << shamt);
}
```
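To illustrate the predicate-mask use case described above, a hedged Python
sketch of the bmset pseudocode (a model only, under the assumption that sh
encodes the run length and RB the shift amount; names are illustrative).

```python
# Software model of bmset: OR a run of sh+1 ones, shifted up by the amount
# held in RB, into RS.

def bmset(rs, rb, sh, xlen=64):
    shamt = rb & (xlen - 1)
    mask = (2 << sh) - 1                 # sh+1 contiguous ones
    return (rs | (mask << shamt)) & ((1 << xlen) - 1)

# with RS=0 and RB=0 this produces a run of ones usable as a predicate mask
assert bmset(0, 0, 7) == 0xFF
```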
bitmask extract with reverse. can be done by bit-order-inverting all of RB
and getting bits of RB from the opposite end.

when RA is zero, no shift occurs. this makes bmextrev useful for simply
reversing all bits of a register.

```
msb = ra[5:0];
rev[0:msb] = rb[msb:0];
rt = ZE(rev[msb:0]);

uint_xlen_t bmextrev(RA, RB, sh)
{
    int shamt = XLEN-1;
    if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
    shamt = (XLEN-1)-shamt;       # shift other end
    bra = bitreverse(RB)          # swap LSB-MSB
    mask = (2<<sh)-1;
    return mask & (bra >> shamt);
}
```

| 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name   |
| -- | -- | --- | --- | --- | ------ |--| ------ |
| NN | RT | RA  | RB  | sh  | 1  011 |Rc| bmrevi |

# grevlut

generalised reverse combined with a pair of LUT2s and allowing a constant
`0b0101...0101` when RA=0, and an option to invert (including when RA=0,
giving a constant 0b1010...1010 as the initial value) provides a wide range
of instructions and a means to set regular 64 bit patterns in one 32 bit
instruction.

the two LUT2s are applied left-half (when not swapping) and right-half (when
swapping) so as to allow a wider range of options.

* A value of `0b11001010` for the immediate provides the functionality of a
  standard "grev".
* `0b11101110` provides gorc

grevlut should be arranged so as to produce the constants needed to put into
bext (bitextract) so as in turn to be able to emulate x86 pmovmask
instructions. This only requires 2 instructions (grevlut, bext).

Note that if the mask is required to be placed directly into CR Fields (for
use as CR Predicate masks rather than an integer mask) then sv.ori may be
used instead, bearing in mind that sv.ori is a 64-bit instruction, and `VL`
must have been set to the required length:

    sv.ori./elwid=8 r10.v, r10.v, 0

The following settings provide the required mask constants:

| RA      | RB     | imm        | iv | result        |
| ------- | ------ | ---------- | -- | ------------- |
| 0x555.. | 0b10   | 0b01101100 | 0  | 0x111111...   |
| 0x555.. | 0b110  | 0b01101100 | 0  | 0x010101...   |
| 0x555.. | 0b1110 | 0b01101100 | 0  | 0x00010001... |
| 0x555.. | 0b10   | 0b11000110 | 1  | 0x88888...    |
| 0x555.. | 0b110  | 0b11000110 | 1  | 0x808080...   |
| 0x555.. | 0b1110 | 0b11000110 | 1  | 0x80008000... |

Better diagram showing the correct ordering of shamt (RB): a LUT2 is applied
to all locations marked in red using the first 4 bits of the immediate, and
a separate LUT2 applied to all locations in green using the upper 4 bits of
the immediate.

demo code [[openpower/sv/grevlut.py]]

```
lut2(imm, a, b):
    idx = b << 1 | a
    return imm[idx] # idx by LSB0 order

dorow(imm8, step_i, chunk_size):
    for j in 0 to 63:
        if (j & chunk_size) == 0
            imm = imm8[0..3]
        else
            imm = imm8[4..7]
        step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
    return step_o

uint64_t grevlut64(uint64_t RA, uint64_t RB, uint8 imm, bool iv)
{
    uint64_t x = 0x5555_5555_5555_5555;
    if (RA != 0) x = GPR(RA);
    if (iv) x = ~x;
    int shamt = RB & 63;
    for i in 0 to 6
        step = 1<<i
        if (shamt & step) x = dorow(imm, x, step)
    return x;
}
```

# grev

based on RV bitmanip

```
uint64_t grev64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
                       ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
                       ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                       ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
                       ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
                        ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
                        ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```
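As a quick illustration of the grev64 pseudocode above, a hedged Python
equivalent (a sketch only): each set bit of the shift amount swaps adjacent
bit-groups of the corresponding size, so shamt=63 is a full bit-reversal and
shamt=56 is a byte-reversal.

```python
# Software model of grev64: swap bit-groups for each set bit of shamt.

def grev64(x, shamt):
    masks = [0x5555555555555555, 0x3333333333333333, 0x0F0F0F0F0F0F0F0F,
             0x00FF00FF00FF00FF, 0x0000FFFF0000FFFF, 0x00000000FFFFFFFF]
    for i, m in enumerate(masks):
        if shamt & (1 << i):
            sh = 1 << i
            x = ((x & m) << sh) | ((x >> sh) & m)
    return x & 0xFFFFFFFFFFFFFFFF

assert grev64(0x0000000000000001, 63) == 0x8000000000000000  # full bit-reverse
assert grev64(0x0123456789ABCDEF, 56) == 0xEFCDAB8967452301  # byte swap
```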
# xperm

based on RV bitmanip.

RA contains a vector of indices to select parts of RB to be copied to RT.
The immediate-variant allows up to an 8 bit pattern (repeated) to be
targeted at different parts of RT.

```
uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
{
    uint_xlen_t r = 0;
    uint_xlen_t sz = 1LL << sz_log2;
    uint_xlen_t mask = (1LL << sz) - 1;
    uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
    for (int i = 0; i < XLEN; i += sz) {
        uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
        if (pos < XLEN)
            r |= ((RB >> pos) & mask) << i;
    }
    return r;
}

uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
{
    uint_xlen_t r = 0;
    uint_xlen_t sz = 1LL << sz_log2;
    uint_xlen_t mask = (1LL << sz) - 1;
    for (int i = 0; i < XLEN; i += sz) {
        uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
        if (pos < XLEN)
            r |= ((RB >> pos) & mask) << i;
    }
    return r;
}

uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
{  return xperm(RA, RB, 2); }
uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
{  return xperm(RA, RB, 3); }
uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
{  return xperm(RA, RB, 4); }
uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
{  return xperm(RA, RB, 5); }
```

# gorc

based on RV bitmanip

```
uint32_t gorc32(uint32_t RA, uint32_t RB)
{
    uint32_t x = RA;
    int shamt = RB & 31;
    if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
    if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
    if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
    if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
    return x;
}
uint64_t gorc64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
                        ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
                        ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                        ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
                        ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
                         ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
                         ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```

# Instructions for Carry-less Operations aka. Polynomials with coefficients in `GF(2)`

Carry-less addition/subtraction is simply XOR, so a `cladd` instruction is
not provided since the `xor[i]` instruction can be used instead.

These are operations on polynomials with coefficients in `GF(2)`, with the
polynomial's coefficients packed into integers with the following algorithm:

[[!inline pagenames="openpower/sv/bitmanip/pack_poly.py" raw="true" feeds="no" actions="yes"]]

## Carry-less Multiply Instructions

based on RV bitmanip; see:

*
*
*

They are worth adding as their own non-overwrite operations (in the same
pipeline).

### `clmul` Carry-less Multiply

[[!inline pagenames="openpower/sv/bitmanip/clmul.py" raw="true" feeds="no" actions="yes"]]

### `clmulh` Carry-less Multiply High

[[!inline pagenames="openpower/sv/bitmanip/clmulh.py" raw="true" feeds="no" actions="yes"]]
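The inlined `.py` files above are the normative pseudocode; purely as an
illustration of what carry-less multiplication means, here is a hedged
standalone Python sketch (not the specification).

```python
# Carry-less (XOR-accumulating) multiply: shift-and-add with XOR in place of
# addition, i.e. polynomial multiplication over GF(2).

def clmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a              # "add" without carries is XOR
        a <<= 1
        b >>= 1
    return r

# (x^2 + x + 1) * (x + 1) = x^3 + 1: the middle terms cancel carry-lessly
assert clmul(0b111, 0b11) == 0b1001
```

`clmulh` returns the upper XLEN bits of the same product, and `clmulr` is
the bit-reversed variant described below.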
### `clmulr` Carry-less Multiply (Reversed)

Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
bit-reversed inputs.

[[!inline pagenames="openpower/sv/bitmanip/clmulr.py" raw="true" feeds="no" actions="yes"]]

## `clmadd` Carry-less Multiply-Add

```
clmadd RT, RA, RB, RC
```

```
(RT) = clmul((RA), (RB)) ^ (RC)
```

## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)

```
cltmadd RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
(RT) = RC ^ clmul((RA), (RB))
(RS) = RA ^ RC
```

## `cldivrem` Carry-less Division and Remainder

`cldivrem` isn't an actual instruction, but is just used in the pseudo-code
for other instructions.

[[!inline pagenames="openpower/sv/bitmanip/cldivrem.py" raw="true" feeds="no" actions="yes"]]

## `cldiv` Carry-less Division

```
cldiv RT, RA, RB
```

```
n = (RA)
d = (RB)
q, r = cldivrem(n, d, width=XLEN)
(RT) = q
```

## `clrem` Carry-less Remainder

```
clrem RT, RA, RB
```

```
n = (RA)
d = (RB)
q, r = cldivrem(n, d, width=XLEN)
(RT) = r
```

# Instructions for Binary Galois Fields `GF(2^m)`

see:

*
*
*

Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
instruction is not provided since the `xor[i]` instruction can be used
instead.

## `GFBREDPOLY` SPR -- Reducing Polynomial

In order to save registers and to make operations orthogonal with standard
arithmetic, the reducing polynomial is stored in a dedicated SPR
`GFBREDPOLY`. This also allows hardware to pre-compute useful parameters
(such as the degree, or look-up tables) based on the reducing polynomial,
and store them alongside the SPR in hidden registers, only recomputing them
whenever the SPR is written to, rather than having to recompute those values
for every instruction.

Because Galois Fields require the reducing polynomial to be an irreducible
polynomial, that guarantees that any polynomial of `degree > 1` must have
the LSB set, since otherwise it would be divisible by the polynomial `x`,
making it reducible, making whatever we're working on no longer a Field.
Therefore, we can reuse the LSB to indicate `degree == XLEN`.

[[!inline pagenames="openpower/sv/bitmanip/decode_reducing_polynomial.py" raw="true" feeds="no" actions="yes"]]

## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`

unless this is an immediate op, `mtspr` is completely sufficient.

[[!inline pagenames="openpower/sv/bitmanip/gfbredpoly.py" raw="true" feeds="no" actions="yes"]]

## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication

```
gfbmul RT, RA, RB
```

[[!inline pagenames="openpower/sv/bitmanip/gfbmul.py" raw="true" feeds="no" actions="yes"]]

## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add

```
gfbmadd RT, RA, RB, RC
```

[[!inline pagenames="openpower/sv/bitmanip/gfbmadd.py" raw="true" feeds="no" actions="yes"]]
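As a hedged, standalone illustration (the inlined pseudocode above is
normative): multiplication in `GF(2^m)` is a carry-less multiply followed by
a carry-less remainder by the full reducing polynomial, and `gfbmadd`
additionally XORs in the third operand. The helper names below and the use
of the AES polynomial as the `GFBREDPOLY` value are assumptions made purely
for the example.

```python
# Sketch of gfbmul/gfbmadd semantics via carry-less multiply and remainder.

def cl_mul(a, b):                   # carry-less multiply
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def cl_rem(n, d):                   # carry-less remainder (d = full reducing poly)
    while n.bit_length() >= d.bit_length():
        n ^= d << (n.bit_length() - d.bit_length())
    return n

def gfbmul(a, b, redpoly):
    return cl_rem(cl_mul(a, b), redpoly)

def gfbmadd(a, b, c, redpoly):
    return cl_rem(cl_mul(a, b) ^ c, redpoly)

AES_POLY = 0b100011011              # x^8 + x^4 + x^3 + x + 1 (example value)
assert gfbmul(0x57, 0x83, AES_POLY) == 0xC1        # FIPS-197 worked example
assert gfbmadd(0x57, 0x83, 0xFF, AES_POLY) == 0xC1 ^ 0xFF
```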
## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)

```
gfbtmadd RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
(RT) = gfbmadd((RA), (RB), (RC))
(RS) = RA ^ RC
```

## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse

```
gfbinv RT, RA
```

[[!inline pagenames="openpower/sv/bitmanip/gfbinv.py" raw="true" feeds="no" actions="yes"]]

# Instructions for Prime Galois Fields `GF(p)`

## Helper algorithms

```python
def int_to_gfp(int_value, prime):
    return int_value % prime  # follows Python remainder semantics
```

## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions

## `gfpadd` Prime Galois Field `GF(p)` Addition

```
gfpadd RT, RA, RB
```

```
(RT) = int_to_gfp((RA) + (RB), GFPRIME)
```

the addition happens on infinite-precision integers

## `gfpsub` Prime Galois Field `GF(p)` Subtraction

```
gfpsub RT, RA, RB
```

```
(RT) = int_to_gfp((RA) - (RB), GFPRIME)
```

the subtraction happens on infinite-precision integers

## `gfpmul` Prime Galois Field `GF(p)` Multiplication

```
gfpmul RT, RA, RB
```

```
(RT) = int_to_gfp((RA) * (RB), GFPRIME)
```

the multiplication happens on infinite-precision integers

## `gfpinv` Prime Galois Field `GF(p)` Invert

```
gfpinv RT, RA
```

Some potential hardware implementations are found in:

```
(RT) = gfpinv((RA), GFPRIME)
```

the multiplication happens on infinite-precision integers

## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add

```
gfpmadd RT, RA, RB, RC
```

```
(RT) = int_to_gfp((RA) * (RB) + (RC), GFPRIME)
```

the multiplication and addition happen on infinite-precision integers

## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract

```
gfpmsub RT, RA, RB, RC
```

```
(RT) = int_to_gfp((RA) * (RB) - (RC), GFPRIME)
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed

```
gfpmsubr RT, RA, RB, RC
```

```
(RT) = int_to_gfp((RC) - (RA) * (RB), GFPRIME)
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

```
gfpmaddsubr RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
product = (RA) * (RB)
term = (RC)
(RT) = int_to_gfp(product + term, GFPRIME)
(RS) = int_to_gfp(term - product, GFPRIME)
```

the multiplication, addition, and subtraction happen on infinite-precision
integers
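A hedged Python sketch of the `GF(p)` semantics above, reusing the
`int_to_gfp` helper; modelling `GFPRIME` as a plain value and picking a
small example prime are assumptions made for illustration only.

```python
# Software model of gfpmadd and gfpmaddsubr on arbitrary-precision integers.

def int_to_gfp(int_value, prime):
    return int_value % prime        # follows Python remainder semantics

def gfpmadd(ra, rb, rc, prime):
    return int_to_gfp(ra * rb + rc, prime)

def gfpmaddsubr(ra, rb, rc, prime):
    product = ra * rb
    term = rc
    rt = int_to_gfp(product + term, prime)
    rs = int_to_gfp(term - product, prime)
    return rt, rs

GFPRIME = 13                        # example prime only
assert gfpmadd(7, 8, 5, GFPRIME) == 9           # (7*8 + 5) mod 13
assert gfpmaddsubr(7, 8, 5, GFPRIME) == (9, 1)  # (61 mod 13, -51 mod 13)
```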
## Twin Butterfly (Cooley-Tukey) Mul-add-sub

used in combination with SV FFT REMAP to perform a full NTT in-place.
possible by having 3-in 2-out, to avoid the need for a temp register. RS is
written to as well as RT.

    gffmadd  RT,RA,RC,RB (Rc=0)
    gffmadd. RT,RA,RC,RB (Rc=1)

Pseudo-code:

    RT <- GFADD(GFMUL(RA, RC), RB)
    RS <- GFADD(GFMUL(RA, RC), RB)

## Multiply

with the modulo and degree held in an SPR, multiply can be encoded
identically to a standard integer operation:

    RS = GFMUL(RA, RB)

| 0.5|6.10|11.15|16.20|21.25| 26..30 |31|
| -- | -- | --- | --- | --- | ------ |--|
| NN | RT | RA  | RB  |11000| 01110  |Rc|

```
from functools import reduce

def gf_degree(a) :
    res = 0
    a >>= 1
    while (a != 0) :
        a >>= 1;
        res += 1;
    return res

# constants used in the multGF2 function
mask1 = mask2 = polyred = None

def setGF2(irPoly):
    """Define parameters of binary finite field GF(2^m)/g(x)
       - irPoly: coefficients of irreducible polynomial g(x)
    """
    # degree: extension degree of binary field
    degree = gf_degree(irPoly)

    def i2P(sInt):
        """Convert an integer into a polynomial"""
        return [(sInt >> i) & 1
                for i in reversed(range(sInt.bit_length()))]

    global mask1, mask2, polyred
    mask1 = mask2 = 1 << degree
    mask2 -= 1
    polyred = reduce(lambda x, y: (x << 1) + y, i2P(irPoly)[1:])

def multGF2(p1, p2):
    """Multiply two polynomials in GF(2^m)/g(x)"""
    p = 0
    while p2:
        # standard long-multiplication: check LSB and add
        if p2 & 1:
            p ^= p1
        p1 <<= 1
        # standard modulo: check MSB and add polynomial
        if p1 & mask1:
            p1 ^= polyred
        p2 >>= 1
    return p & mask2

if __name__ == "__main__":

    # Define binary field GF(2^3)/x^3 + x + 1
    setGF2(0b1011)  # degree 3

    # Evaluate the product (x^2 + x + 1)(x^2 + 1)
    print("{:02x}".format(multGF2(0b111, 0b101)))

    # Define binary field GF(2^8)/x^8 + x^4 + x^3 + x + 1
    # (used in the Advanced Encryption Standard-AES)
    setGF2(0b100011011)  # degree 8

    # Evaluate the product (x^7)(x^7 + x + 1)
    print("{:02x}".format(multGF2(0b10000000, 0b10000011)))
```
## carryless Twin Butterfly (Cooley-Tukey) Mul-add-sub

used in combination with SV FFT REMAP to perform a full NTT in-place.
possible by having 3-in 2-out, to avoid the need for a temp register. RS is
written to as well as RT.

    clfmadd  RT,RA,RC,RB (Rc=0)
    clfmadd. RT,RA,RC,RB (Rc=1)

Pseudo-code:

    RT <- CLMUL(RA, RC) ^ RB
    RS <- CLMUL(RA, RC) ^ RB

# bitmatrix

```
uint64_t bmatflip(uint64_t RA)
{
    uint64_t x = RA;
    x = shfl64(x, 31);
    x = shfl64(x, 31);
    x = shfl64(x, 31);
    return x;
}

uint64_t bmatxor(uint64_t RA, uint64_t RB)
{
    // transpose of RB
    uint64_t RBt = bmatflip(RB);
    uint8_t u[8]; // rows of RA
    uint8_t v[8]; // cols of RB
    for (int i = 0; i < 8; i++) {
        u[i] = RA >> (i*8);
        v[i] = RBt >> (i*8);
    }
    uint64_t x = 0;
    for (int i = 0; i < 64; i++) {
        if (pcnt(u[i / 8] & v[i % 8]) & 1)
            x |= 1LL << i;
    }
    return x;
}

uint64_t bmator(uint64_t RA, uint64_t RB)
{
    // transpose of RB
    uint64_t RBt = bmatflip(RB);
    uint8_t u[8]; // rows of RA
    uint8_t v[8]; // cols of RB
    for (int i = 0; i < 8; i++) {
        u[i] = RA >> (i*8);
        v[i] = RBt >> (i*8);
    }
    uint64_t x = 0;
    for (int i = 0; i < 64; i++) {
        if ((u[i / 8] & v[i % 8]) != 0)
            x |= 1LL << i;
    }
    return x;
}
```

# Already in POWER ISA

## count leading/trailing zeros with mask

in v3.1 p105

```
count = 0
do i = 0 to 63
    if((RB)i=1) then do
        if((RS)i=1) then break
        end
    end
    count ← count + 1
RA ← EXTZ64(count)
```

## bit deposit

vpdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1
p106

    do while(m < 64)
        if VSR[VRB+32].dword[i].bit[63-m]=1 then do
            result = VSR[VRA+32].dword[i].bit[63-k]
            VSR[VRT+32].dword[i].bit[63-m] = result
            k = k + 1
        m = m + 1

```
uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> j) & 1)
                r |= uint_xlen_t(1) << i;
            j++;
        }
    return r;
}
```

# bit extract

other way round: identical to RV bext, found in v3.1 p196

```
uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> i) & 1)
                r |= uint_xlen_t(1) << j;
            j++;
        }
    return r;
}
```

# centrifuge

found in v3.1 p106 so not to be added here

```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if((RB)i=0) then do
        resultptr0 = (RS)i
        end
        ptr0 = ptr0 + 1
    if((RB)63-i==1) then do
        result63-ptr1 = (RS)63-i
        end
        ptr1 = ptr1 + 1
RA = result
```

# bit to byte permute

similar to the bit-matrix permute in RV bitmanip, which has XOR and OR
variants; these perform a transpose.

    do j = 0 to 7
        do k = 0 to 7
            b = VSR[VRB+32].dword[i].byte[k].bit[j]
            VSR[VRT+32].dword[i].byte[j].bit[k] = b
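Since both bmatflip and the bit-to-byte permute above amount to an 8x8
bit-matrix transpose of a 64-bit value (bit-numbering conventions aside),
here is a hedged Python sketch of that transpose (an illustration only; the
function name is made up).

```python
# Transpose a 64-bit value viewed as an 8x8 bit matrix: bit j of source byte
# k becomes bit k of destination byte j.

def bit_transpose_8x8(x):
    r = 0
    for j in range(8):              # bit index within each source byte
        for k in range(8):          # source byte index
            bit = (x >> (k * 8 + j)) & 1
            r |= bit << (j * 8 + k)
    return r

# transposing twice returns the original value
v = 0x0123456789ABCDEF
assert bit_transpose_8x8(bit_transpose_8x8(v)) == v
```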