* ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
* grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
* remove Rc=1 from ternlog due to conflicts in encoding as well
  as saving space <https://bugs.libre-soc.org/show_bug.cgi?id=753#c5>
* GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
pseudocode: <https://libre-soc.org/openpower/isa/bitmanip/>
This extension amalgamates bitmanipulation primitives from many sources, including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX. Vectorisation and SIMD are removed: these are straight scalar (element) operations, making them suitable for embedded applications. Vectorisation Context is provided by [[openpower/sv]].

When combined with SV, scalar variants of bitmanip operations found in VSX are added so that VSX may be retired as "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of opcodes, requires 128-bit pathways, and is wholly unsuited to low-power or embedded scenarios.

ternlogv is experimental and is the only operation that may be considered "Packed SIMD". It is added as a variant of the already well-justified ternlog operation (done in AVX-512 as an immediate-only instruction) "because it looks fun". As it is based on the LUT4 concept it will allow accelerated emulation of FPGAs. Other ISA vendors are buying FPGA companies to achieve similar objectives.

General-purpose Galois Field 2^M operations are added so as to avoid huge custom opcode proliferation across many areas of Computer Science. However, for convenience and also to avoid setup costs, some of the more common operations (clmul, crc32) are also added. The expectation is that these operations would all be covered by the same pipeline.

Note that there are brownfield spaces below that could incorporate some of the set-before-first and other scalar operations listed in [[sv/vector_ops]] and the [[sv/av_opcodes]], as well as [[sv/setvl]].
* <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
* <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
Two major opcodes are needed.

ternlog has its own major opcode.

| ------ |--| --------- |

2nd major opcode for other bitmanip: minor opcode allocation

| ------ |--| --------- |
| 011    |  | gf/cl madd* |
| dest | src1 | subop | op       |
| ---- | ---- | ----- | -------- |
| RT   | RA   | ..    | bmatflip |

| dest | src1 | src2 | subop | op       |
| ---- | ---- | ---- | ----- | -------- |
| RT   | RA   | RB   | or    | bmatflip |
| RT   | RA   | RB   | xor   | bmatflip |
| RT   | RA   | RB   |       | grev     |
| RT   | RA   | RB   |       | clmul*   |
| RT   | RA   | RB   |       | gorc     |
| RT   | RA   | RB   | shuf  | shuffle  |
| RT   | RA   | RB   | unshuf| shuffle  |
| RT   | RA   | RB   | width | xperm    |
| RT   | RA   | RB   | type  | minmax   |
| RT   | RA   | RB   |       | av abs avgadd |
| RT   | RA   | RB   | type  | vmask ops |
TODO: convert all instructions to use RT and not RS
| 0.5|6.10|11.15|16.20 |21..25 | 26....30 |31| name |
| -- | -- | --- | --- | ----- | -------- |--| ------ |
| NN | RT | RA | RB | | 00 |0 | rsvd |
| NN | RT | RA | RB | im0-4 | im5-7 00 |1 | grevlog |
| NN | RT | RA | s0-4 | im0-4 | im5-7 01 |s5| grevlogi |
| NN | RT | RA | RB | RC | mode 010 |Rc| bitmask* |
| NN | RS | RA | RB | RC | 00 011 |0 | gfbmadd |
| NN | RS | RA | RB | RC | 00 011 |1 | gfbmaddsub |
| NN | RS | RA | RB | RC | 01 011 |0 | clmadd |
| NN | RS | RA | RB | RC | 01 011 |1 | clmaddsub |
| NN | RS | RA | RB | RC | 10 011 |0 | gfpmadd |
| NN | RS | RA | RB | RC | 10 011 |1 | gfpmaddsub |
| NN | RS | RA | RB | RC | 11 011 | | rsvd |
| NN | RT | RA | RB | sh0-4 | sh5 1 111 |Rc| bmrevi |
ops (note that av avg and abs as well as vec scalar mask

TODO: convert from RA, RB, and RC to correct field names of RT, RA,
and RB, and double check that instructions didn't need 3 inputs.
| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
| NN | RT | RA | RB | 0 | | 0000 110 |Rc| rsvd |
| NN | RT | RA | RB | 1 | itype | 0000 110 |Rc| xperm |
| NN | RA | RB | RC | 0 | itype | 0100 110 |Rc| minmax |
| NN | RA | RB | RC | 1 | 00 | 0100 110 |Rc| av avgadd |
| NN | RA | RB | RC | 1 | 01 | 0100 110 |Rc| av abs |
| NN | RA | RB | | 1 | 10 | 0100 110 |Rc| rsvd |
| NN | RA | RB | | 1 | 11 | 0100 110 |Rc| rsvd |
| NN | RA | RB | sh | SH | itype | 1000 110 |Rc| bmopsi |
| NN | RT | RA | RB | | | 1100 110 |Rc| srsvd |
| NN | RT | RA | RB | 1 | 00 | 0001 110 |Rc| cldiv |
| NN | RT | RA | RB | 1 | 01 | 0001 110 |Rc| clmod |
| NN | RT | RA | RB | 1 | 10 | 0001 110 |Rc| |
| NN | RT | RA | RB | 1 | 11 | 0001 110 |Rc| clinv |
| NN | RA | RB | RC | 0 | 00 | 0001 110 |Rc| vec sbfm |
| NN | RA | RB | RC | 0 | 01 | 0001 110 |Rc| vec sofm |
| NN | RA | RB | RC | 0 | 10 | 0001 110 |Rc| vec sifm |
| NN | RA | RB | RC | 0 | 11 | 0001 110 |Rc| vec cprop |
| NN | RA | RB | | 0 | | 0101 110 |Rc| rsvd |
| NN | RA | RB | RC | 0 | 00 | 0010 110 |Rc| gorc |
| NN | RA | RB | sh | SH | 00 | 1010 110 |Rc| gorci |
| NN | RA | RB | RC | 0 | 00 | 0110 110 |Rc| gorcw |
| NN | RA | RB | sh | 0 | 00 | 1110 110 |Rc| gorcwi |
| NN | RA | RB | RC | 1 | 00 | 1110 110 |Rc| bmator |
| NN | RA | RB | RC | 0 | 01 | 0010 110 |Rc| grev |
| NN | RA | RB | RC | 1 | 01 | 0010 110 |Rc| clmul |
| NN | RA | RB | sh | SH | 01 | 1010 110 |Rc| grevi |
| NN | RA | RB | RC | 0 | 01 | 0110 110 |Rc| grevw |
| NN | RA | RB | sh | 0 | 01 | 1110 110 |Rc| grevwi |
| NN | RA | RB | RC | 1 | 01 | 1110 110 |Rc| bmatxor |
| NN | RA | RB | RC | 0 | 10 | 0010 110 |Rc| shfl |
| NN | RA | RB | sh | SH | 10 | 1010 110 |Rc| shfli |
| NN | RA | RB | RC | 0 | 10 | 0110 110 |Rc| shflw |
| NN | RA | RB | RC | | 10 | 1110 110 |Rc| rsvd |
| NN | RA | RB | RC | 0 | 11 | 1110 110 |Rc| clmulr |
| NN | RA | RB | RC | 1 | 11 | 1110 110 |Rc| clmulh |
| NN | | | | | | --11 110 |Rc| setvl |
Similar to FPGA LUTs: for every bit, perform a lookup into a table using an 8-bit immediate, or in another register.

Like the x86 AVX512F [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq) instructions.

TODO: if/when we get more encoding space, add the Rc=1 option back to ternlogi, for consistency with OpenPOWER base logical instructions (and./xor./or./etc.). <https://bugs.libre-soc.org/show_bug.cgi?id=745#c56>
| 0.5|6.10|11.15|16.20| 21..25| 26..30 |31|
| -- | -- | --- | --- | ----- | -------- |--|
| NN | RT | RA | RB | im0-4 | im5-7 00 |Rc|

    idx = c << 2 | b << 1 | a
    return imm[idx] # idx by LSB0 order

    RT[i] = lut3(imm, RB[i], RA[i], RT[i])
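The pseudocode above can be sketched as a runnable bit-level model. This is a minimal Python sketch, not the normative definition: the names `lut3`/`ternlogi` and the `xlen` parameter are illustrative, with the operand ordering taken from the pseudocode above.

```python
def lut3(imm, a, b, c):
    # one result bit: the three input bits form an index into
    # the 8-bit immediate, in LSB0 order
    idx = c << 2 | b << 1 | a
    return (imm >> idx) & 1

def ternlogi(rt, ra, rb, imm, xlen=64):
    # apply the 3-input LUT independently at every bit position,
    # mirroring RT[i] = lut3(imm, RB[i], RA[i], RT[i])
    result = 0
    for i in range(xlen):
        bit = lut3(imm, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1)
        result |= bit << i
    return result
```

As a usage example, `imm=0xE8` (bits set at indices 3, 5, 6, 7) computes a bitwise majority of the three operands, and `imm=0x96` computes a three-way XOR.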
Bits 21..22 may be used to specify a mode, such as treating the whole integer as zero/nonzero and putting 1/0 in the result, rather than performing a bitwise test.

A 5-operand variant becomes more along the lines of an FPGA; this is very expensive (4-in, 1-out) and is not recommended.
| 0.5|6.10|11.15|16.20|21.25| 26...30 |31|
| -- | -- | --- | --- | --- | -------- |--|
| NN | RT | RA | RB | RC | mode 01 |1 |

    j = (i//8)*8 # 0,8,16,24,..,56
    RT[i] = lut3(lookup, RT[i], RA[i], RB[i])
mode (3 bit) may be used to do inversion of ordering, similar to carryless mul.

Also, another possible variant involving swizzle-like selection and masking; this only requires 2 64-bit registers (RA, RS) and

Note however that unless XLEN matches sz, this instruction is a Read-Modify-Write: RS must be read as a second operand and all unmodified bits preserved. SVP64 may provide a limited alternative destination for RS distinct from RS-as-source, but again all unmodified bits must still be copied.
| 0.5|6.10|11.15| 16.23 |24.27 | 28.30 |31|
| -- | -- | --- | ----- | ---- | ----- |--|
| NN | RS | RA | idx0-3| mask | sz 01 |0 |

    SZ = (1+sz) * 8 # 8 or 16
    raoff = MIN(XLEN, idx0 * SZ)
    rboff = MIN(XLEN, idx1 * SZ)
    rcoff = MIN(XLEN, idx2 * SZ)
    imoff = MIN(XLEN, idx3 * SZ)
    imm = RA[imoff:imoff+SZ]
    for i in range(MIN(XLEN, SZ)):
        res = lut3(imm, ra, rb, rc)
        for j in range(MIN(XLEN//8, 4)):
            if mask[j]: RS[i+j*SZ] = res
Another mode selection would be CRs not Ints.

| 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
| -- | -- | --- | --- | --- |-----|----- | -----|--|
| NN | BT | BA | BB | BC |m0-3 | imm | 10 |m4|

    if not mask[i]: continue
    crregs[BT][i] = lut3(imm,
Signed and unsigned min/max for integer. This is sort-of partly synthesiseable in [[sv/svp64]] with pred-result as long as the dest reg is one of the sources, but not both signed and unsigned. When the dest is also one of the sources and the mv fails due to the CR bit-test failing, this will only overwrite the dest where the src is greater (or less).

Signed/unsigned min/max gives more flexibility.
    uint_xlen_t min(uint_xlen_t rs1, uint_xlen_t rs2)
    { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
    }

    uint_xlen_t max(uint_xlen_t rs1, uint_xlen_t rs2)
    { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
    }

    uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
    { return rs1 < rs2 ? rs1 : rs2;
    }

    uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
    { return rs1 > rs2 ? rs1 : rs2;
    }
Based on RV bitmanip; covered by ternlog bitops.

    uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
        return (RA & RB) | (RC & ~RB);
    }
Based on RV bitmanip single-bit set, instruction format similar to shift [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask rldicl, but only the immediate version). However bitmask-invert is not, and set/clr are not covered, although they can use the same Shift ALU.

The bmext (RB) version is not the same as rldicl, because bmext is a right shift by RC where rldicl is a left rotate. For the immediate version this does not matter, so a bmexti is not required. For bmrev however there is no direct equivalent, and consequently a bmrevi is required.

bmset (register for mask amount) is particularly useful for creating predicate masks where the length is a dynamic runtime quantity. bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask" in a single instruction, without needing to initialise or depend on any other registers.
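The run-of-ones behaviour can be sketched in Python. This is a hypothetical model (parameter names and the `xlen` default are illustrative), following the `RS | (mask << shamt)` pseudocode given further below with the mask taken to be a run of `bmlen` ones:

```python
def bmset(rs, rb, bmlen, xlen=64):
    # OR a run of `bmlen` ones, shifted by (RB & (XLEN-1)), into RS
    shamt = rb & (xlen - 1)
    mask = (1 << bmlen) - 1
    return (rs | (mask << shamt)) & ((1 << xlen) - 1)
```

With all registers zero, `bmset(0, 0, bmlen)` yields a predicate of `bmlen` ones in one step.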
| 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
| -- | -- | --- | --- | --- | ------- |--| ----- |
| NN | RS | RA | RB | RC | mode 010 |Rc| bm* |

Immediate-variant is an overwrite form:

| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
| NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm*i |
    uint_xlen_t bmset(RS, RB, sh)
        int shamt = RB & (XLEN - 1);
        return RS | (mask << shamt);

    uint_xlen_t bmclr(RS, RB, sh)
        int shamt = RB & (XLEN - 1);
        return RS & ~(mask << shamt);

    uint_xlen_t bminv(RS, RB, sh)
        int shamt = RB & (XLEN - 1);
        return RS ^ (mask << shamt);

    uint_xlen_t bmext(RS, RB, sh)
        int shamt = RB & (XLEN - 1);
        return mask & (RS >> shamt);
Bitmask extract with reverse. Can be done by bit-inverting all of RB and getting bits of RB from the opposite end.

When RA is zero, no shift occurs. This makes bmextrev useful for simply reversing all bits of a register.

    rev[0:msb] = rb[msb:0];

    uint_xlen_t bmextrev(RA, RB, sh)
        if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
        shamt = (XLEN-1)-shamt; # shift other end
        bra = bitreverse(RB)    # swap LSB-MSB
        return mask & (bra >> shamt);
| 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name |
| -- | -- | --- | --- | --- | ------- |--| ------ |
| NN | RT | RA | RB | sh | 1 011 |Rc| bmrevi |
Generalised reverse combined with a pair of LUT2s, and allowing zero when RA=0, provides a wide range of instructions and a means to set regular 64-bit patterns in one

the two LUT2s are applied left-half (when not swapping) and right-half (when swapping) so as to allow a wider

grevlut should be arranged so as to produce the constants needed to put into bext (bitextract), so as in turn to be able to emulate x86 pmovmask instructions <https://www.felixcloutier.com/x86/pmovmskb>

<img src="/openpower/sv/grevlut2x2.jpg" width=700 />
    return imm[idx] # idx by LSB0 order

    dorow(imm8, step_i, chunk_size):
        if (j & chunk_size) == 0
            step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])

    uint64_t grevlut64(uint64_t RA, uint64_t RB, uint8 imm)
        if (shamt & step) x = dorow(imm, x, step)
Based on RV bitmanip, this is also known as a butterfly network. However, where a butterfly network allows setting of every crossbar in every row and every column, generalised-reverse (grev) only allows a per-row decision: every entry in the same row must either switch or not switch.

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
    uint64_t grev64(uint64_t RA, uint64_t RB)
        if (shamt & 1)  x = ((x & 0x5555555555555555LL) <<  1) |
                            ((x & 0xAAAAAAAAAAAAAAAALL) >>  1);
        if (shamt & 2)  x = ((x & 0x3333333333333333LL) <<  2) |
                            ((x & 0xCCCCCCCCCCCCCCCCLL) >>  2);
        if (shamt & 4)  x = ((x & 0x0F0F0F0F0F0F0F0FLL) <<  4) |
                            ((x & 0xF0F0F0F0F0F0F0F0LL) >>  4);
        if (shamt & 8)  x = ((x & 0x00FF00FF00FF00FFLL) <<  8) |
                            ((x & 0xFF00FF00FF00FF00LL) >>  8);
        if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
                            ((x & 0xFFFF0000FFFF0000LL) >> 16);
        if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
                            ((x & 0xFFFFFFFF00000000LL) >> 32);
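The swap-network above transcribes directly to Python. This is a sketch for experimentation, assuming `x = RA` and `shamt = RB & 63` (the usual grev setup); with `shamt = 63` all six stages fire and the result is a full bit-reversal.

```python
def grev64(ra, rb):
    # generalised reverse: each set bit of shamt swaps adjacent
    # groups of 1, 2, 4, 8, 16 or 32 bits
    x = ra & 0xFFFFFFFFFFFFFFFF
    shamt = rb & 63
    for step, m in [(1,  0x5555555555555555),
                    (2,  0x3333333333333333),
                    (4,  0x0F0F0F0F0F0F0F0F),
                    (8,  0x00FF00FF00FF00FF),
                    (16, 0x0000FFFF0000FFFF),
                    (32, 0x00000000FFFFFFFF)]:
        if shamt & step:
            x = ((x & m) << step) | ((x >> step) & m)
    return x
```

Note that grev is its own inverse for any given shamt, which makes it easy to sanity-check.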
# shuffle / unshuffle

    uint32_t shfl32(uint32_t RA, uint32_t RB)
        if (shamt & 8) x = shuffle32_stage(x, 0x00ff0000, 0x0000ff00, 8);
        if (shamt & 4) x = shuffle32_stage(x, 0x0f000f00, 0x00f000f0, 4);
        if (shamt & 2) x = shuffle32_stage(x, 0x30303030, 0x0c0c0c0c, 2);
        if (shamt & 1) x = shuffle32_stage(x, 0x44444444, 0x22222222, 1);

    uint32_t unshfl32(uint32_t RA, uint32_t RB)
        if (shamt & 1) x = shuffle32_stage(x, 0x44444444, 0x22222222, 1);
        if (shamt & 2) x = shuffle32_stage(x, 0x30303030, 0x0c0c0c0c, 2);
        if (shamt & 4) x = shuffle32_stage(x, 0x0f000f00, 0x00f000f0, 4);
        if (shamt & 8) x = shuffle32_stage(x, 0x00ff0000, 0x0000ff00, 8);
    uint64_t shuffle64_stage(uint64_t src, uint64_t maskL, uint64_t maskR, int N)
    {
        uint64_t x = src & ~(maskL | maskR);
        x |= ((src << N) & maskL) | ((src >> N) & maskR);
        return x;
    }

    uint64_t shfl64(uint64_t RA, uint64_t RB)
        if (shamt & 16) x = shuffle64_stage(x, 0x0000ffff00000000LL,
                                               0x00000000ffff0000LL, 16);
        if (shamt &  8) x = shuffle64_stage(x, 0x00ff000000ff0000LL,
                                               0x0000ff000000ff00LL, 8);
        if (shamt &  4) x = shuffle64_stage(x, 0x0f000f000f000f00LL,
                                               0x00f000f000f000f0LL, 4);
        if (shamt &  2) x = shuffle64_stage(x, 0x3030303030303030LL,
                                               0x0c0c0c0c0c0c0c0cLL, 2);
        if (shamt &  1) x = shuffle64_stage(x, 0x4444444444444444LL,
                                               0x2222222222222222LL, 1);

    uint64_t unshfl64(uint64_t RA, uint64_t RB)
        if (shamt &  1) x = shuffle64_stage(x, 0x4444444444444444LL,
                                               0x2222222222222222LL, 1);
        if (shamt &  2) x = shuffle64_stage(x, 0x3030303030303030LL,
                                               0x0c0c0c0c0c0c0c0cLL, 2);
        if (shamt &  4) x = shuffle64_stage(x, 0x0f000f000f000f00LL,
                                               0x00f000f000f000f0LL, 4);
        if (shamt &  8) x = shuffle64_stage(x, 0x00ff000000ff0000LL,
                                               0x0000ff000000ff00LL, 8);
        if (shamt & 16) x = shuffle64_stage(x, 0x0000ffff00000000LL,
                                               0x00000000ffff0000LL, 16);
Based on RV bitmanip.

RB contains a vector of indices to select parts of RA to be
    uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
        uint_xlen_t sz = 1LL << sz_log2;
        uint_xlen_t mask = (1LL << sz) - 1;
        for (int i = 0; i < XLEN; i += sz) {
            uint_xlen_t pos = ((RB >> i) & mask) << sz_log2;
            r |= ((RA >> pos) & mask) << i;
        }

    uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 2); }
    uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 3); }
    uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 4); }
    uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 5); }
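The field-permute loop can be modelled in Python for quick experimentation. This is a sketch (the `xlen` parameter is illustrative); out-of-range indices select past the end of RA and therefore contribute zero, matching the C pseudocode:

```python
def xperm(ra, rb, sz_log2, xlen=64):
    # each sz-bit field of RB is an index selecting an
    # sz-bit field of RA; indices past XLEN yield zero
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, xlen, sz):
        pos = ((rb >> i) & mask) << sz_log2
        if pos < xlen:
            r |= ((ra >> pos) & mask) << i
    return r
```

For example, with `sz_log2=3` (byte fields, as in xperm_b), RB acts as a byte-shuffle control word over RA.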
    uint32_t gorc32(uint32_t RA, uint32_t RB)
        if (shamt & 1)  x |= ((x & 0x55555555) << 1)  | ((x & 0xAAAAAAAA) >> 1);
        if (shamt & 2)  x |= ((x & 0x33333333) << 2)  | ((x & 0xCCCCCCCC) >> 2);
        if (shamt & 4)  x |= ((x & 0x0F0F0F0F) << 4)  | ((x & 0xF0F0F0F0) >> 4);
        if (shamt & 8)  x |= ((x & 0x00FF00FF) << 8)  | ((x & 0xFF00FF00) >> 8);
        if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);

    uint64_t gorc64(uint64_t RA, uint64_t RB)
        if (shamt & 1)  x |= ((x & 0x5555555555555555LL) <<  1) |
                             ((x & 0xAAAAAAAAAAAAAAAALL) >>  1);
        if (shamt & 2)  x |= ((x & 0x3333333333333333LL) <<  2) |
                             ((x & 0xCCCCCCCCCCCCCCCCLL) >>  2);
        if (shamt & 4)  x |= ((x & 0x0F0F0F0F0F0F0F0FLL) <<  4) |
                             ((x & 0xF0F0F0F0F0F0F0F0LL) >>  4);
        if (shamt & 8)  x |= ((x & 0x00FF00FF00FF00FFLL) <<  8) |
                             ((x & 0xFF00FF00FF00FF00LL) >>  8);
        if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
                             ((x & 0xFFFF0000FFFF0000LL) >> 16);
        if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
                             ((x & 0xFFFFFFFF00000000LL) >> 32);
# Instructions for Carry-less Operations aka. Polynomials with coefficients in `GF(2)`

Carry-less addition/subtraction is simply XOR, so a `cladd` instruction is not provided, since the `xor[i]` instruction can be used instead.

These are operations on polynomials with coefficients in `GF(2)`, with the polynomial's coefficients packed into integers with the following algorithm:
    # (function names illustrative; reconstructed from the fragments)
    def poly_to_int(poly):
        """`poly` is a list where `poly[i]` is the coefficient for `x ** i`"""
        retval = 0
        for i, v in enumerate(poly):
            retval |= (v & 1) << i
        return retval

    def int_to_poly(v):
        """returns a list `poly`, where `poly[i]` is the coefficient for `x ** i`."""
        return [(v >> i) & 1 for i in range(v.bit_length())]
## Carry-less Multiply Instructions

see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
<https://www.felixcloutier.com/x86/pclmulqdq> and
<https://en.m.wikipedia.org/wiki/Carry-less_product>

They are worth adding as their own non-overwrite operations
(in the same pipeline).
### `clmul` Carry-less Multiply

    uint_xlen_t clmul(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0; i < XLEN; i++)

### `clmulh` Carry-less Multiply High

    uint_xlen_t clmulh(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 1; i < XLEN; i++)

### `clmulr` Carry-less Multiply (Reversed)

Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on

    uint_xlen_t clmulr(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0; i < XLEN; i++)
            x ^= RA >> (XLEN-i-1);
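A complete carry-less multiply is short enough to model in full. This Python sketch (the `xlen` parameter is illustrative) XORs together a shifted copy of RA for each set bit of RB, truncating to XLEN bits as the scalar instruction would:

```python
def clmul(ra, rb, xlen=64):
    # carry-less multiply: like schoolbook long multiplication,
    # but partial products are combined with XOR instead of ADD
    x = 0
    for i in range(xlen):
        if (rb >> i) & 1:
            x ^= ra << i
    return x & ((1 << xlen) - 1)
```

For example, multiplying the polynomials `x^2+x+1` (0b111) and `x^2+1` (0b101) gives `x^4+x^3+x+1` (0b11011).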
## `clmadd` Carry-less Multiply-Add

    clmadd RT, RA, RB, RC

    (RT) = clmul((RA), (RB)) ^ (RC)

## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)

    cltmadd RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    temp = clmul((RA), (RB)) ^ (RC)

## `cldiv` Carry-less Division

TODO: decide what happens on division by zero

    (RT) = cldiv((RA), (RB))

## `clrem` Carry-less Remainder

TODO: decide what happens on division by zero

    (RT) = clrem((RA), (RB))
# Instructions for Binary Galois Fields `GF(2^m)`

* <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
* <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
* <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>

Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd` instruction is not provided, since the `xor[i]` instruction can be used instead.
## `GFBREDPOLY` SPR -- Reducing Polynomial

In order to save registers and to make operations orthogonal with standard arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`. This also allows hardware to pre-compute useful parameters (such as the degree, or look-up tables) based on the reducing polynomial, and store them alongside the SPR in hidden registers, only recomputing them whenever the SPR is written to, rather than having to recompute those values for every

Because Galois Fields require the reducing polynomial to be irreducible, any reducing polynomial of `degree > 1` must have the LSB set: otherwise it would be divisible by the polynomial `x`, making it reducible, and whatever we're working on would no longer be a Field. Therefore, we can reuse the LSB to indicate `degree == XLEN`.
    def decode_reducing_polynomial(GFBREDPOLY, XLEN):
        """returns the decoded coefficient list in LSB to MSB order,
        len(retval) == degree + 1"""
        v = GFBREDPOLY & ((1 << XLEN) - 1) # mask to XLEN bits
        if v == 0 or v == 2: # GF(2)
            return [0, 1] # degree = 1, poly = x
        degree = floor_log2(v)

        # all reducing polynomials of degree > 1 must have the LSB set,
        # because they must be irreducible polynomials (meaning they
        # can't be factored); if the LSB was clear, then they would
        # have `x` as a factor. Therefore, we can reuse the LSB clear
        # to instead mean the polynomial has degree XLEN.

        v |= 1 # LSB must be set
        return [(v >> i) & 1 for i in range(1 + degree)]
## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`

Unless this is an immediate op, `mtspr` is completely sufficient.

## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication

    (RT) = gfbmul((RA), (RB))

## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add

    gfbmadd RT, RA, RB, RC

    (RT) = gfbadd(gfbmul((RA), (RB)), (RC))

## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)

    gfbtmadd RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    temp = gfbadd(gfbmul((RA), (RB)), (RC))

## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
# Instructions for Prime Galois Fields `GF(p)`

    def int_to_gfp(int_value, prime):
        return int_value % prime # follows Python remainder semantics
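The `gfp*` instructions below all reduce an exact integer result with this one helper. As a minimal sketch (the `gfpadd`/`gfpmul` function names simply mirror the instruction mnemonics; the prime is passed in explicitly rather than read from the `GFPRIME` SPR):

```python
def int_to_gfp(int_value, prime):
    # Python's % already returns a result in [0, prime) for any
    # integer input, including negatives
    return int_value % prime

def gfpadd(ra, rb, prime):
    # addition on exact integers, then one reduction
    return int_to_gfp(ra + rb, prime)

def gfpmul(ra, rb, prime):
    # multiplication on exact integers, then one reduction
    return int_to_gfp(ra * rb, prime)
```

Note how Python's remainder semantics make subtraction "just work": `int_to_gfp(-3, 7)` is 4, which is the behaviour the pseudocode relies on.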
## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions

## `gfpadd` Prime Galois Field `GF(p)` Addition

    (RT) = int_to_gfp((RA) + (RB), GFPRIME)

the addition happens on infinite-precision integers

## `gfpsub` Prime Galois Field `GF(p)` Subtraction

    (RT) = int_to_gfp((RA) - (RB), GFPRIME)

the subtraction happens on infinite-precision integers

## `gfpmul` Prime Galois Field `GF(p)` Multiplication

    (RT) = int_to_gfp((RA) * (RB), GFPRIME)

the multiplication happens on infinite-precision integers
Some potential hardware implementations are found in:
<https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>

    (RT) = gfpinv((RA), GFPRIME)

the computation happens on infinite-precision integers
## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add

    gfpmadd RT, RA, RB, RC

    (RT) = int_to_gfp((RA) * (RB) + (RC), GFPRIME)

the multiplication and addition happen on infinite-precision integers

## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract

    gfpmsub RT, RA, RB, RC

    (RT) = int_to_gfp((RA) * (RB) - (RC), GFPRIME)

the multiplication and subtraction happen on infinite-precision integers

## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed

    gfpmsubr RT, RA, RB, RC

    (RT) = int_to_gfp((RC) - (RA) * (RB), GFPRIME)

the multiplication and subtraction happen on infinite-precision integers
## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

    gfpmaddsubr RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    product = (RA) * (RB)
    (RT) = int_to_gfp(product + term, GFPRIME)
    (RS) = int_to_gfp(term - product, GFPRIME)

the multiplication, addition, and subtraction happen on infinite-precision integers
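The twin butterfly step can be sketched as a single function returning both results. This is a hypothetical model, assuming `term` is taken from RC as the surrounding pseudocode suggests, and with the prime passed explicitly instead of read from `GFPRIME`:

```python
def gfpmaddsubr(ra, rb, rc, prime):
    # one FFT/NTT butterfly: RT gets term + product,
    # RS gets term - product, both reduced mod prime
    product = ra * rb        # exact (infinite-precision) integers
    term = rc
    rt = (term + product) % prime
    rs = (term - product) % prime
    return rt, rs
```

Producing both halves of the butterfly from one multiply is exactly why the 3-in 2-out form avoids a temporary register.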
## Twin Butterfly (Cooley-Tukey) Mul-add-sub

Used in combination with SV FFT REMAP to perform a full NTT in-place. Possible by having 3-in 2-out, to avoid the need for a temp register. RS is written

    gffmadd  RT,RA,RC,RB (Rc=0)
    gffmadd. RT,RA,RC,RB (Rc=1)

    RT <- GFADD(GFMUL(RA, RC), RB)
    RS <- GFADD(GFMUL(RA, RC), RB)

with the modulo and degree being in an SPR, multiply can be identical/equivalent to standard integer add

| 0.5|6.10|11.15|16.20|21.25| 26..30 |31|
| -- | -- | --- | --- | --- | ------ |--|
| NN | RT | RA | RB |11000| 01110 |Rc|
    from functools import reduce

    # constants used in the multGF2 function
    mask1 = mask2 = polyred = None

    def setGF2(irPoly):  # signature inferred from the usage below
        """Define parameters of binary finite field GF(2^m)/g(x)
           - irPoly: coefficients of irreducible polynomial g(x)
        """
        # degree: extension degree of binary field
        degree = gf_degree(irPoly)

        def i2P(sInt):
            """Convert an integer into a polynomial"""
            return [(sInt >> i) & 1
                    for i in reversed(range(sInt.bit_length()))]

        global mask1, mask2, polyred
        mask1 = mask2 = 1 << degree
        polyred = reduce(lambda x, y: (x << 1) + y, i2P(irPoly)[1:])

    def multGF2(p1, p2):  # signature inferred from the usage below
        """Multiply two polynomials in GF(2^m)/g(x)"""
        # standard long-multiplication: check LSB and add
        # standard modulo: check MSB and add polynomial
    if __name__ == "__main__":

        # Define binary field GF(2^3)/x^3 + x + 1
        setGF2(0b1011) # degree 3

        # Evaluate the product (x^2 + x + 1)(x^2 + 1)
        print("{:02x}".format(multGF2(0b111, 0b101)))

        # Define binary field GF(2^8)/x^8 + x^4 + x^3 + x + 1
        # (used in the Advanced Encryption Standard-AES)
        setGF2(0b100011011) # degree 8

        # Evaluate the product (x^7)(x^7 + x + 1)
        print("{:02x}".format(multGF2(0b10000000, 0b10000011)))
    # https://bugs.libre-soc.org/show_bug.cgi?id=782#c33
    # https://ftp.libre-soc.org/ARITH18_Kobayashi.pdf

    s = getGF2() # get the full polynomial (including the MSB)
    for i in range(1, 2*degree+1):
        # could use count-trailing-1s here to skip ahead
        if r & mask1: # test MSB of r
            if s & mask1: # test MSB of s
            s <<= 1 # shift left 1
            r, s = s, r # swap r,s
            u, v = v<<1, u # shift v and swap
        u >>= 1 # shift right 1
        r <<= 1 # shift left 1
        u <<= 1 # shift left 1
## GF2 (carryless) div and mod
    def FullDivision(self, f, v):
        """
        Takes two arguments, f, v
        fDegree and vDegree are the degrees of the field elements
        f and v represented as polynomials.
        This method returns the field elements a and b such that

            f(x) = a(x) * v(x) + b(x).

        That is, a is the divisor and b is the remainder, or in
        other words a is like floor(f/v) and b is like f modulo v.
        """
        fDegree, vDegree = gf_degree(f), gf_degree(v)
        res, rem = 0, f
        for i in reversed(range(vDegree, fDegree+1)):
            if (rem >> i) & 1: # check bit
                res ^= (1 << (i - vDegree))
                rem ^= (v << (i - vDegree))
        return res, rem
| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
| NN | RT | RA | RB | 1 | 00 | 0001 110 |Rc| cldiv |
| NN | RT | RA | RB | 1 | 01 | 0001 110 |Rc| clmod |
## GF2 carryless mul

Based on RV bitmanip.
See <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
<https://www.felixcloutier.com/x86/pclmulqdq> and
<https://en.m.wikipedia.org/wiki/Carry-less_product>

These are GF2 operations with the modulo set to 2^degree.
They are worth adding as their own non-overwrite operations
(in the same pipeline).

    uint_xlen_t clmul(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0; i < XLEN; i++)

    uint_xlen_t clmulh(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 1; i < XLEN; i++)
            x ^= RA >> (XLEN-i);

    uint_xlen_t clmulr(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0; i < XLEN; i++)
            x ^= RA >> (XLEN-i-1);
## carryless Twin Butterfly (Cooley-Tukey) Mul-add-sub

Used in combination with SV FFT REMAP to perform a full NTT in-place. Possible by having 3-in 2-out, to avoid the need for a temp register. RS is written

    clfmadd  RT,RA,RC,RB (Rc=0)
    clfmadd. RT,RA,RC,RB (Rc=1)

    RT <- CLMUL(RA, RC) ^ RB
    RS <- CLMUL(RA, RC) ^ RB
    uint64_t bmatflip(uint64_t RA)

    uint64_t bmatxor(uint64_t RA, uint64_t RB)
        uint64_t RBt = bmatflip(RB);
        uint8_t u[8]; // rows of RA
        uint8_t v[8]; // cols of RB
        for (int i = 0; i < 8; i++) {
            v[i] = RBt >> (i*8);
        }
        for (int i = 0; i < 64; i++) {
            if (pcnt(u[i / 8] & v[i % 8]) & 1)

    uint64_t bmator(uint64_t RA, uint64_t RB)
        uint64_t RBt = bmatflip(RB);
        uint8_t u[8]; // rows of RA
        uint8_t v[8]; // cols of RB
        for (int i = 0; i < 8; i++) {
            v[i] = RBt >> (i*8);
        }
        for (int i = 0; i < 64; i++) {
            if ((u[i / 8] & v[i % 8]) != 0)
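The 8x8 bit-matrix semantics can be modelled directly in Python. This is a sketch, assuming the packing shown in the C fragments above: bit `r*8+c` of the 64-bit register is row `r`, column `c`, so row `r` occupies byte `r`:

```python
def bmatflip(ra):
    # transpose an 8x8 bit matrix packed into a 64-bit integer:
    # bit r*8+c of the input becomes bit c*8+r of the output
    rb = 0
    for r in range(8):
        for c in range(8):
            if (ra >> (r*8 + c)) & 1:
                rb |= 1 << (c*8 + r)
    return rb

def bmator(ra, rb):
    # boolean (AND then OR-reduce) 8x8 bit-matrix multiply
    rbt = bmatflip(rb)  # byte c of the transpose = column c of RB
    rt = 0
    for r in range(8):
        row = (ra >> (r*8)) & 0xFF
        for c in range(8):
            col = (rbt >> (c*8)) & 0xFF
            if row & col:
                rt |= 1 << (r*8 + c)
    return rt

# identity matrix in this packing: row r holds the single bit 1<<r
IDENT = 0x8040201008040201
```

Multiplying by `IDENT` leaves a matrix unchanged, and transposing twice is the identity, which gives two easy sanity checks.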
# Already in POWER ISA

## count leading/trailing zeros with mask

    do i = 0 to 63 if((RB)i=1) then do
    if((RS)i=1) then break end end count <- count + 1
vpdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106

    if VSR[VRB+32].dword[i].bit[63-m]=1 then do
        result = VSR[VRA+32].dword[i].bit[63-k]
        VSR[VRT+32].dword[i].bit[63-m] = result

    uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0, j = 0; i < XLEN; i++)
            if ((RB >> i) & 1) {
                r |= uint_xlen_t(1) << i;
Other way round: identical to RV bext, found in v3.1 p196.

    uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
        for (int i = 0, j = 0; i < XLEN; i++)
            if ((RB >> i) & 1) {
                r |= uint_xlen_t(1) << j;
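The bext/bdep pair is easiest to understand as gather and scatter under a mask. This Python sketch (the `xlen` parameter is illustrative) fills in the loop bodies that the C fragments above only show in part:

```python
def bext(ra, rb, xlen=64):
    # gather: bits of RA selected by mask RB, packed toward the LSB
    r, j = 0, 0
    for i in range(xlen):
        if (rb >> i) & 1:
            if (ra >> i) & 1:
                r |= 1 << j
            j += 1
    return r

def bdep(ra, rb, xlen=64):
    # scatter: LSBs of RA distributed to the set-bit positions of RB
    r, j = 0, 0
    for i in range(xlen):
        if (rb >> i) & 1:
            if (ra >> j) & 1:
                r |= 1 << i
            j += 1
    return r
```

The two are inverses over a fixed mask: gathering after scattering recovers the original low bits.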
found in v3.1 p106 so not to be added here

    if ((RB)63-i==1) then do
        result63-ptr1 = (RS)63-i
# bit to byte permute

Similar to the matrix permute in RV bitmanip, which has XOR and OR variants, these perform a transpose.

    b = VSR[VRB+32].dword[i].byte[k].bit[j]
    VSR[VRT+32].dword[i].byte[j].bit[k] = b