1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10
11
12 # bitmanipulation
13
14 **DRAFT STATUS**
15
16 pseudocode: [[openpower/isa/bitmanip]]
17
18 this extension amalgamates bitmanipulation primitives from many sources,
19 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
20 Also included are DSP/Multimedia operations suitable for Audio/Video.
21 Vectorisation and SIMD are removed: these are straight scalar (element)
22 operations making them suitable for embedded applications. Vectorisation
23 Context is provided by [[openpower/sv]].
24
25 When combined with SV, scalar variants of bitmanip operations found in
26 VSX are added so that the Packed SIMD aspects of VSX may be retired as
27 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
28 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
29 or embedded scenarios.
30
31 ternlogv is experimental and is the only operation that may be considered
32 a "Packed SIMD". It is added as a variant of the already well-justified
33 ternlog operation (done in AVX512 as an immediate only) "because it
34 looks fun". As it is based on the LUT4 concept it will allow accelerated
35 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
36 achieve similar objectives.
37
38 general-purpose Galois Field 2^M operations are added so as to avoid
39 huge custom opcode proliferation across many areas of Computer Science.
40 however for convenience and also to avoid setup costs, some of the more
41 common operations (clmul, crc32) are also added. The expectation is
42 that these operations would all be covered by the same pipeline.
43
44 note that there are brownfield spaces below that could incorporate
45 some of the set-before-first and other scalar operations listed in
46 [[sv/mv.swizzle]],
47 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
48 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
49
50 Useful resource:
51
52 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
53 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
54
55 [[!inline quick="yes" raw="yes" pages="openpower/sv/bmask.py"]]
56
57
58 # binary and ternary bitops
59
60 Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
61 bits from each input, concatenate them and perform a lookup into a
62 table using an 8-bit immediate (for the ternary instructions), or in
63 another register (4-bit for the binary instructions). The binary lookup
64 instructions have CR Field lookup variants due to CR Fields being 4 bit.
65
66 Like the x86 AVX512F
67 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
68 instructions.
69
70 ## ternlogi
71
72 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
73 | -- | -- | --- | --- | ----- | --- |--|
74 | NN | RT | RA | RB | im0-7 | 00 |Rc|
75
76 lut3(imm, a, b, c):
77 idx = c << 2 | b << 1 | a
78 return imm[idx] # idx by LSB0 order
79
80 for i in range(64):
81 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
82
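As a cross-check, the pseudocode above can be modelled directly (a Python sketch, mirroring the pseudocode: the immediate is simply the 8-entry truth table of the ternary function, indexed in LSB0 order):

```python
def lut3(imm, a, b, c):
    # index the 8-bit truth table by the concatenated input bits (LSB0 order)
    idx = c << 2 | b << 1 | a
    return (imm >> idx) & 1

def ternlogi(imm, rt, ra, rb, xlen=64):
    # per-bit: result bit i is lut3 applied to bit i of RB, RA and RT
    result = 0
    for i in range(xlen):
        bit = lut3(imm, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1)
        result |= bit << i
    return result
```

For example `imm=0x88` computes AND of RA and RB (RT is ignored), and `imm=0x96` computes the 3-input XOR.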
83 ## binlut
84
85 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
86 lookup table is 4 bits wide not 8 bits, and secondly the lookup
87 table comes from a register not an immediate.
88
89 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
90 | -- | -- | --- | --- | ----- |--------|---------|
91 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
92 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
93
94 For binlut, the 4-bit LUT may be selected from either the high nibble
95 or the low nibble of the first byte of RC:
96
97 lut2(imm, a, b):
98 idx = b << 1 | a
99 return imm[idx] # idx by LSB0 order
100
101 imm = (RC>>(nh*4))&0b1111
102 for i in range(64):
103 RT[i] = lut2(imm, RB[i], RA[i])
104
105 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
106
107 for i in range(64):
108 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
109
110 When Vectorised with SVP64, as usual both source and destination may be
111 Vector or Scalar.
112
113 *Programmer's note: a dynamic ternary lookup may be synthesised from
114 a pair of `binlut` instructions followed by a `ternlogi` to select which
115 to merge. Use `nh` to select which nibble to use as the lookup table
116 from the RC source register (`nh=1` nibble high), i.e. keeping
117 an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
118 the second nh=1.*
119
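The synthesis described in the programmer's note can be sketched as follows (Python; `dynamic_lut3` is an illustrative helper name, not an instruction, and the final merge is written out as the bitwise select that the `ternlogi` immediate would encode):

```python
def lut2(imm4, a, b):
    # 4-bit truth table lookup, LSB0 order
    return (imm4 >> (b << 1 | a)) & 1

def binlut(ra, rb, rc, nh, xlen=64):
    # nh selects high (1) or low (0) nibble of RC as the LUT2
    imm4 = (rc >> (nh * 4)) & 0b1111
    r = 0
    for i in range(xlen):
        r |= lut2(imm4, (rb >> i) & 1, (ra >> i) & 1) << i
    return r

def dynamic_lut3(ra, rb, c_sel, rc, xlen=64):
    # rc holds an 8-bit LUT3: low nibble is the c=0 half, high nibble c=1
    lo = binlut(ra, rb, rc, nh=0, xlen=xlen)
    hi = binlut(ra, rb, rc, nh=1, xlen=xlen)
    # merge with a per-bit select (the role played by ternlogi)
    return (hi & c_sel) | (lo & ~c_sel & ((1 << xlen) - 1))
```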
120 ## crternlogi
121
122 a variant of `ternlogi` with CR Fields as operands rather than Ints.
123
124 | 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
125 | -- | -- | --- | --- | --- |-----|----- | -----|--|
126 | NN | BT | BA | BB | BC |m0-2 | imm | 01 |m3|
127
128 mask = m0-3
129 for i in range(4):
130    a, b, c = CRs[BA][i], CRs[BB][i], CRs[BC][i]
131    if mask[i]: CRs[BT][i] = lut3(imm, a, b, c)
132
133 This instruction is remarkably similar to the existing crops, `crand` etc.
134 which have been noted to be a 4-bit (binary) LUT. In effect `crternlogi`
135 is the ternary LUT version of crops, having an 8-bit LUT.
136
137 ## crbinlog
138
139 With ternary (LUT3) dynamic instructions being very costly,
140 and CR Fields being only 4 bits, a binary (LUT2) variant is the better fit.
141
142 | 0.5|6.8 | 9.11|12.14|15.17|18.21|22...30 |31|
143 | -- | -- | --- | --- | --- |-----| -------- |--|
144 | NN | BT | BA | BB | BC |m0-m3|000101110 |0 |
145
146 mask = m0..m3
147 for i in range(4):
148    a, b = CRs[BA][i], CRs[BB][i]
149    if mask[i]: CRs[BT][i] = lut2(CRs[BC], a, b)
150
151 When SVP64 Vectorised any of the 4 operands may be Scalar or
152 Vector, including `BC` meaning that multiple different dynamic
153 lookups may be performed with a single instruction.
154
155 *Programmer's note: just as with binlut and ternlogi, a pair
156 of crbinlog instructions followed by a merging crternlogi may
157 be deployed to synthesise dynamic ternary (LUT3) CR Field
158 manipulation*
159
160 # int ops
161
162 ## min/max
163
164 required for the [[sv/av_opcodes]]
165
166 signed and unsigned min/max for integers. this is sort-of partly
167 synthesiseable in [[sv/svp64]] with pred-result as long as the dest reg
168 is one of the sources, but not both signed and unsigned. when the dest
169 is also one of the sources and the mv fails due to the CR bit-test failing,
170 this will only overwrite the dest where the src is greater (or less).
171
172 signed/unsigned min/max gives more flexibility.
173
174 X-Form
175
176 * XO=0001001110, itype=0b00 min, unsigned
177 * XO=0101001110, itype=0b01 min, signed
178 * XO=0011001110, itype=0b10 max, unsigned
179 * XO=0111001110, itype=0b11 max, signed
180
181
182 ```
183 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
184 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
185 }
186 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
187 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
188 }
189 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
190 { return rs1 < rs2 ? rs1 : rs2;
191 }
192 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
193 { return rs1 > rs2 ? rs1 : rs2;
194 }
195 ```
196
197 ## average
198
199 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
200 but not scalar
201
202 ```
203 uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
204 return (rs1 + rs2 + 1) >> 1;
205 }
206 ```
207
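Note that as written the sum needs an XLEN+1-bit intermediate; an equivalent that stays within XLEN bits is the well-known identity `(a | b) - ((a ^ b) >> 1)` for the rounding-up average. A Python sketch demonstrating the equivalence:

```python
def intavg(rs1, rs2):
    # rounding-up average without an XLEN+1-bit intermediate:
    # (rs1 + rs2 + 1) >> 1  ==  (rs1 | rs2) - ((rs1 ^ rs2) >> 1)
    return (rs1 | rs2) - ((rs1 ^ rs2) >> 1)
```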
208 ## absdu
209
210 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
211 but not scalar
212
213 ```
214 uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
215 return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
216 }
217 ```
218
219 ## abs-accumulate
220
221 required for the [[sv/av_opcodes]], these are needed for motion estimation.
222 both are overwrite on RS.
223
224 ```
225 uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
226 return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
227 }
228 uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
229 return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
230 }
231 ```
232
233 For SVP64, the twin Elwidths allow e.g. a 16 bit accumulator for 8 bit
234 differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
235 SVP64 designation from RS-as-dest. This gives a limited range of
236 non-overwrite capability.
237
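A sketch of the intended use: the inner loop of motion estimation is a sum of absolute differences, which `absdu` plus abs-accumulate collapse into one operation per element (Python model; the function names here are illustrative, not instruction mnemonics):

```python
def absdu(a, b):
    # unsigned absolute difference
    return a - b if a > b else b - a

def sad(block_a, block_b):
    # sum of absolute differences: the motion-estimation inner loop.
    # with SVP64 each step is a single vectorised abs-accumulate,
    # accumulating into the RS register
    acc = 0
    for a, b in zip(block_a, block_b):
        acc += absdu(a, b)
    return acc
```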
238 # shift-and-add
239
240 Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
241 Rather than add further LD/ST opcodes, a compromise is to add shift-and-add,
242 which replaces a pair of explicit instructions in hot loops.
243
244 ```
245 uint_xlen_t shadd(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
246 return (rs1 << (sh+1)) + rs2;
247 }
248
249 uint_xlen_t shadduw(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
250 uint_xlen_t rs1z = rs1 & 0xFFFFFFFF;
251 return (rs1z << (sh+1)) + rs2;
252 }
253 ```
254
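A typical use is scaled array indexing: the address of element `i` of an array of 8-byte elements is `base + (i << 3)`, i.e. a single `shadd` with `sh=2` (the shift amount is `sh+1`). A Python model of the pseudocode above:

```python
def shadd(rs1, rs2, sh, xlen=64):
    # rs1 is the (element) index, rs2 the base: rs2 + (rs1 << (sh+1))
    return ((rs1 << (sh + 1)) + rs2) & ((1 << xlen) - 1)

# address of element 5 of a table of u64 values at base 0x1000:
addr = shadd(5, 0x1000, 2)   # 0x1000 + 5*8
```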
255 # bitmask set
256
257 based on RV bitmanip singlebit set, instruction format similar to shift
258 [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
259 rldicl but only immediate version). however bitmask-invert is not,
260 and set/clr are not covered, although they can use the same Shift ALU.
261
262 the bmext (RB) version is not the same as rldicl because bmext is a right
263 shift by RC, where rldicl is a left rotate. for the immediate version
264 this does not matter, so a bmexti is not required. for bmrev however there
265 is no direct equivalent, and consequently a bmrevi is required.
266
267 bmset (register for mask amount) is particularly useful for creating
268 predicate masks where the length is a dynamic runtime quantity.
269 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
270 in a single instruction without needing to initialise or depend on any
271 other registers.
272
273 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
274 | -- | -- | --- | --- | --- | ------- |--| ----- |
275 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
276
277 Immediate-variant is an overwrite form:
278
279 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
280 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
281 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
282
283 ```
284 def MASK(x, y):
285 if x < y:
286 x = x+1
287 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
288 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
289 elif x == y:
290 return 1 << x
291 else:
292 x = x+1
293 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
294 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
295 return mask_a ^ mask_b
296
297
298 uint_xlen_t bmset(RS, RB, sh)
299 {
300 int shamt = RB & (XLEN - 1);
301 mask = (2<<sh)-1;
302 return RS | (mask << shamt);
303 }
304
305 uint_xlen_t bmclr(RS, RB, sh)
306 {
307 int shamt = RB & (XLEN - 1);
308 mask = (2<<sh)-1;
309 return RS & ~(mask << shamt);
310 }
311
312 uint_xlen_t bminv(RS, RB, sh)
313 {
314 int shamt = RB & (XLEN - 1);
315 mask = (2<<sh)-1;
316 return RS ^ (mask << shamt);
317 }
318
319 uint_xlen_t bmext(RS, RB, sh)
320 {
321 int shamt = RB & (XLEN - 1);
322 mask = (2<<sh)-1;
323 return mask & (RS >> shamt);
324 }
325 ```
326
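Modelling the immediate pseudocode above (Python; note the mask is `sh+1` ones, with the register form substituting RC for `sh`), the run-of-ones predicate-mask idiom from zeroed sources looks like:

```python
def bmset(rs, rb, sh, xlen=64):
    # OR a run of (sh+1) ones, shifted up by RB, into RS
    shamt = rb & (xlen - 1)
    mask = (2 << sh) - 1
    return (rs | (mask << shamt)) & ((1 << xlen) - 1)

# run of ones from bit 0, no other registers needed:
run = bmset(0, 0, 4)   # 0b11111
```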
327 bitmask extract with reverse. can be done by bit-order-inverting all
328 of RB and getting bits of RB from the opposite end.
329
330 when RA is zero, no shift occurs. this makes bmextrev useful for
331 simply reversing all bits of a register.
332
333 ```
334 msb = ra[5:0];
335 rev[0:msb] = rb[msb:0];
336 rt = ZE(rev[msb:0]);
337
338 uint_xlen_t bmrevi(RA, RB, sh)
339 {
340 int shamt = XLEN-1;
341 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
342 shamt = (XLEN-1)-shamt; // shift from the other end
343 brb = bitreverse(GPR(RB)); // swap LSB-MSB
344 mask = (2<<sh)-1;
345 return mask & (brb >> shamt);
346 }
347
348 uint_xlen_t bmrev(RA, RB, RC) {
349 return bmrevi(RA, RB, GPR(RC) & 0b111111);
350 }
351 ```
352
353 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
354 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
355 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
356
357 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
358 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
359 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
360
361 # grevlut <a name="grevlut"> </a>
362
363 ([3x lower latency alternative](grev_gorc_design/) which is
364 not equivalent and has limited constant-generation capability)
365
366 generalised reverse combined with a pair of LUT2s and allowing
367 a constant `0b0101...0101` when RA=0, and an option to invert
368 (including when RA=0, giving a constant 0b1010...1010 as the
369 initial value) provides a wide range of instructions
370 and a means to set hundreds of regular 64 bit patterns with one
371 single 32 bit instruction.
372
373 the two LUT2s are applied left-half (when not swapping)
374 and right-half (when swapping) so as to allow a wider
375 range of options.
376
377 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
378
379 * A value of `0b11001010` for the immediate provides
380 the functionality of a standard "grev".
381 * `0b11101110` provides gorc
382
383 grevlut should be arranged so as to produce the constants
384 needed to put into bext (bitextract) so as in turn to
385 be able to emulate x86 pmovmask instructions
386 <https://www.felixcloutier.com/x86/pmovmskb>.
387 This only requires 2 instructions (grevlut, bext).
388
389 Note that if the mask is required to be placed
390 directly into CR Fields (for use as CR Predicate
391 masks rather than an integer mask) then sv.cmpi or sv.ori
392 may be used instead, bearing in mind that sv.ori
393 is a 64-bit instruction, and `VL` must have been
394 set to the required length:
395
396 sv.ori./elwid=8 r10.v, r10.v, 0
397
398 The following settings provide the required mask constants:
399
400 | RA=0 | RB | imm | iv | result |
401 | ------- | ------- | ---------- | -- | ---------- |
402 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
403 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
404 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
405 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
406 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
407 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
408
409 A better diagram, showing the correct ordering of shamt (RB): a LUT2
410 is applied to all locations marked in red using the first 4
411 bits of the immediate, and a separate LUT2 applied to all
412 locations in green using the upper 4 bits of the immediate.
413
414 <img src="/openpower/sv/grevlut.png" width=700 />
415
416 demo code [[openpower/sv/grevlut.py]]
417
418 ```
419 lut2(imm, a, b):
420 idx = b << 1 | a
421 return imm[idx] # idx by LSB0 order
422
423 dorow(imm8, step_i, chunk_size, is32b):
424     for j in 0 to 31 if is32b else 63:
425         if (j & chunk_size) == 0
426             imm = imm8[0..3]
427         else
428             imm = imm8[4..7]
429         step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
430     return step_o
431
432 uint64_t grevlut(uint64_t RA, uint64_t RB, uint8 imm, bool iv, bool is32b)
433 {
434 uint64_t x = 0x5555_5555_5555_5555;
435 if (RA != 0) x = GPR(RA);
436 if (iv) x = ~x;
437 int shamt = RB & 31 if is32b else 63
438 for i in 0 to (6-is32b)
439 step = 1<<i
440 if (shamt & step) x = dorow(imm, x, step, is32b)
441 return x;
442 }
443 ```
444
445 A variant may specify different LUT-pairs per row,
446 using one byte of RB for each. If it is desired that
447 a particular row-crossover shall not be applied it is
448 a simple matter to set the appropriate LUT-pair in RB
449 to effect an identity transform for that row (`0b11001010`).
450
451 ```
452 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
453 {
454 uint64_t x = 0x5555_5555_5555_5555;
455 if (RA != 0) x = GPR(RA);
456 if (iv) x = ~x;
457 for i in 0 to (6-is32b)
458 step = 1<<i
459 imm = (RB>>(i*8))&0xff
460 x = dorow(imm, x, step, is32b)
461 return x;
462 }
463
464 ```
465
466 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
467 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
468 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
469 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
470 | NN | RT | RA | RB | im0-7 | 01 |1 | grevlogw | |
471
472 # xperm
473
474 based on RV bitmanip.
475
476 RA contains a vector of indices to select parts of RB to be
477 copied to RT. The immediate variant allows up to an 8-bit
478 pattern (repeated) to be targeted at different parts of RT.
479 
480 xperm shares some similarity with one of the uses of bmator,
481 in that xperm indices are binary addressing where bmator
482 may be considered to be unary addressing.
483
484 ```
485 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
486 {
487 uint_xlen_t r = 0;
488 uint_xlen_t sz = 1LL << sz_log2;
489 uint_xlen_t mask = (1LL << sz) - 1;
490 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
491 for (int i = 0; i < XLEN; i += sz) {
492 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
493 if (pos < XLEN)
494 r |= ((RB >> pos) & mask) << i;
495 }
496 return r;
497 }
498 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
499 {
500 uint_xlen_t r = 0;
501 uint_xlen_t sz = 1LL << sz_log2;
502 uint_xlen_t mask = (1LL << sz) - 1;
503 for (int i = 0; i < XLEN; i += sz) {
504 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
505 if (pos < XLEN)
506 r |= ((RB >> pos) & mask) << i;
507 }
508 return r;
509 }
510 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
511 { return xperm(RA, RB, 2); }
512 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
513 { return xperm(RA, RB, 3); }
514 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
515 { return xperm(RA, RB, 4); }
516 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
517 { return xperm(RA, RB, 5); }
518 ```
519
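A worked example of the pseudocode above (Python, with XLEN reduced to 16 for readability): each index element of RA selects which element of RB lands in the corresponding position of RT.

```python
def xperm(ra, rb, sz_log2, xlen=64):
    # ra: packed vector of indices; rb: packed vector of data elements
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, xlen, sz):
        pos = ((ra >> i) & mask) << sz_log2
        if pos < xlen:            # out-of-range indices produce zero
            r |= ((rb >> pos) & mask) << i
    return r

# byte-granularity (sz_log2=3): indices 1,0 swap the two bytes of rb
rt = xperm(0x0001, 0xAABB, 3, xlen=16)   # selects byte 1 then byte 0
```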
520 # bitmatrix
521
522 bmatflip and bmatxor are found in the Cray XMT, and in x86 are known
523 as GF2P8AFFINEQB. uses:
524
525 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
526 * SM4, Reed Solomon, RAID6
527 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
528 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
529 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
530
531 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
532 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
533 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
534
535
536 ```
537 uint64_t bmatflip(uint64_t RA)
538 {
539 uint64_t x = RA;
540 x = shfl64(x, 31);
541 x = shfl64(x, 31);
542 x = shfl64(x, 31);
543 return x;
544 }
545
546 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
547 // transpose of RA
548 uint64_t RAt = bmatflip(RA);
549 uint8_t u[8]; // rows of RS
550 uint8_t v[8]; // cols of RA
551 for (int i = 0; i < 8; i++) {
552 u[i] = RS >> (i*8);
553 v[i] = RAt >> (i*8);
554 }
555 uint64_t bit, x = 0;
556 for (int i = 0; i < 64; i++) {
557 bit = (imm >> (i%8)) & 1;
558 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
559 x |= bit << i;
560 }
561 return x;
562 }
563
564 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
565 return bmatxori(RA, RB, 0xff);
566 }
567
568 uint64_t bmator(uint64_t RA, uint64_t RB) {
569 // transpose of RB
570 uint64_t RBt = bmatflip(RB);
571 uint8_t u[8]; // rows of RA
572 uint8_t v[8]; // cols of RB
573 for (int i = 0; i < 8; i++) {
574 u[i] = RA >> (i*8);
575 v[i] = RBt >> (i*8);
576 }
577 uint64_t x = 0;
578 for (int i = 0; i < 64; i++) {
579 if ((u[i / 8] & v[i % 8]) != 0)
580 x |= 1LL << i;
581 }
582 return x;
583 }
584
585 uint64_t bmatand(uint64_t RA, uint64_t RB) {
586 // transpose of RB
587 uint64_t RBt = bmatflip(RB);
588 uint8_t u[8]; // rows of RA
589 uint8_t v[8]; // cols of RB
590 for (int i = 0; i < 8; i++) {
591 u[i] = RA >> (i*8);
592 v[i] = RBt >> (i*8);
593 }
594 uint64_t x = 0;
595 for (int i = 0; i < 64; i++) {
596 if ((u[i / 8] & v[i % 8]) == 0xff)
597 x |= 1LL << i;
598 }
599 return x;
600 }
601 ```
602
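`bmatflip` is an 8x8 bit-matrix transpose (which the three `shfl64` steps above implement); a direct Python model of the transpose, assuming row `i` is byte `i` and column `j` is bit `j` in LSB0 order, matching the row/column extraction in the `bmatxor` code:

```python
def bmatflip(x):
    # transpose the 8x8 bit matrix held in a 64-bit value
    r = 0
    for i in range(8):          # row
        for j in range(8):      # column
            bit = (x >> (i * 8 + j)) & 1
            r |= bit << (j * 8 + i)
    return r
```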
603 # Introduction to Carry-less and GF arithmetic
604
605 * obligatory xkcd <https://xkcd.com/2595/>
606
607 There are three completely separate types of Galois-Field-based arithmetic
608 that we implement which are not well explained even in introductory
609 literature. A slightly oversimplified explanation is followed by more
610 accurate descriptions:
611
612 * `GF(2)` carry-less binary arithmetic. this is not actually a Galois Field,
613 but is accidentally referred to as GF(2) - see below as to why.
614 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
615 Galois Fields
616 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
617 (2^N) and a second "reducing" polynomial (similar to a prime number), these
618 are said to be GF(2^N) arithmetic.
619
620 further detailed and more precise explanations are provided below
621
622 * **Polynomials with coefficients in `GF(2)`**
623 (aka. Carry-less arithmetic -- the `cl*` instructions).
624 This isn't actually a Galois Field, but its coefficients are. This is
625 basically binary integer addition, subtraction, and multiplication like
626 usual, except that carries aren't propagated at all, effectively turning
627 both addition and subtraction into the bitwise xor operation. Division and
628 remainder are defined to match how addition and multiplication works.
629 * **Galois Fields with a prime size**
630 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
631 This is basically just the integers mod `p`.
632 * **Galois Fields with a power-of-a-prime size**
633 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
634 integer `n > 0`).
635 We only implement these for `p == 2`, called Binary Galois Fields
636 (`GF(2^n)` -- the `gfb*` instructions).
637 For any prime `p`, `GF(p^n)` is implemented as polynomials with
638 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
639 remainders of dividing by a specifically chosen polynomial in `GF(p)` called
640 the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
641 Polynomial must be an irreducible polynomial (like primes, but for
642 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
643 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
644 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
645 polynomials used to implement `GF(p^n)`.
646
647 Many implementations and much of the literature do not make a clear
648 distinction between these three categories, which makes it confusing
649 to understand what their purpose and value is.
650
651 * carry-less multiply is extremely common and is used for the ubiquitous
652 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
653 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
654 has significant uses throughout cryptography
655 * GF(p) is the basis again of a significant quantity of algorithms
656 (TODO, list them, jacob knows what they are), even though the
657 modulo is limited to be below 64-bit (size of a scalar int)
658
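A minimal Python sketch of the first category: carry-less addition is XOR, and carry-less multiplication is ordinary shift-and-add multiplication with the adds replaced by XORs.

```python
def clmul(a, b):
    # carry-less multiply: polynomial multiplication over GF(2),
    # built from shifts and XORs only -- no carry propagation
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# carry-less addition is XOR: (x+1) "+" (x+1) = 0
# (x+1)*(x+1) = x^2 + 1: the middle terms cancel, since there are no carries
```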
659 # Instructions for Carry-less Operations
660
661 aka. Polynomials with coefficients in `GF(2)`
662
663 Carry-less addition/subtraction is simply XOR, so a `cladd`
664 instruction is not provided since the `xor[i]` instruction can be used instead.
665
666 These are operations on polynomials with coefficients in `GF(2)`, with the
667 polynomial's coefficients packed into integers with the following algorithm:
668
669 ```python
670 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
671 ```
672
673 ## Carry-less Multiply Instructions
674
675 based on RV bitmanip
676 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
677 <https://www.felixcloutier.com/x86/pclmulqdq> and
678 <https://en.m.wikipedia.org/wiki/Carry-less_product>
679
680 They are worth adding as their own non-overwrite operations
681 (in the same pipeline).
682
683 ### `clmul` Carry-less Multiply
684
685 ```python
686 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
687 ```
688
689 ### `clmulh` Carry-less Multiply High
690
691 ```python
692 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
693 ```
694
695 ### `clmulr` Carry-less Multiply (Reversed)
696
697 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
698 bit-reversed inputs.
699
700 ```python
701 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
702 ```
703
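The stated equivalence can be checked with a small model (Python, width reduced to 8 bits for brevity): `clmulr` equals the plain `clmul` product shifted right by `XLEN-1`, which is the same as bit-reversing the product of the bit-reversed inputs.

```python
def clmul(a, b):
    # carry-less (GF(2) polynomial) multiply
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def bitrev(x, width):
    # reverse the low `width` bits of x
    return int(format(x, f'0{width}b')[::-1], 2)

def clmulr(a, b, width=8):
    # bit-reverse inputs, clmul, truncate, bit-reverse the result
    mask = (1 << width) - 1
    p = clmul(bitrev(a, width), bitrev(b, width)) & mask
    return bitrev(p, width)
```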
704 ## `clmadd` Carry-less Multiply-Add
705
706 ```
707 clmadd RT, RA, RB, RC
708 ```
709
710 ```
711 (RT) = clmul((RA), (RB)) ^ (RC)
712 ```
713
714 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
715
716 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
717 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
718 to avoid the need for a temp register. RS is written to as well as RT.
719
720 Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
721 definition of the Inverse Discrete Fourier Transform involves calculating a
722 multiplicative inverse, which may not exist in every Ring, the
723 Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
724 to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
725 TODO: check with someone who knows for sure if that's correct.)
726
727 ```
728 cltmadd RT, RA, RB, RC
729 ```
730
731 TODO: add link to explanation for where `RS` comes from.
732
733 ```
734 a = (RA)
735 c = (RC)
736 # read all inputs before writing to any outputs in case
737 # an input overlaps with an output register.
738 (RT) = clmul(a, (RB)) ^ c
739 (RS) = a ^ c
740 ```
741
742 ## `cldivrem` Carry-less Division and Remainder
743
744 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
745 for other instructions.
746
747 ```python
748 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
749 ```
750
751 ## `cldiv` Carry-less Division
752
753 ```
754 cldiv RT, RA, RB
755 ```
756
757 ```
758 n = (RA)
759 d = (RB)
760 q, r = cldivrem(n, d, width=XLEN)
761 (RT) = q
762 ```
763
764 ## `clrem` Carry-less Remainder
765
766 ```
767 clrem RT, RA, RB
768 ```
769
770 ```
771 n = (RA)
772 d = (RB)
773 q, r = cldivrem(n, d, width=XLEN)
774 (RT) = r
775 ```
776
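The inlined `gf_reference/cldivrem.py` is the normative definition; as a rough sketch, carry-less division is long division in which subtraction is XOR, aligned on the highest set bit of the remainder:

```python
def cldivrem(n, d):
    # carry-less long division: XOR out shifted copies of d until the
    # remainder's degree drops below the degree of d
    assert d != 0, "carry-less division by zero"
    q = 0
    r = n
    dl = d.bit_length()
    while r.bit_length() >= dl:
        sh = r.bit_length() - dl
        r ^= d << sh
        q |= 1 << sh
    return q, r
```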
777 # Instructions for Binary Galois Fields `GF(2^m)`
778
779 see:
780
781 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
782 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
783 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
784
785 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
786 instruction is not provided since the `xor[i]` instruction can be used instead.
787
788 ## `GFBREDPOLY` SPR -- Reducing Polynomial
789
790 In order to save registers and to make operations orthogonal with standard
791 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
792 This also allows hardware to pre-compute useful parameters (such as the
793 degree, or look-up tables) based on the reducing polynomial, and store them
794 alongside the SPR in hidden registers, only recomputing them whenever the SPR
795 is written to, rather than having to recompute those values for every
796 instruction.
797
798 Because Galois Fields require the reducing polynomial to be an irreducible
799 polynomial, that guarantees that any polynomial of `degree > 1` must have
800 the LSB set, since otherwise it would be divisible by the polynomial `x`,
801 making it reducible, making whatever we're working on no longer a Field.
802 Therefore, we can reuse the LSB to indicate `degree == XLEN`.
803
804 ```python
805 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
806 ```
807
808 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
809
810 unless this is an immediate op, `mtspr` is completely sufficient.
811
812 ```python
813 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
814 ```
815
816 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
817
818 ```
819 gfbmul RT, RA, RB
820 ```
821
822 ```python
823 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
824 ```
825
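A self-contained sketch of the operation (Python), assuming `GFBREDPOLY` holds the AES reducing polynomial x^8+x^4+x^3+x+1 (0x11b): carry-less multiply, then fold the high bits down modulo the reducing polynomial.

```python
def gfbmul(a, b, red_poly=0x11b):
    deg = red_poly.bit_length() - 1   # degree m of GF(2^m)
    # carry-less multiply
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # reduce: clear each bit at or above the degree by XORing in a
    # correspondingly shifted copy of the reducing polynomial
    for i in range(p.bit_length() - 1, deg - 1, -1):
        if (p >> i) & 1:
            p ^= red_poly << (i - deg)
    return p
```

For example, in the AES field, 0x53 and 0xCA are multiplicative inverses, so their product reduces to 1.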
826 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
827
828 ```
829 gfbmadd RT, RA, RB, RC
830 ```
831
832 ```python
833 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
834 ```
835
836 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
837
838 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
839 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
840 for a temp register. RS is written to as well as RT.
841
842 ```
843 gfbtmadd RT, RA, RB, RC
844 ```
845
846 TODO: add link to explanation for where `RS` comes from.
847
848 ```
849 a = (RA)
850 c = (RC)
851 # read all inputs before writing to any outputs in case
852 # an input overlaps with an output register.
853 (RT) = gfbmadd(a, (RB), c)
854 # use gfbmadd again since it reduces the result
855 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
856 ```
857
858 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
859
860 ```
861 gfbinv RT, RA
862 ```
863
864 ```python
865 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
866 ```
867
868 # Instructions for Prime Galois Fields `GF(p)`
869
870 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
871
872 ## `gfpadd` Prime Galois Field `GF(p)` Addition
873
874 ```
875 gfpadd RT, RA, RB
876 ```
877
878 ```python
879 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
880 ```
881
882 the addition happens on infinite-precision integers
883
884 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
885
886 ```
887 gfpsub RT, RA, RB
888 ```
889
890 ```python
891 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
892 ```
893
894 the subtraction happens on infinite-precision integers
895
896 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
897
898 ```
899 gfpmul RT, RA, RB
900 ```
901
902 ```python
903 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
904 ```
905
906 the multiplication happens on infinite-precision integers
907
908 ## `gfpinv` Prime Galois Field `GF(p)` Invert
909
910 ```
911 gfpinv RT, RA
912 ```
913
914 Some potential hardware implementations are found in:
915 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
916
917 ```python
918 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
919 ```
920
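One way to compute the inverse (a sketch, not necessarily the algorithm in the inlined reference code): by Fermat's little theorem, for prime `p` the inverse of `a` is `a^(p-2) mod p`.

```python
def gfpinv(a, p):
    # Fermat's little theorem: a^(p-1) == 1 (mod p) for prime p and
    # a not divisible by p, so a^(p-2) is the multiplicative inverse
    assert a % p != 0, "zero has no inverse"
    return pow(a, p - 2, p)
```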
921 ## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add
922
923 ```
924 gfpmadd RT, RA, RB, RC
925 ```
926
927 ```python
928 [[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
929 ```
930
931 the multiplication and addition happens on infinite-precision integers
932
933 ## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract
934
935 ```
936 gfpmsub RT, RA, RB, RC
937 ```
938
939 ```python
940 [[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
941 ```
942
943 the multiplication and subtraction happens on infinite-precision integers
944
945 ## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed
946
947 ```
948 gfpmsubr RT, RA, RB, RC
949 ```
950
951 ```python
952 [[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
953 ```
954
955 the multiplication and subtraction happens on infinite-precision integers
956
## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

Used in combination with SV FFT REMAP to perform a full
Number-Theoretic Transform in-place. This is possible because the
instruction is 3-in 2-out: RS is written as well as RT, avoiding
the need for a temporary register.

```
gfpmaddsubr RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
factor1 = (RA)
factor2 = (RB)
term = (RC)
# read all inputs before writing to any outputs in case
# an input overlaps with an output register.
(RT) = gfpmadd(factor1, factor2, term)
(RS) = gfpmsubr(factor1, factor2, term)
```

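The pair of writes above forms the classic add/sub butterfly used in a Number-Theoretic Transform. A behavioural sketch, with the prime `p` supplied out-of-band as a modelling assumption:

```python
def gfpmaddsubr(ra, rb, rc, p):
    # returns (RT, RS): the add and reversed-subtract halves of the
    # NTT butterfly. The shared product is computed once, before any
    # result is produced, so inputs may safely overlap outputs.
    product = ra * rb
    rt = (product + rc) % p
    rs = (rc - product) % p
    return rt, rs
```

A sanity property: `RT + RS = 2 * RC (mod p)`, since the product terms cancel.
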
# Already in POWER ISA or subsumed

This section lists operations that are either covered by other bitmanip
operations or already present in the Power ISA.

## cmix

Based on RV bitmanip; covered by the ternlog bit operations.

```
uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
    return (RA & RB) | (RC & ~RB);
}
```

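cmix is a per-bit multiplex, which is exactly the kind of three-input boolean function a single ternlog truth-table immediate expresses (the specific immediate value depends on the chosen input-ordering convention, so it is not given here). A quick 64-bit Python check of the mux semantics:

```python
MASK64 = (1 << 64) - 1

def cmix(ra, rb, rc):
    # bitwise multiplex: where a bit of rb is 1 take ra, else take rc
    return (ra & rb) | (rc & ~rb & MASK64)
```
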
## count leading/trailing zeros with mask

In v3.1 p105:

```
count ← 0
do i = 0 to 63
    if ((RB)i = 1) then do
        if ((RS)i = 1) then break
        count ← count + 1
    end
RA ← EXTZ64(count)
```

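A Python model of the leading-zeros case (an interpretation of the pseudocode above: Power bit numbering has bit 0 as the MSB, so the scan runs from the most significant bit down, and only masked positions are counted):

```python
def cntlzdm(rs, rb):
    # count zeros of rs, scanning from the MSB, at positions where
    # mask rb is 1, stopping at the first masked 1 bit
    count = 0
    for i in range(64):
        bit = 63 - i          # Power bit i == LSB-numbered bit 63-i
        if (rb >> bit) & 1:
            if (rs >> bit) & 1:
                break
            count += 1
    return count
```
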
## bit deposit

pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106

```
m ← 0
k ← 0
do while (m < 64)
    if VSR[VRB+32].dword[i].bit[63-m] = 1 then do
        result = VSR[VRA+32].dword[i].bit[63-k]
        VSR[VRT+32].dword[i].bit[63-m] = result
        k = k + 1
    end
    m = m + 1
```

```
uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> j) & 1)
                r |= uint_xlen_t(1) << i;
            j++;
        }
    return r;
}
```

## bit extract

The other way round: pextd, identical to RV bext, found in v3.1 p196.

```
uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> i) & 1)
                r |= uint_xlen_t(1) << j;
            j++;
        }
    return r;
}
```

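The deposit and extract operations are inverses in the following sense: `bext` gathers the masked bits of a word into the low bits, and `bdep` scatters them back out to the masked positions. A round-trip check of this property, using 64-bit Python models of the C loops above (XLEN = 64 assumed):

```python
def bdep(ra, rb):
    # scatter the low bits of ra into the positions where rb is 1
    r, j = 0, 0
    for i in range(64):
        if (rb >> i) & 1:
            if (ra >> j) & 1:
                r |= 1 << i
            j += 1
    return r

def bext(ra, rb):
    # gather the bits of ra at positions where rb is 1 into the low bits
    r, j = 0, 0
    for i in range(64):
        if (rb >> i) & 1:
            if (ra >> i) & 1:
                r |= 1 << j
            j += 1
    return r
```
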
## centrifuge

Found in v3.1 p106, so not to be added here.

```
ptr0 ← 0
ptr1 ← 0
do i = 0 to 63
    if ((RB)i = 0) then do
        result[ptr0] ← (RS)i
        ptr0 ← ptr0 + 1
    end
    if ((RB)63-i = 1) then do
        result[63-ptr1] ← (RS)63-i
        ptr1 ← ptr1 + 1
    end
RA ← result
```

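Equivalently, in Python (an interpretation of the pseudocode above): bits of `RS` under mask-0 are gathered to the left of the result, and bits under mask-1 to the right, each group keeping its original relative order:

```python
def cfuged(rs, rb):
    left, right = [], []          # bit lists, most significant first
    for i in range(63, -1, -1):   # scan from MSB to LSB
        bit = (rs >> i) & 1
        (right if (rb >> i) & 1 else left).append(bit)
    result = 0
    for b in left + right:        # mask-0 bits first (leftmost)
        result = (result << 1) | b
    return result
```

With an all-zero or all-one mask the result is unchanged, since one group is empty.
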
## bit to byte permute

Similar to the matrix permute in RV bitmanip, which has XOR and OR variants,
these perform a transpose (bmatflip).
TODO: this looks like VSX; is there a scalar variant
in v3.0/1 already?

```
do j = 0 to 7
    do k = 0 to 7
        b = VSR[VRB+32].dword[i].byte[k].bit[j]
        VSR[VRT+32].dword[i].byte[j].bit[k] = b
```

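A model of the transpose, using LSB0 numbering for simplicity (mapping byte/bit coordinates `(k, j)` to `(j, k)` yields the same permutation under MSB0 or LSB0 numbering, so the convention does not change the result):

```python
def bmatflip(dword):
    # transpose the 8x8 bit matrix held in a 64-bit doubleword:
    # bit j of byte k moves to bit k of byte j
    out = 0
    for j in range(8):
        for k in range(8):
            b = (dword >> (k * 8 + j)) & 1
            out |= b << (j * 8 + k)
    return out
```

Being a transpose, the operation is its own inverse.
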
## grev

Superseded by grevlut.

Based on RV bitmanip, this is also known as a butterfly network. However,
where a butterfly network allows setting of every crossbar in
every row and every column, generalised-reverse (grev) only allows
a per-row decision: every entry in the same row must either switch or
not switch.

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />

```
uint64_t grev64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1)  x = ((x & 0x5555555555555555LL) << 1) |
                        ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2)  x = ((x & 0x3333333333333333LL) << 2) |
                        ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4)  x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                        ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8)  x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
                        ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
                        ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
                        ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```

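Usage notes: `shamt = 63` applies every swap stage, giving a full 64-bit bit-reversal; `shamt = 7` reverses the bits within each byte; `shamt = 56` reverses the bytes (a byte-swap); and each setting is an involution. A Python translation of the C above can check these properties (`(x >> sh) & lo` replaces the C's `(x & hi) >> sh`, since each high mask is the low mask shifted left):

```python
def grev64(ra, rb):
    x = ra & ((1 << 64) - 1)
    shamt = rb & 63
    # one conditional swap stage per bit of shamt, smallest stride first
    for sh, lo in [(1, 0x5555555555555555), (2, 0x3333333333333333),
                   (4, 0x0F0F0F0F0F0F0F0F), (8, 0x00FF00FF00FF00FF),
                   (16, 0x0000FFFF0000FFFF), (32, 0x00000000FFFFFFFF)]:
        if shamt & sh:
            x = ((x & lo) << sh) | ((x >> sh) & lo)
    return x
```
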
## gorc

Based on RV bitmanip; gorc is superseded by grevlut.

```
uint32_t gorc32(uint32_t RA, uint32_t RB)
{
    uint32_t x = RA;
    int shamt = RB & 31;
    if (shamt & 1)  x |= ((x & 0x55555555) << 1)  | ((x & 0xAAAAAAAA) >> 1);
    if (shamt & 2)  x |= ((x & 0x33333333) << 2)  | ((x & 0xCCCCCCCC) >> 2);
    if (shamt & 4)  x |= ((x & 0x0F0F0F0F) << 4)  | ((x & 0xF0F0F0F0) >> 4);
    if (shamt & 8)  x |= ((x & 0x00FF00FF) << 8)  | ((x & 0xFF00FF00) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
    return x;
}
uint64_t gorc64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1)  x |= ((x & 0x5555555555555555LL) << 1) |
                         ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2)  x |= ((x & 0x3333333333333333LL) << 2) |
                         ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4)  x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                         ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8)  x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
                         ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
                         ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
                         ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```
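
gorc is a generalised "or-combine": each stage ORs a bit into its butterfly partner instead of swapping, so with all stages enabled any set bit floods its entire word. A Python translation of the 32-bit C variant above demonstrates that saturating behaviour:

```python
def gorc32(ra, rb):
    x = ra & 0xFFFFFFFF
    shamt = rb & 31
    # OR-accumulating counterpart of the grev swap stages;
    # (x >> sh) & lo is equivalent to the C's (x & hi) >> sh
    for sh, lo in [(1, 0x55555555), (2, 0x33333333), (4, 0x0F0F0F0F),
                   (8, 0x00FF00FF), (16, 0x0000FFFF)]:
        if shamt & sh:
            x |= ((x & lo) << sh) | ((x >> sh) & lo)
    return x & 0xFFFFFFFF
```
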


# Appendix

see [[bitmanip/appendix]]
