1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10 * binutils <https://bugs.libre-soc.org/show_bug.cgi?id=836>
11 * shift-and-add <https://bugs.libre-soc.org/show_bug.cgi?id=968>
12
13 # bitmanipulation
14
15 **DRAFT STATUS**
16
17 pseudocode: [[openpower/isa/bitmanip]]
18
19 this extension amalgamates bitmanipulation primitives from many sources,
20 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
21 Also included are DSP/Multimedia operations suitable for Audio/Video.
22 Vectorisation and SIMD are removed: these are straight scalar (element)
23 operations making them suitable for embedded applications. Vectorisation
24 Context is provided by [[openpower/sv]].
25
26 When combined with SV, scalar variants of bitmanip operations found in
27 VSX are added so that the Packed SIMD aspects of VSX may be retired as
28 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
29 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
30 or embedded scenarios.
31
32 ternlogv is experimental and is the only operation that may be considered
33 a "Packed SIMD". It is added as a variant of the already well-justified
34 ternlog operation (done in AVX512 as an immediate only) "because it
35 looks fun". As it is based on the LUT4 concept it will allow accelerated
36 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
37 achieve similar objectives.
38
39 general-purpose Galois Field 2^M operations are added so as to avoid
40 huge custom opcode proliferation across many areas of Computer Science.
41 however for convenience and also to avoid setup costs, some of the more
42 common operations (clmul, crc32) are also added. The expectation is
43 that these operations would all be covered by the same pipeline.
44
45 note that there are brownfield spaces below that could incorporate
46 some of the set-before-first and other scalar operations listed in
47 [[sv/mv.swizzle]],
48 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
49 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
50
51 Useful resource:
52
53 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
54 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
55 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
56 * <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
57
58 [[!inline pages="openpower/sv/draft_opcode_tables" quick="yes" raw="yes" ]]
59
60 # binary and ternary bitops
61
62 Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
63 bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
65 another register (4-bit for the binary instructions). The binary lookup
66 instructions have CR Field lookup variants due to CR Fields being 4 bit.
67
68 Like the x86 AVX512F
69 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
70 instructions.
71
72 ## ternlogi
73
74 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
75 | -- | -- | --- | --- | ----- | --- |--|
76 | NN | RT | RA | RB | im0-7 | 00 |Rc|
77
78 lut3(imm, a, b, c):
79 idx = c << 2 | b << 1 | a
80 return imm[idx] # idx by LSB0 order
81
82 for i in range(64):
83 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
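
As a non-normative illustration of the pseudocode above, the following
Python sketch applies `ternlogi` bit-wise across three 64-bit values;
the immediate `0b11101000` (chosen purely for the example) implements a
per-bit majority vote.

```python
# Python sketch of the ternlogi pseudocode above (illustrative, not normative).
def lut3(imm, a, b, c):
    idx = (c << 2) | (b << 1) | a
    return (imm >> idx) & 1           # imm indexed in LSB0 order

def ternlogi(rt, ra, rb, imm):
    res = 0
    for i in range(64):
        res |= lut3(imm, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1) << i
    return res

# imm=0b11101000 is the majority function: result bit set where at least
# two of the three input bits are set
assert ternlogi(0b1100, 0b1010, 0b1001, 0b11101000) == 0b1000
```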
84
85 ## binlut
86
87 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
88 lookup table is 4 bits wide not 8 bits, and secondly the lookup
89 table comes from a register not an immediate.
90
91 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
92 | -- | -- | --- | --- | ----- |--------|---------|
93 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
94 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
95
96 For binlut, the 4-bit LUT may be selected from either the high nibble
97 or the low nibble of the first byte of RC:
98
99 lut2(imm, a, b):
100 idx = b << 1 | a
101 return imm[idx] # idx by LSB0 order
102
103 imm = (RC>>(nh*4))&0b1111
104 for i in range(64):
105 RT[i] = lut2(imm, RB[i], RA[i])
106
107 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
108
109 for i in range(64):
110 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
111
112 When Vectorised with SVP64, as usual both source and destination may be
113 Vector or Scalar.
114
*Programmer's note: a dynamic ternary lookup may be synthesised from
a pair of `binlut` instructions followed by a `ternlogi` to select which
of the two results to merge. Use `nh` to select which nibble of the RC
source register to use as the lookup table (`nh=1` selects the high
nibble): keeping an 8-bit LUT3 in RC, the first `binlut` instruction
sets nh=0 and the second nh=1, as sketched below.*
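
The following Python sketch (a non-normative illustration of the note
above, reusing the lut2/lut3 pseudocode) shows the idea: two `binlut`
lookups cover the `c=0` and `c=1` halves of an 8-bit LUT3 held in RC,
and a `ternlogi` with immediate `0b11001010` then acts as a per-bit
multiplexer, selecting between the two results based on the third operand.

```python
# Non-normative sketch: dynamic LUT3 from two binlut ops plus a ternlogi merge.
def lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1

def lut3(imm8, a, b, c):
    return (imm8 >> ((c << 2) | (b << 1) | a)) & 1

def binlut(ra, rb, rc, nh):
    imm4 = (rc >> (nh * 4)) & 0b1111
    return sum(lut2(imm4, (rb >> i) & 1, (ra >> i) & 1) << i for i in range(64))

def ternlogi(rt, ra, rb, imm8):
    return sum(lut3(imm8, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1) << i
               for i in range(64))

def dynamic_lut3(a, b, c, lut3_in_rc):
    lo = binlut(b, a, lut3_in_rc, nh=0)   # LUT3 entries for c=0
    hi = binlut(b, a, lut3_in_rc, nh=1)   # LUT3 entries for c=1
    # imm 0b11001010 is a per-bit mux: result = c ? hi : lo
    return ternlogi(c, hi, lo, 0b11001010)

# cross-check against a direct software LUT3 evaluation
A, B, C, LUT3 = 0b1100, 0b1010, 0b0110, 0b10010110
expect = sum(lut3(LUT3, (A >> i) & 1, (B >> i) & 1, (C >> i) & 1) << i
             for i in range(64))
assert dynamic_lut3(A, B, C, LUT3) == expect
```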
121
122 ## crternlogi
123
Another mode of selection would be CR Fields rather than Ints.
125
126 | 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
127 | -- | -- | --- | --- | --- |-----|----- | -----|--|
128 | NN | BT | BA | BB | BC |m0-2 | imm | 01 |m3|
129
mask = m0..m3
for i in range(4):
    a, b, c = CRs[BA][i], CRs[BB][i], CRs[BC][i]
    if mask[i]: CRs[BT][i] = lut3(imm, a, b, c)
134
135 This instruction is remarkably similar to the existing crops, `crand` etc.
136 which have been noted to be a 4-bit (binary) LUT. In effect `crternlogi`
137 is the ternary LUT version of crops, having an 8-bit LUT.
138
139 ## crbinlog
140
With ternary (LUT3) dynamic instructions being very costly,
and CR Fields being only 4 bits, a binary (LUT2) variant is better.
143
144 | 0.5|6.8 | 9.11|12.14|15.17|18.21|22...30 |31|
145 | -- | -- | --- | --- | --- |-----| -------- |--|
146 | NN | BT | BA | BB | BC |m0-m3|000101110 |0 |
147
mask = m0..m3
for i in range(4):
    a, b = CRs[BA][i], CRs[BB][i]
    if mask[i]: CRs[BT][i] = lut2(CRs[BC], a, b)
152
153 When SVP64 Vectorised any of the 4 operands may be Scalar or
154 Vector, including `BC` meaning that multiple different dynamic
155 lookups may be performed with a single instruction.
156
*Programmer's note: just as with binlut and ternlogi, a pair
of crbinlog instructions followed by a merging crternlogi may
be deployed to synthesise dynamic ternary (LUT3) CR Field
manipulation.*
161
162 # int ops
163
## min/max
165
166 required for the [[sv/av_opcodes]]
167
signed and unsigned min/max for integers. This is sort-of partly
synthesisable in [[sv/svp64]] with pred-result as long as the dest reg
is one of the sources, but not both signed and unsigned. When the dest
is also one of the sources and the mv fails due to the CR bit-test failing,
this will only overwrite the dest where the src is greater (or less).

signed/unsigned min/max gives more flexibility.
175
176 X-Form
177
178 * XO=0001001110, itype=0b00 min, unsigned
179 * XO=0101001110, itype=0b01 min, signed
180 * XO=0011001110, itype=0b10 max, unsigned
181 * XO=0111001110, itype=0b11 max, signed
182
183
184 ```
185 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
186 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
187 }
188 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
189 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
190 }
191 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
192 { return rs1 < rs2 ? rs1 : rs2;
193 }
194 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
195 { return rs1 > rs2 ? rs1 : rs2;
196 }
197 ```
198
199 ## average
200
201 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
202 but not scalar
203
204 ```
205 uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
return (rs1 + rs2 + 1) >> 1;
207 }
208 ```
209
210 ## absdu
211
212 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
213 but not scalar
214
215 ```
216 uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
218 }
219 ```
220
221 ## abs-accumulate
222
223 required for the [[sv/av_opcodes]], these are needed for motion estimation.
224 both are overwrite on RS.
225
226 ```
uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
233 ```
234
For SVP64, the twin Elwidths allow e.g. a 16-bit accumulator for 8-bit
236 differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
237 SVP64 designation from RS-as-dest. This gives a limited range of
238 non-overwrite capability.
239
240 # shift-and-add <a name="shift-add"> </a>
241
Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Adding more LD/ST instructions would be too complex, so a compromise is to add
shift-and-add, which replaces a pair of explicit instructions in hot-loops.
245
246 ```
247 # 1.6.27 Z23-FORM
|0     |6     |11    |16    |21  |23      |31 |
| PO   |  RT  |   RA |   RB | sm |   XO   |Rc |
250 ```
251
252 Pseudo-code (shadd):
253 shift <- sm & 0x3 # Ensure sm is 2-bit
254 shift <- shift + 1 # Shift is between 1-4
255 sum[0:63] <- ((RB) << shift) + (RA) # Shift RB, add RA
256 RT <- sum # Result stored in RT
257
258 Is Rc used to indicate the two modes?
259
260 Pseudo-code (shadduw):
261 shift <- sm & 0x3 # Ensure sm is 2-bit
262 shift <- shift + 1 # Shift is between 1-4
n <- (RB) & 0xFFFFFFFF # Keep only the lower word (32 bits) of RB
264 sum[0:63] <- (n << shift) + (RA) # Shift n, add RA
265 RT <- sum # Result stored in RT
266
267 ```
268 uint_xlen_t shadd(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
269 sm = sm & 0x3;
270 return (RB << (sm+1)) + RA;
271 }
272
273 uint_xlen_t shadduw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
274 uint_xlen_t n = RB & 0xFFFFFFFF;
275 sm = sm & 0x3;
276 return (n << (sm+1)) + RA;
277 }
278 ```
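
As a quick illustration of the C above (a sketch, not part of the spec):
indexing an array of 8-byte elements needs a shift of 3, i.e. `sm=2`, so
a single `shadd` stands in for the usual shift-then-add pair.

```python
# Model of shadd semantics, mirroring the C pseudo-code above.
def shadd(ra, rb, sm):
    return ((rb << ((sm & 0x3) + 1)) + ra) & ((1 << 64) - 1)

base, i = 0x10000000, 5
# address of the i-th 8-byte element: base + i*8, using sm=2 (shift of 3)
assert shadd(base, i, sm=2) == base + i * 8
```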
279
280 # bitmask set
281
based on RV bitmanip single-bit set; instruction format similar to shift
[[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
rldicl, but only the immediate version). However, bitmask-invert is not,
and set/clr are not covered, although they can use the same Shift ALU.

the bmext (RB) version is not the same as rldicl because bmext is a right
shift by RC, where rldicl is a left rotate. For the immediate version
this does not matter, so a bmexti is not required. For bmrev however there
is no direct equivalent, and consequently a bmrevi is required.
291
292 bmset (register for mask amount) is particularly useful for creating
293 predicate masks where the length is a dynamic runtime quantity.
294 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
295 in a single instruction without needing to initialise or depend on any
296 other registers.
297
298 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
299 | -- | -- | --- | --- | --- | ------- |--| ----- |
300 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
301
302 Immediate-variant is an overwrite form:
303
304 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
305 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
306 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
307
308 ```
309 def MASK(x, y):
310 if x < y:
311 x = x+1
312 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
313 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
314 elif x == y:
315 return 1 << x
316 else:
317 x = x+1
318 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
319 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
320 return mask_a ^ mask_b
321
322
323 uint_xlen_t bmset(RS, RB, sh)
324 {
325 int shamt = RB & (XLEN - 1);
326 mask = (2<<sh)-1;
327 return RS | (mask << shamt);
328 }
329
330 uint_xlen_t bmclr(RS, RB, sh)
331 {
332 int shamt = RB & (XLEN - 1);
333 mask = (2<<sh)-1;
334 return RS & ~(mask << shamt);
335 }
336
337 uint_xlen_t bminv(RS, RB, sh)
338 {
339 int shamt = RB & (XLEN - 1);
340 mask = (2<<sh)-1;
341 return RS ^ (mask << shamt);
342 }
343
344 uint_xlen_t bmext(RS, RB, sh)
345 {
346 int shamt = RB & (XLEN - 1);
347 mask = (2<<sh)-1;
348 return mask & (RS >> shamt);
349 }
350 ```
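
A small Python sketch of the `bmset` pseudo-code above, illustrating the
predicate-mask use-case mentioned earlier: with RS=0 and RB=0 a single
instruction yields a run of ones whose length (`sh+1` here, with the
shift amount coming from a register in the dynamic form) needs no other
registers to be set up.

```python
# Sketch following the bmset pseudo-code above (not the normative spec).
XLEN = 64

def bmset(rs, rb, sh):
    shamt = rb & (XLEN - 1)
    mask = (2 << sh) - 1
    return (rs | (mask << shamt)) & ((1 << XLEN) - 1)

# a run of sh+1 ones starting at bit position RB
assert bmset(0, 0, sh=4) == 0b11111
```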
351
bitmask extract with reverse: this can be done by bit-order-inverting all
of RB and getting bits of RB from the opposite end.

when RA is zero, no shift occurs. This makes bmrev useful for
simply reversing all bits of a register.
357
358 ```
359 msb = ra[5:0];
360 rev[0:msb] = rb[msb:0];
361 rt = ZE(rev[msb:0]);
362
363 uint_xlen_t bmrevi(RA, RB, sh)
364 {
365 int shamt = XLEN-1;
366 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
367 shamt = (XLEN-1)-shamt; # shift other end
368 brb = bitreverse(GPR(RB)) # swap LSB-MSB
369 mask = (2<<sh)-1;
370 return mask & (brb >> shamt);
371 }
372
373 uint_xlen_t bmrev(RA, RB, RC) {
374 return bmrevi(RA, RB, GPR(RC) & 0b111111);
375 }
376 ```
377
378 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
379 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
380 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
381
382 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
383 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
384 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
385
386 # grevlut <a name="grevlut"> </a>
387
generalised reverse combined with a pair of LUT2s, allowing
a constant `0b0101...0101` when RA=0, and an option to invert
(including when RA=0, giving a constant `0b1010...1010` as the
initial value), provides a wide range of instructions
and a means to set hundreds of regular 64-bit patterns with a
single 32-bit instruction.

the two LUT2s are applied left-half (when not swapping)
and right-half (when swapping) so as to allow a wider
range of options.
398
399 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
400
401 * A value of `0b11001010` for the immediate provides
402 the functionality of a standard "grev".
403 * `0b11101110` provides gorc
404
405 grevlut should be arranged so as to produce the constants
406 needed to put into bext (bitextract) so as in turn to
407 be able to emulate x86 pmovmask instructions
408 <https://www.felixcloutier.com/x86/pmovmskb>.
409 This only requires 2 instructions (grevlut, bext).
410
411 Note that if the mask is required to be placed
412 directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
414 may be used instead, bearing in mind that sv.ori
415 is a 64-bit instruction, and `VL` must have been
416 set to the required length:
417
418 sv.ori./elwid=8 r10.v, r10.v, 0
419
420 The following settings provide the required mask constants:
421
422 | RA=0 | RB | imm | iv | result |
423 | ------- | ------- | ---------- | -- | ---------- |
424 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
425 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
426 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
427 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
428 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
429 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
430
A better diagram, below, shows the correct ordering of shamt (RB). A LUT2
is applied to all locations marked in red using the first 4
bits of the immediate, and a separate LUT2 is applied to all
locations in green using the upper 4 bits of the immediate.
435
436 <img src="/openpower/sv/grevlut.png" width=700 />
437
438 demo code [[openpower/sv/grevlut.py]]
439
440 ```
441 lut2(imm, a, b):
442 idx = b << 1 | a
443 return imm[idx] # idx by LSB0 order
444
dorow(imm8, step_i, chunk_size, is32b):
    for j in 0 to 31 if is32b else 63:
        if (j & chunk_size) == 0
            imm = imm8[0..3]
        else
            imm = imm8[4..7]
        step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
    return step_o
453
454 uint64_t grevlut(uint64_t RA, uint64_t RB, uint8 imm, bool iv, bool is32b)
455 {
456 uint64_t x = 0x5555_5555_5555_5555;
457 if (RA != 0) x = GPR(RA);
458 if (iv) x = ~x;
459 int shamt = RB & 31 if is32b else 63
460 for i in 0 to (6-is32b)
461 step = 1<<i
462 if (shamt & step) x = dorow(imm, x, step, is32b)
463 return x;
464 }
465 ```
466
467 A variant may specify different LUT-pairs per row,
468 using one byte of RB for each. If it is desired that
469 a particular row-crossover shall not be applied it is
470 a simple matter to set the appropriate LUT-pair in RB
471 to effect an identity transform for that row (`0b11001010`).
472
473 ```
474 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
475 {
476 uint64_t x = 0x5555_5555_5555_5555;
477 if (RA != 0) x = GPR(RA);
478 if (iv) x = ~x;
479 for i in 0 to (6-is32b)
480 step = 1<<i
481 imm = (RB>>(i*8))&0xff
482 x = dorow(imm, x, step, is32b)
483 return x;
484 }
485
486 ```
487
488 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
489 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
490 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
491 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
492
493 An equivalent to `grevlogw` may be synthesised by setting the
494 appropriate bits in RB to set the top half of RT to zero.
495 Thus an explicit grevlogw instruction is not necessary.
496
497 # xperm
498
499 based on RV bitmanip.
500
RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate-variant allows up to an 8-bit
pattern (repeated) to be targeted at different parts of RT.

xperm shares some similarity with one of the uses of bmator
in that xperm indices are binary addressing where bmator
may be considered to be unary addressing.
508
509 ```
510 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
511 {
512 uint_xlen_t r = 0;
513 uint_xlen_t sz = 1LL << sz_log2;
514 uint_xlen_t mask = (1LL << sz) - 1;
515 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
516 for (int i = 0; i < XLEN; i += sz) {
517 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
518 if (pos < XLEN)
519 r |= ((RB >> pos) & mask) << i;
520 }
521 return r;
522 }
523 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
524 {
525 uint_xlen_t r = 0;
526 uint_xlen_t sz = 1LL << sz_log2;
527 uint_xlen_t mask = (1LL << sz) - 1;
528 for (int i = 0; i < XLEN; i += sz) {
529 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
530 if (pos < XLEN)
531 r |= ((RB >> pos) & mask) << i;
532 }
533 return r;
534 }
535 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
536 { return xperm(RA, RB, 2); }
537 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
538 { return xperm(RA, RB, 3); }
539 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
540 { return xperm(RA, RB, 4); }
541 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
542 { return xperm(RA, RB, 5); }
543 ```
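
As a worked (non-normative) example of the C above: with byte-sized
elements (`xperm_b`, `sz_log2=3`) and RA holding the index bytes
7,6,...,0, xperm performs a byte-reverse of RB.

```python
# Python sketch of xperm with byte-sized elements (xperm_b), XLEN=64.
XLEN = 64

def xperm(ra, rb, sz_log2):
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, XLEN, sz):
        pos = ((ra >> i) & mask) << sz_log2
        if pos < XLEN:
            r |= ((rb >> pos) & mask) << i
    return r

# index bytes 7,6,...,0 (byte 0 of RA is 7, byte 7 is 0) give a byte-reverse
indices = 0x0001020304050607
assert xperm(indices, 0x0123456789ABCDEF, 3) == 0xEFCDAB8967452301
```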
544
545 # bitmatrix
546
bmatflip and bmatxor are found in the Cray XMT; in x86 the equivalent
is known as GF2P8AFFINEQB. Uses:
549
550 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
551 * SM4, Reed Solomon, RAID6
552 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
553 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
554 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
555
556 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
557 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
558 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
559
560
561 ```
562 uint64_t bmatflip(uint64_t RA)
563 {
564 uint64_t x = RA;
565 x = shfl64(x, 31);
566 x = shfl64(x, 31);
567 x = shfl64(x, 31);
568 return x;
569 }
570
571 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
572 // transpose of RA
573 uint64_t RAt = bmatflip(RA);
574 uint8_t u[8]; // rows of RS
575 uint8_t v[8]; // cols of RA
576 for (int i = 0; i < 8; i++) {
577 u[i] = RS >> (i*8);
578 v[i] = RAt >> (i*8);
579 }
580 uint64_t bit, x = 0;
581 for (int i = 0; i < 64; i++) {
582 bit = (imm >> (i%8)) & 1;
583 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
584 x |= bit << i;
585 }
586 return x;
587 }
588
589 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
590 return bmatxori(RA, RB, 0xff)
591 }
592
593 uint64_t bmator(uint64_t RA, uint64_t RB) {
594 // transpose of RB
595 uint64_t RBt = bmatflip(RB);
596 uint8_t u[8]; // rows of RA
597 uint8_t v[8]; // cols of RB
598 for (int i = 0; i < 8; i++) {
599 u[i] = RA >> (i*8);
600 v[i] = RBt >> (i*8);
601 }
602 uint64_t x = 0;
603 for (int i = 0; i < 64; i++) {
604 if ((u[i / 8] & v[i % 8]) != 0)
605 x |= 1LL << i;
606 }
607 return x;
608 }
609
610 uint64_t bmatand(uint64_t RA, uint64_t RB) {
611 // transpose of RB
612 uint64_t RBt = bmatflip(RB);
613 uint8_t u[8]; // rows of RA
614 uint8_t v[8]; // cols of RB
615 for (int i = 0; i < 8; i++) {
616 u[i] = RA >> (i*8);
617 v[i] = RBt >> (i*8);
618 }
619 uint64_t x = 0;
620 for (int i = 0; i < 64; i++) {
621 if ((u[i / 8] & v[i % 8]) == 0xff)
622 x |= 1LL << i;
623 }
624 return x;
625 }
626 ```
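
The following Python sketch (an illustration, not the pseudocode above)
models a 64-bit register as an 8x8 bit-matrix, one byte per row, showing
`bmatflip` as a transpose and `bmator` as a boolean (OR-accumulating)
matrix multiply.

```python
# Model: a 64-bit value as an 8x8 bit-matrix, row-major, one byte per row.
def bit(x, r, c):
    return (x >> (r * 8 + c)) & 1

def bmatflip_model(ra):
    # transpose: bit (r,c) of the result is bit (c,r) of RA
    return sum(bit(ra, c, r) << (r * 8 + c) for r in range(8) for c in range(8))

def bmator_model(ra, rb):
    # boolean matrix multiply: result(r,c) = OR over k of RA(r,k) AND RB(k,c)
    x = 0
    for r in range(8):
        for c in range(8):
            if any(bit(ra, r, k) & bit(rb, k, c) for k in range(8)):
                x |= 1 << (r * 8 + c)
    return x

# transposing twice is the identity
assert bmatflip_model(bmatflip_model(0x0123456789ABCDEF)) == 0x0123456789ABCDEF
# multiplying by the identity bit-matrix leaves the operand unchanged
IDENT = 0x8040201008040201
assert bmator_model(IDENT, 0x0123456789ABCDEF) == 0x0123456789ABCDEF
```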
627
628 # Introduction to Carry-less and GF arithmetic
629
630 * obligatory xkcd <https://xkcd.com/2595/>
631
632 There are three completely separate types of Galois-Field-based arithmetic
633 that we implement which are not well explained even in introductory
634 literature. A slightly oversimplified explanation is followed by more
635 accurate descriptions:
636
* `GF(2)` carry-less binary arithmetic. This is not actually a Galois Field,
  but is commonly (if mistakenly) referred to as GF(2) - see below as to why.
639 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
640 Galois Fields
641 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
642 (2^N) and a second "reducing" polynomial (similar to a prime number), these
643 are said to be GF(2^N) arithmetic.
644
645 further detailed and more precise explanations are provided below
646
647 * **Polynomials with coefficients in `GF(2)`**
648 (aka. Carry-less arithmetic -- the `cl*` instructions).
649 This isn't actually a Galois Field, but its coefficients are. This is
650 basically binary integer addition, subtraction, and multiplication like
651 usual, except that carries aren't propagated at all, effectively turning
652 both addition and subtraction into the bitwise xor operation. Division and
653 remainder are defined to match how addition and multiplication works.
654 * **Galois Fields with a prime size**
655 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
656 This is basically just the integers mod `p`.
657 * **Galois Fields with a power-of-a-prime size**
658 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
659 integer `n > 0`).
660 We only implement these for `p == 2`, called Binary Galois Fields
661 (`GF(2^n)` -- the `gfb*` instructions).
662 For any prime `p`, `GF(p^n)` is implemented as polynomials with
663 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
  remainders of dividing by a specifically chosen polynomial in `GF(p)` called
  the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
  Polynomial must be an irreducible polynomial (like primes, but for
667 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
668 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
669 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
670 polynomials used to implement `GF(p^n)`.
671
672 Many implementations and much of the literature do not make a clear
673 distinction between these three categories, which makes it confusing
674 to understand what their purpose and value is.
675
676 * carry-less multiply is extremely common and is used for the ubiquitous
677 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
678 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
679 has significant uses throughout cryptography
680 * GF(p) is the basis again of a significant quantity of algorithms
681 (TODO, list them, jacob knows what they are), even though the
682 modulo is limited to be below 64-bit (size of a scalar int)
683
684 # Instructions for Carry-less Operations
685
686 aka. Polynomials with coefficients in `GF(2)`
687
688 Carry-less addition/subtraction is simply XOR, so a `cladd`
689 instruction is not provided since the `xor[i]` instruction can be used instead.
690
691 These are operations on polynomials with coefficients in `GF(2)`, with the
692 polynomial's coefficients packed into integers with the following algorithm:
693
694 ```python
695 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
696 ```
697
698 ## Carry-less Multiply Instructions
699
700 based on RV bitmanip
701 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
702 <https://www.felixcloutier.com/x86/pclmulqdq> and
703 <https://en.m.wikipedia.org/wiki/Carry-less_product>
704
705 They are worth adding as their own non-overwrite operations
706 (in the same pipeline).
707
708 ### `clmul` Carry-less Multiply
709
710 ```python
711 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
712 ```
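
As a small illustrative model (not the inlined reference implementation
above): the partial products of a carry-less multiply are combined with
XOR rather than ADD, so no carries propagate between bit positions.

```python
# Minimal carry-less multiply model, returning the low `width` bits.
def clmul_model(a, b, width=64):
    r = 0
    for i in range(width):
        if (b >> i) & 1:
            r ^= a << i               # XOR instead of ADD: no carry propagation
    return r & ((1 << width) - 1)

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 clmul 0b11 == 0b101
assert clmul_model(0b11, 0b11) == 0b101
```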
713
714 ### `clmulh` Carry-less Multiply High
715
716 ```python
717 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
718 ```
719
720 ### `clmulr` Carry-less Multiply (Reversed)
721
722 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
723 bit-reversed inputs.
724
725 ```python
726 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
727 ```
728
729 ## `clmadd` Carry-less Multiply-Add
730
731 ```
732 clmadd RT, RA, RB, RC
733 ```
734
735 ```
736 (RT) = clmul((RA), (RB)) ^ (RC)
737 ```
738
739 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
740
741 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
742 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
743 to avoid the need for a temp register. RS is written to as well as RT.
744
Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating a
multiplicative inverse, which may not exist in every Ring, the
Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)
751
752 ```
753 cltmadd RT, RA, RB, RC
754 ```
755
756 TODO: add link to explanation for where `RS` comes from.
757
758 ```
759 a = (RA)
760 c = (RC)
761 # read all inputs before writing to any outputs in case
762 # an input overlaps with an output register.
763 (RT) = clmul(a, (RB)) ^ c
764 (RS) = a ^ c
765 ```
766
767 ## `cldivrem` Carry-less Division and Remainder
768
769 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
770 for other instructions.
771
772 ```python
773 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
774 ```
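
A minimal Python sketch of carry-less (polynomial) division over GF(2),
purely for illustration alongside the inlined reference above.

```python
# Polynomial long division over GF(2): subtracting partial remainders is XOR.
def cldivrem_model(n, d):
    assert d != 0
    q = 0
    while n.bit_length() >= d.bit_length():
        shift = n.bit_length() - d.bit_length()
        q |= 1 << shift
        n ^= d << shift
    return q, n

# (x^2 + 1) = (x + 1)*(x + 1) over GF(2), so dividing by (x + 1) is exact
assert cldivrem_model(0b101, 0b11) == (0b11, 0)
```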
775
776 ## `cldiv` Carry-less Division
777
778 ```
779 cldiv RT, RA, RB
780 ```
781
782 ```
783 n = (RA)
784 d = (RB)
785 q, r = cldivrem(n, d, width=XLEN)
786 (RT) = q
787 ```
788
789 ## `clrem` Carry-less Remainder
790
791 ```
792 clrem RT, RA, RB
793 ```
794
795 ```
796 n = (RA)
797 d = (RB)
798 q, r = cldivrem(n, d, width=XLEN)
799 (RT) = r
800 ```
801
802 # Instructions for Binary Galois Fields `GF(2^m)`
803
804 see:
805
806 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
807 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
808 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
809
810 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
811 instruction is not provided since the `xor[i]` instruction can be used instead.
812
813 ## `GFBREDPOLY` SPR -- Reducing Polynomial
814
815 In order to save registers and to make operations orthogonal with standard
816 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
817 This also allows hardware to pre-compute useful parameters (such as the
818 degree, or look-up tables) based on the reducing polynomial, and store them
819 alongside the SPR in hidden registers, only recomputing them whenever the SPR
820 is written to, rather than having to recompute those values for every
821 instruction.
822
Because Galois Fields require the reducing polynomial to be an irreducible
polynomial, that guarantees that any polynomial of `degree > 1` must have
the LSB set, since otherwise it would be divisible by the polynomial `x`,
making it reducible and hence no longer a Field. Therefore, we can reuse
the LSB to indicate `degree == XLEN`.
828
829 ```python
830 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
831 ```
832
833 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
834
835 unless this is an immediate op, `mtspr` is completely sufficient.
836
837 ```python
838 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
839 ```
840
841 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
842
843 ```
844 gfbmul RT, RA, RB
845 ```
846
847 ```python
848 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
849 ```
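
A standalone (non-normative) sketch of `GF(2^m)` multiplication, assuming
the AES/Rijndael reducing polynomial `0x11b` rather than taking it from
the `GFBREDPOLY` SPR: a carry-less multiply interleaved with reduction.

```python
# GF(2^8) multiply, reducing modulo the AES polynomial x^8+x^4+x^3+x+1 (0x11b).
def gf2n_mul(a, b, red_poly=0x11b):
    deg = red_poly.bit_length() - 1   # degree of the reducing polynomial (8)
    r = 0
    while b:
        if b & 1:
            r ^= a                    # carry-less add of the partial product
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= red_poly             # reduce whenever the degree reaches deg
    return r

assert gf2n_mul(0x80, 0x02) == 0x1b   # x^7 * x = x^8 = x^4+x^3+x+1 mod 0x11b
assert gf2n_mul(0x53, 0xCA) == 0x01   # 0x53 and 0xCA are inverses in GF(2^8)
```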
850
851 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
852
853 ```
854 gfbmadd RT, RA, RB, RC
855 ```
856
857 ```python
858 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
859 ```
860
861 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
862
863 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
864 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
865 for a temp register. RS is written to as well as RT.
866
867 ```
868 gfbtmadd RT, RA, RB, RC
869 ```
870
871 TODO: add link to explanation for where `RS` comes from.
872
873 ```
874 a = (RA)
875 c = (RC)
876 # read all inputs before writing to any outputs in case
877 # an input overlaps with an output register.
878 (RT) = gfbmadd(a, (RB), c)
879 # use gfbmadd again since it reduces the result
880 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
881 ```
882
883 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
884
885 ```
886 gfbinv RT, RA
887 ```
888
889 ```python
890 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
891 ```
892
893 # Instructions for Prime Galois Fields `GF(p)`
894
895 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
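
The prime modulus `p` used by the `gfp*` instructions is stored in this
dedicated SPR, in the same way that `GFBREDPOLY` holds the reducing
polynomial for the `gfb*` instructions.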
896
897 ## `gfpadd` Prime Galois Field `GF(p)` Addition
898
899 ```
900 gfpadd RT, RA, RB
901 ```
902
903 ```python
904 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
905 ```
906
907 the addition happens on infinite-precision integers
908
909 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
910
911 ```
912 gfpsub RT, RA, RB
913 ```
914
915 ```python
916 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
917 ```
918
919 the subtraction happens on infinite-precision integers
920
921 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
922
923 ```
924 gfpmul RT, RA, RB
925 ```
926
927 ```python
928 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
929 ```
930
931 the multiplication happens on infinite-precision integers
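
An illustrative model (the actual operation takes its modulus from the
`GFPRIME` SPR rather than a parameter): the product is formed at full
precision and only then reduced.

```python
# GF(p) multiply: integer multiply at full precision, then reduce mod p.
def gfpmul_model(ra, rb, prime):
    return (ra * rb) % prime

# example in GF(7): 5 * 4 = 20 = 6 (mod 7)
assert gfpmul_model(5, 4, 7) == 6
```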
932
933 ## `gfpinv` Prime Galois Field `GF(p)` Invert
934
935 ```
936 gfpinv RT, RA
937 ```
938
939 Some potential hardware implementations are found in:
940 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
941
942 ```python
943 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
944 ```
945
946 ## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add
947
948 ```
949 gfpmadd RT, RA, RB, RC
950 ```
951
952 ```python
953 [[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
954 ```
955
956 the multiplication and addition happens on infinite-precision integers
957
958 ## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract
959
960 ```
961 gfpmsub RT, RA, RB, RC
962 ```
963
964 ```python
965 [[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
966 ```
967
968 the multiplication and subtraction happens on infinite-precision integers
969
970 ## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed
971
972 ```
973 gfpmsubr RT, RA, RB, RC
974 ```
975
976 ```python
977 [[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
978 ```
979
980 the multiplication and subtraction happens on infinite-precision integers
981
982 ## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)
983
984 Used in combination with SV FFT REMAP to perform
985 a full Number-Theoretic-Transform in-place. Possible by having 3-in 2-out,
986 to avoid the need for a temp register. RS is written
987 to as well as RT.
988
989 ```
990 gfpmaddsubr RT, RA, RB, RC
991 ```
992
993 TODO: add link to explanation for where `RS` comes from.
994
995 ```
996 factor1 = (RA)
997 factor2 = (RB)
998 term = (RC)
999 # read all inputs before writing to any outputs in case
1000 # an input overlaps with an output register.
1001 (RT) = gfpmadd(factor1, factor2, term)
1002 (RS) = gfpmsubr(factor1, factor2, term)
1003 ```
1004
1005 # Already in POWER ISA or subsumed
1006
Lists operations that are either included as part of
other bitmanip operations, or are already present in
the Power ISA.
1010
1011 ## cmix
1012
1013 based on RV bitmanip, covered by ternlog bitops
1014
1015 ```
1016 uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
1017 return (RA & RB) | (RC & ~RB);
1018 }
1019 ```
1020
1021 ## count leading/trailing zeros with mask
1022
1023 in v3.1 p105
1024
1025 ```
count = 0
do i = 0 to 63
    if ((RB)[i] = 1) then do
        if ((RS)[i] = 1) then break
    end
    count ← count + 1
RA ← EXTZ64(count)
1030 ```
1031
1032 ## bit deposit
1033
pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106
1035
1036 do while(m < 64)
1037 if VSR[VRB+32].dword[i].bit[63-m]=1 then do
1038 result = VSR[VRA+32].dword[i].bit[63-k]
1039 VSR[VRT+32].dword[i].bit[63-m] = result
1040 k = k + 1
1041 m = m + 1
1042
1043 ```
1044
1045 uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
1046 {
1047 uint_xlen_t r = 0;
1048 for (int i = 0, j = 0; i < XLEN; i++)
1049 if ((RB >> i) & 1) {
1050 if ((RA >> j) & 1)
1051 r |= uint_xlen_t(1) << i;
1052 j++;
1053 }
1054 return r;
1055 }
1056
1057 ```
1058
1059 ## bit extract
1060
1061 other way round: identical to RV bext: pextd, found in v3.1 p196
1062
1063 ```
1064 uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
1065 {
1066 uint_xlen_t r = 0;
1067 for (int i = 0, j = 0; i < XLEN; i++)
1068 if ((RB >> i) & 1) {
1069 if ((RA >> i) & 1)
1070 r |= uint_xlen_t(1) << j;
1071 j++;
1072 }
1073 return r;
1074 }
1075 ```
1076
1077 ## centrifuge
1078
1079 found in v3.1 p106 so not to be added here
1080
1081 ```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if ((RB)[i] = 0) then do
        result[ptr0] = (RS)[i]
        ptr0 = ptr0 + 1
    end
    if ((RB)[63-i] = 1) then do
        result[63-ptr1] = (RS)[63-i]
        ptr1 = ptr1 + 1
    end
RA = result
1094 ```
1095
1096 ## bit to byte permute
1097
1098 similar to matrix permute in RV bitmanip, which has XOR and OR variants,
1099 these perform a transpose (bmatflip).
TODO: this looks like VSX - is there a scalar variant
in v3.0/1 already?
1102
1103 do j = 0 to 7
1104 do k = 0 to 7
1105 b = VSR[VRB+32].dword[i].byte[k].bit[j]
1106 VSR[VRT+32].dword[i].byte[j].bit[k] = b
1107
1108 ## grev
1109
superseded by grevlut
1111
1112 based on RV bitmanip, this is also known as a butterfly network. however
1113 where a butterfly network allows setting of every crossbar setting in
1114 every row and every column, generalised-reverse (grev) only allows
1115 a per-row decision: every entry in the same row must either switch or
1116 not-switch.
1117
1118 <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
1119
1120 ```
1121 uint64_t grev64(uint64_t RA, uint64_t RB)
1122 {
1123 uint64_t x = RA;
1124 int shamt = RB & 63;
1125 if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
1126 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1127 if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
1128 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1129 if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1130 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1131 if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
1132 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1133 if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
1134 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1135 if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
1136 ((x & 0xFFFFFFFF00000000LL) >> 32);
1137 return x;
1138 }
1139
1140 ```
1141
1142 ## gorc
1143
based on RV bitmanip, gorc is superseded by grevlut
1145
1146 ```
1147 uint32_t gorc32(uint32_t RA, uint32_t RB)
1148 {
1149 uint32_t x = RA;
1150 int shamt = RB & 31;
1151 if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
1152 if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
1153 if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
1154 if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
1155 if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
1156 return x;
1157 }
1158 uint64_t gorc64(uint64_t RA, uint64_t RB)
1159 {
1160 uint64_t x = RA;
1161 int shamt = RB & 63;
1162 if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
1163 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1164 if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
1165 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1166 if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1167 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1168 if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
1169 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1170 if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
1171 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1172 if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
1173 ((x & 0xFFFFFFFF00000000LL) >> 32);
1174 return x;
1175 }
1176
1177 ```
1178
1179
1180 # Appendix
1181
1182 see [[bitmanip/appendix]]
1183