1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10 * binutils <https://bugs.libre-soc.org/show_bug.cgi?id=836>
11 * shift-and-add <https://bugs.libre-soc.org/show_bug.cgi?id=968>
12
13 # bitmanipulation
14
15 **DRAFT STATUS**
16
17 pseudocode: [[openpower/isa/bitmanip]]
18
19 this extension amalgamates bitmanipulation primitives from many sources,
20 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
21 Also included are DSP/Multimedia operations suitable for Audio/Video.
22 Vectorization and SIMD are removed: these are straight scalar (element)
23 operations making them suitable for embedded applications. Vectorization
24 Context is provided by [[openpower/sv]].
25
26 When combined with SV, scalar variants of bitmanip operations found in
27 VSX are added so that the Packed SIMD aspects of VSX may be retired as
28 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
29 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
30 or embedded scenarios.
31
32 ternlogv is experimental and is the only operation that may be considered
33 a "Packed SIMD". It is added as a variant of the already well-justified
34 ternlog operation (done in AVX512 as an immediate only) "because it
35 looks fun". As it is based on the LUT4 concept it will allow accelerated
36 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
37 achieve similar objectives.
38
39 general-purpose Galois Field 2^M operations are added so as to avoid
40 huge custom opcode proliferation across many areas of Computer Science.
41 however for convenience and also to avoid setup costs, some of the more
42 common operations (clmul, crc32) are also added. The expectation is
43 that these operations would all be covered by the same pipeline.
44
45 note that there are brownfield spaces below that could incorporate
46 some of the set-before-first and other scalar operations listed in
47 [[sv/mv.swizzle]],
48 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
49 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
50
51 Useful resource:
52
53 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
54 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
55 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
56 * <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
57
58 [[!inline pages="openpower/sv/draft_opcode_tables" quick="yes" raw="yes" ]]
59
60 # binary and ternary bitops
61
Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
another register (4-bit for the binary instructions). The binary lookup
instructions have CR Field lookup variants due to CR Fields being 4 bit.
67
68 Like the x86 AVX512F
69 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
70 instructions.
71
72 ## ternlogi
73
74 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
75 | -- | -- | --- | --- | ----- | --- |--|
76 | NN | RT | RA | RB | im0-7 | 00 |Rc|
77
78 lut3(imm, a, b, c):
79 idx = c << 2 | b << 1 | a
80 return imm[idx] # idx by LSB0 order
81
82 for i in range(64):
83 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
84
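For illustration, a minimal Python model of the pseudo-code above (a bitwise
LUT3 across a 64-bit register, with RT as both source and destination); the
function name and the LSB0 bit numbering here are illustrative only:

```python
def ternlogi(RT, RA, RB, imm8):
    result = 0
    for i in range(64):
        # one lookup per bit position, index built from the RT, RA, RB bits
        idx = ((RT >> i) & 1) << 2 | ((RA >> i) & 1) << 1 | ((RB >> i) & 1)
        result |= ((imm8 >> idx) & 1) << i
    return result

# imm8=0x96 implements a 3-input XOR: result bit set when an odd number of inputs is 1
assert ternlogi(0b1100, 0b1010, 0b0011, 0x96) == 0b0101
```
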
85 ## binlut
86
87 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
88 lookup table is 4 bits wide not 8 bits, and secondly the lookup
89 table comes from a register not an immediate.
90
91 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
92 | -- | -- | --- | --- | ----- |--------|---------|
93 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
94 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
95
96 For binlut, the 4-bit LUT may be selected from either the high nibble
97 or the low nibble of the first byte of RC:
98
99 lut2(imm, a, b):
100 idx = b << 1 | a
101 return imm[idx] # idx by LSB0 order
102
103 imm = (RC>>(nh*4))&0b1111
104 for i in range(64):
105 RT[i] = lut2(imm, RB[i], RA[i])
106
107 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
108
109 for i in range(64):
110 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
111
112 When Vectorized with SVP64, as usual both source and destination may be
113 Vector or Scalar.
114
115 *Programmer's note: a dynamic ternary lookup may be synthesised from
116 a pair of `binlut` instructions followed by a `ternlogi` to select which
117 to merge. Use `nh` to select which nibble to use as the lookup table
118 from the RC source register (`nh=1` nibble high), i.e. keeping
119 an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
120 the second nh=1.*
121
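To illustrate the note above, a small Python sketch (bit-level only, not
register-accurate) showing that the two half-nibble LUT2 lookups plus a
final mux reproduce a full dynamic LUT3:

```python
def lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1

def lut3(imm8, a, b, c):
    return (imm8 >> ((c << 2) | (b << 1) | a)) & 1

def synthesised_lut3(imm8, a, b, c):
    lo = lut2(imm8 & 0xF, a, b)         # binlut with nh=0 (low nibble of RC)
    hi = lut2((imm8 >> 4) & 0xF, a, b)  # binlut with nh=1 (high nibble of RC)
    return hi if c else lo              # ternlogi acting as a 2:1 mux on c

# exhaustive check over all LUT3 tables and all input combinations
assert all(synthesised_lut3(imm, a, b, c) == lut3(imm, a, b, c)
           for imm in range(256)
           for a in (0, 1) for b in (0, 1) for c in (0, 1))
```
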
122 ## crternlogi
123
Another mode selection: the operands are CR Fields rather than Integer registers.
125
126 CRB-Form:
127
128 | 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
129 |----|----|----|-----|-----|-----|-----|-----|---|
130 | NN | BF | msk|BFA | msk | BFB | TLI | XO |TLI|
131
    for i in range(4):
        a, b, c = CRs[BF][i], CRs[BFA][i], CRs[BFB][i]
        if msk[i]: CRs[BF][i] = lut3(imm, a, b, c)
135
136 This instruction is remarkably similar to the existing crops, `crand` etc.
137 which have been noted to be a 4-bit (binary) LUT. In effect `crternlogi`
138 is the ternary LUT version of crops, having an 8-bit LUT. However it
139 is an overwrite instruction in order to save on register file ports,
140 due to the mask requiring the contents of the BF to be both read and
141 written.
142
Programmer's note: This instruction is useful when combined with Matrix REMAP
in "Inner Product" Mode, computing the Warshall Transitive Closure, which has many
applications in Computer Science.
146
147 ## crbinlog
148
With ternary (LUT3) dynamic instructions being very costly,
and CR Fields being only 4 bits, a binary (LUT2) variant is the better option.
151
152 CRB-Form:
153
154 | 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
155 |----|----|----|-----|-----|-----|-----|-----|---|
156 | NN | BF | msk|BFA | msk | BFB | // | XO | //|
157
    for i in range(4):
        a, b = CRs[BF][i], CRs[BFA][i]
        if msk[i]: CRs[BF][i] = lut2(CRs[BFB], a, b)
161
162 When SVP64 Vectorized any of the 4 operands may be Scalar or
163 Vector, including `BFB` meaning that multiple different dynamic
164 lookups may be performed with a single instruction. Note that
165 this instruction is deliberately an overwrite in order to reduce
166 the number of register file ports required: like `crternlogi`
167 the contents of `BF` **must** be read due to the mask only
168 writing back to non-masked-out bits of `BF`.
169
170 *Programmer's note: just as with binlut and ternlogi, a pair
171 of crbinlog instructions followed by a merging crternlogi may
172 be deployed to synthesise dynamic ternary (LUT3) CR Field
173 manipulation*
174
175 # int ops
176
## min/max

required for the [[sv/av_opcodes]]

signed and unsigned min/max for integers. Providing both signed and unsigned
variants gives more flexibility, and \[un]signed min/max instructions are
specifically needed for vector reduce min/max operations, which are common.
186
187 X-Form
188
189 * PO=19, XO=----000011 `minmax RT, RA, RB, MMM`
190 * PO=19, XO=----000011 `minmax. RT, RA, RB, MMM`
191
192 see [[openpower/sv/rfc/ls013]] for `MMM` definition and pseudo-code.
193
194 implements all of (and more):
195
196 ```
197 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
198 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
199 }
200 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
201 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
202 }
203 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
204 { return rs1 < rs2 ? rs1 : rs2;
205 }
206 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
207 { return rs1 > rs2 ? rs1 : rs2;
208 }
209 ```
210
211 ## average
212
213 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
214 but not scalar
215
216 ```
uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 + rs2 + 1) >> 1;
}
220 ```
221
222 ## absdu
223
224 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
225 but not scalar
226
227 ```
uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
}
231 ```
232
233 ## abs-accumulate
234
235 required for the [[sv/av_opcodes]], these are needed for motion estimation.
236 both are overwrite on RS.
237
238 ```
uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
245 ```
246
For SVP64, the twin Element-widths allow e.g. a 16-bit accumulator for 8-bit
differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
SVP64 designation from RS-as-dest. This gives a limited range of
non-overwrite capability.
251
252 # shift-and-add <a name="shift-add"> </a>
253
Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Adding more LD/ST instructions is too complex, so the compromise is shift-and-add,
which replaces a pair of explicit instructions in hot loops.
257
258 ```
259 # 1.6.27 Z23-FORM
|0    |6    |11   |16   |21 |23   |31 |
| PO  | RT  | RA  | RB  |sm | XO  |Rc |
262 ```
263
264 Pseudo-code (shadd):
265
266 n <- (RB)
267 m <- sm + 1
268 RT <- (n[m:XLEN-1] || [0]*m) + (RA)
269
270 Pseudo-code (shaddw):
271
272 shift <- sm + 1 # Shift is between 1-4
273 n <- EXTS((RB)[XLEN/2:XLEN-1]) # Only use lower XLEN/2-bits of RB
274 RT <- (n << shift) + (RA) # Shift n, add RA
275
276 Pseudo-code (shadduw):
277
278 n <- ([0]*(XLEN/2)) || (RB)[XLEN/2:XLEN-1]
279 m <- sm + 1
280 RT <- (n[m:XLEN-1] || [0]*m) + (RA)
281
282 ```
283 uint_xlen_t shadd(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
284 sm = sm & 0x3;
285 return (RB << (sm+1)) + RA;
286 }
287
288 uint_xlen_t shaddw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
289 uint_xlen_t n = (int_xlen_t)(RB << XLEN / 2) >> XLEN / 2;
290 sm = sm & 0x3;
291 return (n << (sm+1)) + RA;
292 }
293
294 uint_xlen_t shadduw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
295 uint_xlen_t n = RB & 0xFFFFFFFF;
296 sm = sm & 0x3;
297 return (n << (sm+1)) + RA;
298 }
299 ```
300
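For illustration, a minimal Python sketch of the shadd semantics modelled
above (64-bit truncating arithmetic; names and the worked address values
are illustrative only):

```python
# sketch of the shadd semantics above, truncated to 64 bits
def shadd(RA, RB, sm):
    return ((RB << ((sm & 3) + 1)) + RA) & ((1 << 64) - 1)

base, index = 0x1000_0000, 5
# doubleword (8-byte) array indexing: sm=2 gives a shift of 3,
# so one shadd replaces an explicit shift plus an explicit add
assert shadd(base, index, 2) == base + index * 8
```
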
301 # bitmask set
302
based on RV bitmanip single-bit set, with an instruction format similar to the
shift instructions [[isa/fixedshift]]. bmext is actually covered already
(shift-with-mask rldicl, but only the immediate version). However bitmask-invert
is not, and set/clr are not covered, although they can use the same Shift ALU.

The bmext (RB) version is not the same as rldicl because bmext is a right
shift by RC, where rldicl is a left rotate. For the immediate version
this does not matter, so a bmexti is not required. For bmrev however there
is no direct equivalent and consequently a bmrevi is required.
312
313 bmset (register for mask amount) is particularly useful for creating
314 predicate masks where the length is a dynamic runtime quantity.
315 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
316 in a single instruction without needing to initialise or depend on any
317 other registers.
318
319 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
320 | -- | -- | --- | --- | --- | ------- |--| ----- |
321 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
322
323 Immediate-variant is an overwrite form:
324
325 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
326 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
327 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
328
329 ```
330 def MASK(x, y):
331 if x < y:
332 x = x+1
333 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
334 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
335 elif x == y:
336 return 1 << x
337 else:
338 x = x+1
339 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
340 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
341 return mask_a ^ mask_b
342
343
344 uint_xlen_t bmset(RS, RB, sh)
345 {
346 int shamt = RB & (XLEN - 1);
347 mask = (2<<sh)-1;
348 return RS | (mask << shamt);
349 }
350
351 uint_xlen_t bmclr(RS, RB, sh)
352 {
353 int shamt = RB & (XLEN - 1);
354 mask = (2<<sh)-1;
355 return RS & ~(mask << shamt);
356 }
357
358 uint_xlen_t bminv(RS, RB, sh)
359 {
360 int shamt = RB & (XLEN - 1);
361 mask = (2<<sh)-1;
362 return RS ^ (mask << shamt);
363 }
364
365 uint_xlen_t bmext(RS, RB, sh)
366 {
367 int shamt = RB & (XLEN - 1);
368 mask = (2<<sh)-1;
369 return mask & (RS >> shamt);
370 }
371 ```
372
Bitmask extract with reverse: can be done by bit-order-inverting all
of RB and extracting the bits from the opposite end.

When RA is zero, no shift occurs. This makes bmrevi useful for
simply reversing all bits of a register.
378
379 ```
380 msb = ra[5:0];
381 rev[0:msb] = rb[msb:0];
382 rt = ZE(rev[msb:0]);
383
384 uint_xlen_t bmrevi(RA, RB, sh)
385 {
386 int shamt = XLEN-1;
387 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
388 shamt = (XLEN-1)-shamt; # shift other end
389 brb = bitreverse(GPR(RB)) # swap LSB-MSB
390 mask = (2<<sh)-1;
391 return mask & (brb >> shamt);
392 }
393
394 uint_xlen_t bmrev(RA, RB, RC) {
395 return bmrevi(RA, RB, GPR(RC) & 0b111111);
396 }
397 ```
398
399 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
400 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
401 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
402
403 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
404 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
405 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
406
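As a cross-check of the RA=0 behaviour described above, a small Python sketch
(LSB0 bit-reverse within 64 bits; the function name is illustrative only):

```python
def bmrevi_ra0(RB, sh, xlen=64):
    # with RA=0 no shift occurs, so the result is the bit-reversed RB
    # masked down to the low sh+1 bits
    brb = int(f"{RB:0{xlen}b}"[::-1], 2)
    return brb & ((2 << sh) - 1)

# sh=63 keeps all 64 bits: a plain full-register bit-reverse
assert bmrevi_ra0(0x0000000000000003, 63) == 0xC000000000000000
```
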
407 # grevlut <a name="grevlut"> </a>
408
Generalised reverse combined with a pair of LUT2s, allowing
a constant `0b0101...0101` when RA=0, plus an option to invert
(including when RA=0, giving a constant `0b1010...1010` as the
initial value), provides a wide range of instructions
and a means to set hundreds of regular 64-bit patterns with a
single 32-bit instruction.
415
416 the two LUT2s are applied left-half (when not swapping)
417 and right-half (when swapping) so as to allow a wider
418 range of options.
419
420 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
421
422 * A value of `0b11001010` for the immediate provides
423 the functionality of a standard "grev".
424 * `0b11101110` provides gorc
425
426 grevlut should be arranged so as to produce the constants
427 needed to put into bext (bitextract) so as in turn to
428 be able to emulate x86 pmovmask instructions
429 <https://www.felixcloutier.com/x86/pmovmskb>.
430 This only requires 2 instructions (grevlut, bext).
431
432 Note that if the mask is required to be placed
433 directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
435 may be used instead, bearing in mind that sv.ori
436 is a 64-bit instruction, and `VL` must have been
437 set to the required length:
438
439 sv.ori./elwid=8 r10.v, r10.v, 0
440
441 The following settings provide the required mask constants:
442
443 | RA=0 | RB | imm | iv | result |
444 | ------- | ------- | ---------- | -- | ---------- |
445 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
446 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
447 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
448 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
449 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
450 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
451
452 Better diagram showing the correct ordering of shamt (RB). A LUT2
453 is applied to all locations marked in red using the first 4
454 bits of the immediate, and a separate LUT2 applied to all
455 locations in green using the upper 4 bits of the immediate.
456
457 <img src="/openpower/sv/grevlut.png" width=700 />
458
459 demo code [[openpower/sv/grevlut.py]]
460
461 ```
462 def lut2(imm, a, b):
463 idx = b << 1 | a
464 return (imm>>idx) & 1
465
466 def dorow(imm8, step_i, chunk_size):
467 step_o = 0
468 for j in range(64):
469 if (j&chunk_size) == 0:
470 imm = (imm8 & 0b1111)
471 else:
472 imm = (imm8>>4)
473 a = (step_i>>j)&1
474 b = (step_i>>(j ^ chunk_size))&1
475 res = lut2(imm, a, b)
476 #print(j, bin(imm), a, b, res)
477 step_o |= (res<<j)
478 #print (" ", chunk_size, bin(step_o))
479 return step_o
480
481 def grevlut64(RA, RB, imm, iv):
482 x = 0
483 if RA is None: # RA=0
484 x = 0x5555555555555555
485 else:
486 x = RA
487 if (iv): x = ~x;
488 shamt = RB & 63;
489 for i in range(6):
490 step = 1<<i
491 if (shamt & step):
492 x = dorow(imm, x, step)
493 return x & ((1<<64)-1)
494 ```
495
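As a quick check, the first row of the mask-constants table above can be
reproduced with the demo function defined above:

```python
# uses grevlut64() from the demo code above: RA=0 seed (0x5555...),
# RB=0b10, imm=0b01101100, iv=0 produces the 0x1111... pattern from the table
assert grevlut64(None, 0b10, 0b01101100, 0) == 0x1111111111111111
```
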
496 A variant may specify different LUT-pairs per row,
497 using one byte of RB for each. If it is desired that
498 a particular row-crossover shall not be applied it is
499 a simple matter to set the appropriate LUT-pair in RB
500 to effect an identity transform for that row (`0b11001010`).
501
502 ```
503 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
504 {
505 uint64_t x = 0x5555_5555_5555_5555;
506 if (RA != 0) x = GPR(RA);
507 if (iv) x = ~x;
508 for i in 0 to (6-is32b)
509 step = 1<<i
510 imm = (RB>>(i*8))&0xff
511 x = dorow(imm, x, step, is32b)
512 return x;
513 }
514
515 ```
516
517 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
518 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
519 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
520 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
521
522 An equivalent to `grevlogw` may be synthesised by setting the
523 appropriate bits in RB to set the top half of RT to zero.
524 Thus an explicit grevlogw instruction is not necessary.
525
526 # xperm
527
528 based on RV bitmanip.
529
RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate-variant allows up to an 8-bit
pattern (repeated) to be targeted at different parts of RT.

xperm shares some similarity with one of the uses of bmator,
in that xperm indices are binary addressing where bmator
may be considered to be unary addressing.
537
538 ```
539 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
540 {
541 uint_xlen_t r = 0;
542 uint_xlen_t sz = 1LL << sz_log2;
543 uint_xlen_t mask = (1LL << sz) - 1;
544 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
545 for (int i = 0; i < XLEN; i += sz) {
546 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
547 if (pos < XLEN)
548 r |= ((RB >> pos) & mask) << i;
549 }
550 return r;
551 }
552 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
553 {
554 uint_xlen_t r = 0;
555 uint_xlen_t sz = 1LL << sz_log2;
556 uint_xlen_t mask = (1LL << sz) - 1;
557 for (int i = 0; i < XLEN; i += sz) {
558 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
559 if (pos < XLEN)
560 r |= ((RB >> pos) & mask) << i;
561 }
562 return r;
563 }
564 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
565 { return xperm(RA, RB, 2); }
566 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
567 { return xperm(RA, RB, 3); }
568 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
569 { return xperm(RA, RB, 4); }
570 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
571 { return xperm(RA, RB, 5); }
572 ```
573
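For illustration, a Python sketch of the same operation at byte granularity
(`sz_log2 = 3`, i.e. `xperm_b`), here used to reverse the byte order of RB;
the function name and register values are illustrative only:

```python
def xperm(RA, RB, sz_log2, xlen=64):
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, xlen, sz):
        pos = ((RA >> i) & mask) << sz_log2
        if pos < xlen:               # out-of-range indices select zero
            r |= ((RB >> pos) & mask) << i
    return r

# RA holds the byte indices 7,6,5,...,0 so the result is RB with its bytes reversed
assert xperm(0x0001020304050607, 0x1122334455667788, 3) == 0x8877665544332211
```
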
574 # bitmatrix
575
bmatflip and bmatxor are found in the Cray XMT, and in x86 the equivalent is known
as GF2P8AFFINEQB. Uses:
578
579 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
580 * SM4, Reed Solomon, RAID6
581 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
582 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
583 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
584
585 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
586 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
587 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
588
589
590 ```
591 uint64_t bmatflip(uint64_t RA)
592 {
593 uint64_t x = RA;
594 x = shfl64(x, 31);
595 x = shfl64(x, 31);
596 x = shfl64(x, 31);
597 return x;
598 }
599
600 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
601 // transpose of RA
602 uint64_t RAt = bmatflip(RA);
603 uint8_t u[8]; // rows of RS
604 uint8_t v[8]; // cols of RA
605 for (int i = 0; i < 8; i++) {
606 u[i] = RS >> (i*8);
607 v[i] = RAt >> (i*8);
608 }
609 uint64_t bit, x = 0;
610 for (int i = 0; i < 64; i++) {
611 bit = (imm >> (i%8)) & 1;
612 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
613 x |= bit << i;
614 }
615 return x;
616 }
617
618 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
619 return bmatxori(RA, RB, 0xff)
620 }
621
622 uint64_t bmator(uint64_t RA, uint64_t RB) {
623 // transpose of RB
624 uint64_t RBt = bmatflip(RB);
625 uint8_t u[8]; // rows of RA
626 uint8_t v[8]; // cols of RB
627 for (int i = 0; i < 8; i++) {
628 u[i] = RA >> (i*8);
629 v[i] = RBt >> (i*8);
630 }
631 uint64_t x = 0;
632 for (int i = 0; i < 64; i++) {
633 if ((u[i / 8] & v[i % 8]) != 0)
634 x |= 1LL << i;
635 }
636 return x;
637 }
638
639 uint64_t bmatand(uint64_t RA, uint64_t RB) {
640 // transpose of RB
641 uint64_t RBt = bmatflip(RB);
642 uint8_t u[8]; // rows of RA
643 uint8_t v[8]; // cols of RB
644 for (int i = 0; i < 8; i++) {
645 u[i] = RA >> (i*8);
646 v[i] = RBt >> (i*8);
647 }
648 uint64_t x = 0;
649 for (int i = 0; i < 64; i++) {
650 if ((u[i / 8] & v[i % 8]) == 0xff)
651 x |= 1LL << i;
652 }
653 return x;
654 }
655 ```
656
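Viewed as an 8x8 bit-matrix transpose, bmatflip can be modelled as below.
This is a sketch only: the row/column bit-numbering convention here is
illustrative, the shfl64-based definition above is authoritative.

```python
def bmatflip(x):
    r = 0
    for row in range(8):
        for col in range(8):
            bit = (x >> (row * 8 + col)) & 1
            r |= bit << (col * 8 + row)   # swap row and column
    return r

# a transpose is an involution: applying it twice returns the original matrix
assert bmatflip(bmatflip(0x0123456789ABCDEF)) == 0x0123456789ABCDEF
```
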
657 # Introduction to Carry-less and GF arithmetic
658
659 * obligatory xkcd <https://xkcd.com/2595/>
660
661 There are three completely separate types of Galois-Field-based arithmetic
662 that we implement which are not well explained even in introductory
663 literature. A slightly oversimplified explanation is followed by more
664 accurate descriptions:
665
* `GF(2)` carry-less binary arithmetic. This is not actually a Galois Field,
  but is accidentally referred to as GF(2) - see below as to why.
* `GF(p)` modulo arithmetic with a Prime number. These are "proper"
  Galois Fields.
* `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
  (2^N) and a second "reducing" polynomial (similar to a prime number). These
  are said to be GF(2^N) arithmetic.
673
674 further detailed and more precise explanations are provided below
675
676 * **Polynomials with coefficients in `GF(2)`**
677 (aka. Carry-less arithmetic -- the `cl*` instructions).
678 This isn't actually a Galois Field, but its coefficients are. This is
679 basically binary integer addition, subtraction, and multiplication like
680 usual, except that carries aren't propagated at all, effectively turning
681 both addition and subtraction into the bitwise xor operation. Division and
remainder are defined to match how addition and multiplication work.
683 * **Galois Fields with a prime size**
684 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
685 This is basically just the integers mod `p`.
686 * **Galois Fields with a power-of-a-prime size**
687 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
688 integer `n > 0`).
689 We only implement these for `p == 2`, called Binary Galois Fields
690 (`GF(2^n)` -- the `gfb*` instructions).
691 For any prime `p`, `GF(p^n)` is implemented as polynomials with
692 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
693 remainders of dividing by a specificly chosen polynomial in `GF(p)` called
694 the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
695 Polynomial must be an irreducable polynomial (like primes, but for
696 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
697 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
698 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
699 polynomials used to implement `GF(p^n)`.
700
701 Many implementations and much of the literature do not make a clear
702 distinction between these three categories, which makes it confusing
to understand what their purpose and value are.
704
705 * carry-less multiply is extremely common and is used for the ubiquitous
706 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
707 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
708 has significant uses throughout cryptography
709 * GF(p) is the basis again of a significant quantity of algorithms
710 (TODO, list them, jacob knows what they are), even though the
711 modulo is limited to be below 64-bit (size of a scalar int)
712
713 # Instructions for Carry-less Operations
714
715 aka. Polynomials with coefficients in `GF(2)`
716
717 Carry-less addition/subtraction is simply XOR, so a `cladd`
718 instruction is not provided since the `xor[i]` instruction can be used instead.
719
720 These are operations on polynomials with coefficients in `GF(2)`, with the
721 polynomial's coefficients packed into integers with the following algorithm:
722
723 ```python
724 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
725 ```
726
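For illustration only (the inlined `pack_poly.py` above is authoritative),
assuming the conventional packing where the coefficient of `x^i` lands in
bit `i` of the integer:

```python
def pack_poly(coeffs):
    # coeffs[i] is the GF(2) coefficient of x^i
    v = 0
    for i, c in enumerate(coeffs):
        v |= (c & 1) << i
    return v

# x^3 + x + 1 packs to 0b1011
assert pack_poly([1, 1, 0, 1]) == 0b1011
```
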
727 ## Carry-less Multiply Instructions
728
729 based on RV bitmanip
730 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
731 <https://www.felixcloutier.com/x86/pclmulqdq> and
732 <https://en.m.wikipedia.org/wiki/Carry-less_product>
733
734 They are worth adding as their own non-overwrite operations
735 (in the same pipeline).
736
737 ### `clmul` Carry-less Multiply
738
739 ```python
740 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
741 ```
742
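A standalone illustrative sketch of carry-less multiplication (the inlined
`clmul.py` above is the authoritative pseudo-code):

```python
def clmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a       # XOR replaces addition: no carries propagate
        a <<= 1
        b >>= 1
    return r

# worked example: (x^2 + 1) * (x + 1) = x^3 + x^2 + x + 1
assert clmul(0b101, 0b11) == 0b1111
```
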
743 ### `clmulh` Carry-less Multiply High
744
745 ```python
746 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
747 ```
748
749 ### `clmulr` Carry-less Multiply (Reversed)
750
751 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
752 bit-reversed inputs.
753
754 ```python
755 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
756 ```
757
758 ## `clmadd` Carry-less Multiply-Add
759
760 ```
761 clmadd RT, RA, RB, RC
762 ```
763
764 ```
765 (RT) = clmul((RA), (RB)) ^ (RC)
766 ```
767
768 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
769
770 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
771 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
772 to avoid the need for a temp register. RS is written to as well as RT.
773
Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating a
multiplicative inverse, which may not exist in every Ring, the
Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)
780
781 ```
782 cltmadd RT, RA, RB, RC
783 ```
784
785 TODO: add link to explanation for where `RS` comes from.
786
787 ```
788 a = (RA)
789 c = (RC)
790 # read all inputs before writing to any outputs in case
791 # an input overlaps with an output register.
792 (RT) = clmul(a, (RB)) ^ c
793 (RS) = a ^ c
794 ```
795
796 ## `cldivrem` Carry-less Division and Remainder
797
798 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
799 for other instructions.
800
801 ```python
802 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
803 ```
804
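A standalone illustrative sketch (the inlined `cldivrem.py` above is
authoritative): long division where "subtracting" the shifted divisor is an XOR:

```python
def cldivrem(n, d, width=64):
    assert d != 0
    q, r = 0, n
    for sh in range(width - d.bit_length(), -1, -1):
        if (r >> (sh + d.bit_length() - 1)) & 1:
            r ^= d << sh      # carry-less "subtract" of the shifted divisor
            q |= 1 << sh
    return q, r

# (x^3 + x^2 + x + 1) / (x + 1) = x^2 + 1, remainder 0
assert cldivrem(0b1111, 0b11) == (0b101, 0)
```
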
805 ## `cldiv` Carry-less Division
806
807 ```
808 cldiv RT, RA, RB
809 ```
810
811 ```
812 n = (RA)
813 d = (RB)
814 q, r = cldivrem(n, d, width=XLEN)
815 (RT) = q
816 ```
817
818 ## `clrem` Carry-less Remainder
819
820 ```
821 clrem RT, RA, RB
822 ```
823
824 ```
825 n = (RA)
826 d = (RB)
827 q, r = cldivrem(n, d, width=XLEN)
828 (RT) = r
829 ```
830
831 # Instructions for Binary Galois Fields `GF(2^m)`
832
833 see:
834
835 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
836 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
837 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
838
839 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
840 instruction is not provided since the `xor[i]` instruction can be used instead.
841
842 ## `GFBREDPOLY` SPR -- Reducing Polynomial
843
844 In order to save registers and to make operations orthogonal with standard
845 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
846 This also allows hardware to pre-compute useful parameters (such as the
847 degree, or look-up tables) based on the reducing polynomial, and store them
848 alongside the SPR in hidden registers, only recomputing them whenever the SPR
849 is written to, rather than having to recompute those values for every
850 instruction.
851
Because Galois Fields require the reducing polynomial to be an irreducible
polynomial, any irreducible polynomial of `degree > 1` must have
the LSB set: otherwise it would be divisible by the polynomial `x`,
making it reducible, and whatever we are working on would no longer be a Field.
Therefore, the LSB can be reused to indicate `degree == XLEN`.
857
858 ```python
859 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
860 ```
861
862 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
863
864 unless this is an immediate op, `mtspr` is completely sufficient.
865
866 ```python
867 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
868 ```
869
870 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
871
872 ```
873 gfbmul RT, RA, RB
874 ```
875
876 ```python
877 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
878 ```
879
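As a worked example (illustrative only: the real instruction reduces by the
polynomial held in `GFBREDPOLY`, here the AES polynomial `0x11B` is assumed):

```python
def gf2p8_mul(a, b, red_poly=0x11B):
    # GF(2^8) multiply: carry-less multiply interleaved with reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree reached 8: reduce by the chosen polynomial
            a ^= red_poly
    return r

# well-known AES-field identity (FIPS-197): {53} * {CA} = {01}
assert gf2p8_mul(0x53, 0xCA) == 0x01
```
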
880 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
881
882 ```
883 gfbmadd RT, RA, RB, RC
884 ```
885
886 ```python
887 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
888 ```
889
890 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
891
892 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
893 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
894 for a temp register. RS is written to as well as RT.
895
896 ```
897 gfbtmadd RT, RA, RB, RC
898 ```
899
900 TODO: add link to explanation for where `RS` comes from.
901
902 ```
903 a = (RA)
904 c = (RC)
905 # read all inputs before writing to any outputs in case
906 # an input overlaps with an output register.
907 (RT) = gfbmadd(a, (RB), c)
908 # use gfbmadd again since it reduces the result
909 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
910 ```
911
912 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
913
914 ```
915 gfbinv RT, RA
916 ```
917
918 ```python
919 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
920 ```
921
922 # Instructions for Prime Galois Fields `GF(p)`
923
924 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
925
926 ## `gfpadd` Prime Galois Field `GF(p)` Addition
927
928 ```
929 gfpadd RT, RA, RB
930 ```
931
932 ```python
933 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
934 ```
935
936 the addition happens on infinite-precision integers
937
938 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
939
940 ```
941 gfpsub RT, RA, RB
942 ```
943
944 ```python
945 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
946 ```
947
948 the subtraction happens on infinite-precision integers
949
950 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
951
952 ```
953 gfpmul RT, RA, RB
954 ```
955
956 ```python
957 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
958 ```
959
960 the multiplication happens on infinite-precision integers
961
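A minimal model of the `gfp*` semantics (the inlined reference pseudo-code
above is authoritative; `GFPRIME` here is an arbitrary example value): the
intermediate result is computed at full precision and only then reduced
modulo the prime:

```python
GFPRIME = 65537    # example prime modulus held in the GFPRIME SPR

def gfpadd(a, b): return (a + b) % GFPRIME
def gfpsub(a, b): return (a - b) % GFPRIME
def gfpmul(a, b): return (a * b) % GFPRIME

# (65536 + 3) wraps to 2 mod 65537, then doubling gives 4
assert gfpmul(gfpadd(65536, 3), 2) == 4
```
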
962 ## `gfpinv` Prime Galois Field `GF(p)` Invert
963
964 ```
965 gfpinv RT, RA
966 ```
967
968 Some potential hardware implementations are found in:
969 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
970
971 ```python
972 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
973 ```
974
975 ## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add
976
977 ```
978 gfpmadd RT, RA, RB, RC
979 ```
980
981 ```python
982 [[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
983 ```
984
985 the multiplication and addition happens on infinite-precision integers
986
987 ## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract
988
989 ```
990 gfpmsub RT, RA, RB, RC
991 ```
992
993 ```python
994 [[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
995 ```
996
997 the multiplication and subtraction happens on infinite-precision integers
998
999 ## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed
1000
1001 ```
1002 gfpmsubr RT, RA, RB, RC
1003 ```
1004
1005 ```python
1006 [[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
1007 ```
1008
1009 the multiplication and subtraction happens on infinite-precision integers
1010
1011 ## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)
1012
1013 Used in combination with SV FFT REMAP to perform
1014 a full Number-Theoretic-Transform in-place. Possible by having 3-in 2-out,
1015 to avoid the need for a temp register. RS is written
1016 to as well as RT.
1017
1018 ```
1019 gfpmaddsubr RT, RA, RB, RC
1020 ```
1021
1022 TODO: add link to explanation for where `RS` comes from.
1023
1024 ```
1025 factor1 = (RA)
1026 factor2 = (RB)
1027 term = (RC)
1028 # read all inputs before writing to any outputs in case
1029 # an input overlaps with an output register.
1030 (RT) = gfpmadd(factor1, factor2, term)
1031 (RS) = gfpmsubr(factor1, factor2, term)
1032 ```
1033
1034 # Already in POWER ISA or subsumed
1035
Lists operations that are either covered by
other bitmanip operations, or are already in
the Power ISA.
1039
1040 ## cmix
1041
1042 based on RV bitmanip, covered by ternlog bitops
1043
1044 ```
1045 uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
1046 return (RA & RB) | (RC & ~RB);
1047 }
1048 ```
1049
1050 ## count leading/trailing zeros with mask
1051
1052 in v3.1 p105
1053
1054 ```
count ← 0
do i = 0 to 63
    if ((RB)_i = 1) then do
        if ((RS)_i = 1) then break
        count ← count + 1
    end
RA ← EXTZ64(count)
1059 ```
1060
1061 ## bit deposit
1062
pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106
1064
    k = 0
    m = 0
    do while (m < 64)
        if VSR[VRB+32].dword[i].bit[63-m]=1 then do
            result = VSR[VRA+32].dword[i].bit[63-k]
            VSR[VRT+32].dword[i].bit[63-m] = result
            k = k + 1
        m = m + 1
1071
1072 ```
1073
1074 uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
1075 {
1076 uint_xlen_t r = 0;
1077 for (int i = 0, j = 0; i < XLEN; i++)
1078 if ((RB >> i) & 1) {
1079 if ((RA >> j) & 1)
1080 r |= uint_xlen_t(1) << i;
1081 j++;
1082 }
1083 return r;
1084 }
1085
1086 ```
1087
1088 ## bit extract
1089
1090 other way round: identical to RV bext: pextd, found in v3.1 p196
1091
1092 ```
1093 uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
1094 {
1095 uint_xlen_t r = 0;
1096 for (int i = 0, j = 0; i < XLEN; i++)
1097 if ((RB >> i) & 1) {
1098 if ((RA >> i) & 1)
1099 r |= uint_xlen_t(1) << j;
1100 j++;
1101 }
1102 return r;
1103 }
1104 ```
1105
1106 ## centrifuge
1107
1108 found in v3.1 p106 so not to be added here
1109
1110 ```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if ((RB)_i = 0) then do
        result_(ptr0) = (RS)_i
        ptr0 = ptr0 + 1
    end
    if ((RB)_(63-i) = 1) then do
        result_(63-ptr1) = (RS)_(63-i)
        ptr1 = ptr1 + 1
    end
RA = result
1123 ```
1124
1125 ## bit to byte permute
1126
similar to the matrix permute in RV bitmanip, which has XOR and OR variants,
these perform a transpose (bmatflip).
TODO: this looks like VSX; is there a scalar variant
in v3.0/1 already?
1131
1132 do j = 0 to 7
1133 do k = 0 to 7
1134 b = VSR[VRB+32].dword[i].byte[k].bit[j]
1135 VSR[VRT+32].dword[i].byte[j].bit[k] = b
1136
1137 ## grev
1138
superseded by grevlut
1140
based on RV bitmanip, this is also known as a butterfly network. However,
where a butterfly network allows every crossbar to be set individually in
every row and every column, generalised-reverse (grev) only allows
a per-row decision: every entry in the same row must either switch or
not-switch.
1146
1147 <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
1148
1149 ```
1150 uint64_t grev64(uint64_t RA, uint64_t RB)
1151 {
1152 uint64_t x = RA;
1153 int shamt = RB & 63;
1154 if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
1155 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1156 if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
1157 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1158 if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1159 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1160 if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
1161 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1162 if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
1163 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1164 if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
1165 ((x & 0xFFFFFFFF00000000LL) >> 32);
1166 return x;
1167 }
1168
1169 ```
1170
1171 ## gorc
1172
based on RV bitmanip, gorc is superseded by grevlut
1174
1175 ```
1176 uint32_t gorc32(uint32_t RA, uint32_t RB)
1177 {
1178 uint32_t x = RA;
1179 int shamt = RB & 31;
1180 if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
1181 if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
1182 if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
1183 if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
1184 if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
1185 return x;
1186 }
1187 uint64_t gorc64(uint64_t RA, uint64_t RB)
1188 {
1189 uint64_t x = RA;
1190 int shamt = RB & 63;
1191 if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
1192 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1193 if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
1194 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1195 if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1196 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1197 if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
1198 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1199 if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
1200 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1201 if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
1202 ((x & 0xFFFFFFFF00000000LL) >> 32);
1203 return x;
1204 }
1205
1206 ```
1207
1208
1209 # Appendix
1210
1211 see [[bitmanip/appendix]]
1212