1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10 * binutils <https://bugs.libre-soc.org/show_bug.cgi?id=836>
11 * shift-and-add <https://bugs.libre-soc.org/show_bug.cgi?id=968>
12
13 # bitmanipulation
14
15 **DRAFT STATUS**
16
17 pseudocode: [[openpower/isa/bitmanip]]
18
19 this extension amalgamates bitmanipulation primitives from many sources,
20 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
21 Also included are DSP/Multimedia operations suitable for Audio/Video.
22 Vectorisation and SIMD are removed: these are straight scalar (element)
23 operations making them suitable for embedded applications. Vectorisation
24 Context is provided by [[openpower/sv]].
25
26 When combined with SV, scalar variants of bitmanip operations found in
27 VSX are added so that the Packed SIMD aspects of VSX may be retired as
28 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
29 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
30 or embedded scenarios.
31
32 ternlogv is experimental and is the only operation that may be considered
33 a "Packed SIMD". It is added as a variant of the already well-justified
34 ternlog operation (done in AVX512 as an immediate only) "because it
35 looks fun". As it is based on the LUT4 concept it will allow accelerated
36 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
37 achieve similar objectives.
38
39 general-purpose Galois Field 2^M operations are added so as to avoid
40 huge custom opcode proliferation across many areas of Computer Science.
41 however for convenience and also to avoid setup costs, some of the more
42 common operations (clmul, crc32) are also added. The expectation is
43 that these operations would all be covered by the same pipeline.
44
45 note that there are brownfield spaces below that could incorporate
46 some of the set-before-first and other scalar operations listed in
47 [[sv/mv.swizzle]],
48 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
49 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
50
51 Useful resource:
52
53 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
54 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
55 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
56 * <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
57
58 [[!inline pages="openpower/sv/draft_opcode_tables" quick="yes" raw="yes" ]]
59
60 # binary and ternary bitops
61
62 Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
63 bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
65 another register (4-bit for the binary instructions). The binary lookup
66 instructions have CR Field lookup variants due to CR Fields being 4 bit.
67
68 Like the x86 AVX512F
69 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
70 instructions.
71
72 ## ternlogi
73
74 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
75 | -- | -- | --- | --- | ----- | --- |--|
76 | NN | RT | RA | RB | im0-7 | 00 |Rc|
77
78 lut3(imm, a, b, c):
79 idx = c << 2 | b << 1 | a
80 return imm[idx] # idx by LSB0 order
81
82 for i in range(64):
83 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
84
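To illustrate, below is a minimal, non-normative Python sketch of the
pseudo-code above (LSB0 bit ordering assumed). The immediate `0b11101000`
implements a per-bit majority vote of the three operands:

```python
# non-normative sketch of ternlogi semantics, LSB0 bit ordering
def lut3(imm, a, b, c):
    idx = (c << 2) | (b << 1) | a
    return (imm >> idx) & 1

def ternlogi(RT, RA, RB, imm, xlen=64):
    result = 0
    for i in range(xlen):
        a, b, c = (RB >> i) & 1, (RA >> i) & 1, (RT >> i) & 1
        result |= lut3(imm, a, b, c) << i
    return result

# imm=0b11101000 is a per-bit majority vote of RT, RA and RB
assert ternlogi(0xFF00, 0x0F0F, 0xFFFF, 0b11101000) == 0xFF0F
```
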
85 ## binlut
86
87 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
88 lookup table is 4 bits wide not 8 bits, and secondly the lookup
89 table comes from a register not an immediate.
90
91 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
92 | -- | -- | --- | --- | ----- |--------|---------|
93 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
94 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
95
96 For binlut, the 4-bit LUT may be selected from either the high nibble
97 or the low nibble of the first byte of RC:
98
99 lut2(imm, a, b):
100 idx = b << 1 | a
101 return imm[idx] # idx by LSB0 order
102
103 imm = (RC>>(nh*4))&0b1111
104 for i in range(64):
105 RT[i] = lut2(imm, RB[i], RA[i])
106
107 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
108
109 for i in range(64):
110 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
111
112 When Vectorised with SVP64, as usual both source and destination may be
113 Vector or Scalar.
114
115 *Programmer's note: a dynamic ternary lookup may be synthesised from
116 a pair of `binlut` instructions followed by a `ternlogi` to select which
117 to merge. Use `nh` to select which nibble to use as the lookup table
118 from the RC source register (`nh=1` nibble high), i.e. keeping
119 an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
120 the second nh=1.*
121
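The identity behind this synthesis can be checked with a small, non-normative
Python sketch: a LUT3 lookup is exactly a LUT2 lookup into the low or high
nibble of the 8-bit table, selected by the third input:

```python
# check: lut3(imm8, a, b, c) equals lut2 on the low nibble when c=0
# and lut2 on the high nibble when c=1 (LSB0 ordering)
def lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1

def lut3(imm8, a, b, c):
    return (imm8 >> ((c << 2) | (b << 1) | a)) & 1

for imm8 in range(256):
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                nibble = (imm8 >> 4) if c else (imm8 & 0xF)
                assert lut3(imm8, a, b, c) == lut2(nibble, a, b)
```
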
122 ## crternlogi
123
Another mode selection would be CR Fields rather than Integers.
125
126 CRB-Form:
127
128 | 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
129 |----|----|----|-----|-----|-----|-----|-----|---|
130 | NN | BF | msk|BFA | msk | BFB | TLI | XO |TLI|
131
    for i in range(4):
        a, b, c = CRs[BF][i], CRs[BFA][i], CRs[BFB][i]
        if msk[i]: CRs[BF][i] = lut3(imm, a, b, c)
135
This instruction is remarkably similar to the existing CR operations
(`crand` etc.), which have been noted to be a 4-bit (binary) LUT. In effect
`crternlogi` is the ternary LUT version of those CR operations, having an
8-bit LUT. However it is an overwrite instruction in order to save on
register file ports, because the mask requires the contents of BF to be
both read and written.
142
Programmer's note: This instruction is useful when combined with Matrix REMAP
in "Inner Product" Mode, to create the Warshall Transitive Closure, which has
many applications in Computer Science (see the sketch below).
146
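To illustrate that use-case, here is a non-normative Python sketch of the
algorithm itself (not of the SVP64 code): the inner step of Warshall/Floyd
transitive closure is a per-bit AND-OR-accumulate over rows of an adjacency
matrix, which is a single LUT3 per bit:

```python
# Warshall transitive closure on an adjacency matrix stored as n-bit rows.
# Bit j of row i means "there is an edge i -> j".
def transitive_closure(adj):
    n = len(adj)
    closure = list(adj)
    for k in range(n):
        for i in range(n):
            if (closure[i] >> k) & 1:        # if i reaches k
                closure[i] |= closure[k]     # then i reaches all that k reaches
    return closure

# 0 -> 1 -> 2 becomes 0 -> {1,2}, 1 -> {2}
assert transitive_closure([0b010, 0b100, 0b000]) == [0b110, 0b100, 0b000]
```
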
147 ## crbinlog
148
With dynamic ternary (LUT3) instructions being very costly,
and CR Fields being only 4 bits, a binary (LUT2) variant is the better option.
151
152 CRB-Form:
153
154 | 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
155 |----|----|----|-----|-----|-----|-----|-----|---|
156 | NN | BF | msk|BFA | msk | BFB | // | XO | //|
157
    for i in range(4):
        a, b = CRs[BF][i], CRs[BFA][i]
        if msk[i]: CRs[BF][i] = lut2(CRs[BFB], a, b)
161
162 When SVP64 Vectorised any of the 4 operands may be Scalar or
163 Vector, including `BFB` meaning that multiple different dynamic
164 lookups may be performed with a single instruction. Note that
165 this instruction is deliberately an overwrite in order to reduce
166 the number of register file ports required: like `crternlogi`
167 the contents of `BF` **must** be read due to the mask only
168 writing back to non-masked-out bits of `BF`.
169
170 *Programmer's note: just as with binlut and ternlogi, a pair
171 of crbinlog instructions followed by a merging crternlogi may
172 be deployed to synthesise dynamic ternary (LUT3) CR Field
173 manipulation*
174
175 # int ops
176
## min/max
178
179 required for the [[sv/av_opcodes]]
180
Signed and unsigned min/max for integers. These are sort-of partly
synthesiseable in [[sv/svp64]] with pred-result, as long as the destination
register is one of the sources, but not for both signed and unsigned.
When the destination is also one of the sources and the mv fails due to
the CR bit-test failing, this will only overwrite the destination where
the source is greater (or less).

Dedicated signed/unsigned min/max instructions give more flexibility.
188
189 \[un]signed min/max instructions are specifically needed for vector reduce min/max operations which are pretty common.
190
191 X-Form
192
193 * XO=0001001110, itype=0b00 min, unsigned
194 * XO=0101001110, itype=0b01 min, signed
195 * XO=0011001110, itype=0b10 max, unsigned
196 * XO=0111001110, itype=0b11 max, signed
197
198
199 ```
200 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
201 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
202 }
203 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
204 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
205 }
206 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
207 { return rs1 < rs2 ? rs1 : rs2;
208 }
209 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
210 { return rs1 > rs2 ? rs1 : rs2;
211 }
212 ```
213
214 ## average
215
216 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
217 but not scalar
218
219 ```
220 uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 + rs2 + 1) >> 1;
222 }
223 ```
224
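In hardware the intermediate sum is one bit wider than XLEN so it cannot
overflow; a software model can avoid the wider intermediate with the standard
identity shown in this non-normative Python sketch:

```python
# rounding-up average without a wider intermediate:
#   (a + b + 1) >> 1  ==  (a | b) - ((a ^ b) >> 1)
def intavg(a, b, xlen=64):
    mask = (1 << xlen) - 1
    return ((a | b) - ((a ^ b) >> 1)) & mask

for a, b in [(0, 0), (1, 2), (3, 3), (2**64 - 1, 2**64 - 1)]:
    assert intavg(a, b) == ((a + b + 1) >> 1) & ((1 << 64) - 1)
```
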
225 ## absdu
226
227 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
228 but not scalar
229
230 ```
231 uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 > rs2) ? (rs1-rs2) : (rs2-rs1);
233 }
234 ```
235
236 ## abs-accumulate
237
238 required for the [[sv/av_opcodes]], these are needed for motion estimation.
239 both are overwrite on RS.
240
241 ```
uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
    return rs + ((ra > rb) ? (ra-rb) : (rb-ra));
}
uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
    return rs + ((ra > rb) ? (ra-rb) : (rb-ra));
}
248 ```
249
250 For SVP64, the twin Elwidths allows e.g. a 16 bit accumulator for 8 bit
251 differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
252 SVP64 designation from RS-as-dest. This gives a limited range of
253 non-overwrite capability.
254
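For example (a non-normative sketch), the classic Sum-of-Absolute-Differences
kernel of motion estimation is exactly an absdu-accumulate loop; with the
twin elwidths the accumulator can be 16-bit while the pixel differences are
8-bit:

```python
# Sum of Absolute Differences over two blocks of 8-bit pixels: this is the
# scalar loop that a Vectorised absdu-accumulate collapses into.
def sad(block_a, block_b):
    acc = 0                      # would live in RS, e.g. a 16-bit accumulator
    for a, b in zip(block_a, block_b):
        acc += a - b if a > b else b - a
    return acc

assert sad([10, 200, 30], [12, 190, 30]) == 12
```
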
255 # shift-and-add <a name="shift-add"> </a>
256
Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Adding more LD/ST forms would be too complex, so as a compromise shift-and-add
is added instead: it replaces a pair of explicit instructions in hot loops.
260
261 ```
262 # 1.6.27 Z23-FORM
|0    |6    |11   |16   |21  |23   |31 |
| PO  | RT  | RA  | RB  | sm | XO  |Rc |
265 ```
266
267 Pseudo-code (shadd):
268
269 n <- (RB)
270 m <- sm + 1
271 RT <- (n[m:XLEN-1] || [0]*m) + (RA)
272
273 Pseudo-code (shaddw):
274
275 shift <- sm + 1 # Shift is between 1-4
276 n <- EXTS((RB)[XLEN/2:XLEN-1]) # Only use lower XLEN/2-bits of RB
277 RT <- (n << shift) + (RA) # Shift n, add RA
278
279 Pseudo-code (shadduw):
280
281 n <- ([0]*(XLEN/2)) || (RB)[XLEN/2:XLEN-1]
282 m <- sm + 1
283 RT <- (n[m:XLEN-1] || [0]*m) + (RA)
284
285 ```
286 uint_xlen_t shadd(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
287 sm = sm & 0x3;
288 return (RB << (sm+1)) + RA;
289 }
290
291 uint_xlen_t shaddw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
292 uint_xlen_t n = (int_xlen_t)(RB << XLEN / 2) >> XLEN / 2;
293 sm = sm & 0x3;
294 return (n << (sm+1)) + RA;
295 }
296
297 uint_xlen_t shadduw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
298 uint_xlen_t n = RB & 0xFFFFFFFF;
299 sm = sm & 0x3;
300 return (n << (sm+1)) + RA;
301 }
302 ```
303
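A typical use (a non-normative sketch) is scaled array indexing, where a
single `shadd` collapses the usual shift-then-add pair, e.g. computing the
byte address of `table[idx]` for 8-byte elements:

```python
# shadd with sm=2 gives a shift of 3, i.e. an index scaled by 8-byte elements
def shadd(RA, RB, sm, xlen=64):
    mask = (1 << xlen) - 1
    return ((RB << ((sm & 3) + 1)) + RA) & mask

base, idx = 0x10000, 5
assert shadd(base, idx, 2) == base + idx * 8   # address of table[5]
```
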
304 # bitmask set
305
Based on RV bitmanip single-bit set, with an instruction format similar to
shift [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
`rldicl`, but only the immediate version). However bitmask-invert is not,
and set/clr are not covered either, although they can use the same Shift ALU.

The bmext (RB) version is not the same as `rldicl` because bmext is a right
shift by RC, where `rldicl` is a left rotate. For the immediate version
this does not matter, so a bmexti is not required. For bmrev however there
is no direct equivalent, and consequently a bmrevi is required.
315
316 bmset (register for mask amount) is particularly useful for creating
317 predicate masks where the length is a dynamic runtime quantity.
318 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
319 in a single instruction without needing to initialise or depend on any
320 other registers.
321
322 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
323 | -- | -- | --- | --- | --- | ------- |--| ----- |
324 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
325
326 Immediate-variant is an overwrite form:
327
328 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
329 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
330 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
331
332 ```
333 def MASK(x, y):
334 if x < y:
335 x = x+1
336 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
337 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
338 elif x == y:
339 return 1 << x
340 else:
341 x = x+1
342 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
343 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
344 return mask_a ^ mask_b
345
346
347 uint_xlen_t bmset(RS, RB, sh)
348 {
349 int shamt = RB & (XLEN - 1);
350 mask = (2<<sh)-1;
351 return RS | (mask << shamt);
352 }
353
354 uint_xlen_t bmclr(RS, RB, sh)
355 {
356 int shamt = RB & (XLEN - 1);
357 mask = (2<<sh)-1;
358 return RS & ~(mask << shamt);
359 }
360
361 uint_xlen_t bminv(RS, RB, sh)
362 {
363 int shamt = RB & (XLEN - 1);
364 mask = (2<<sh)-1;
365 return RS ^ (mask << shamt);
366 }
367
368 uint_xlen_t bmext(RS, RB, sh)
369 {
370 int shamt = RB & (XLEN - 1);
371 mask = (2<<sh)-1;
372 return mask & (RS >> shamt);
373 }
374 ```
375
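A non-normative sketch of the predicate-mask use mentioned above: with RS=0
and RB=0 (no shift), bmset produces a right-aligned run of `sh+1` ones, a
ready-made predicate mask of runtime-determined length:

```python
# per the bmset pseudo-code above: RS=0, RB=0 gives a right-aligned
# run of (sh+1) ones; a non-zero RB shifts the run up
def bmset(RS, RB, sh, xlen=64):
    shamt = RB & (xlen - 1)
    mask = (2 << sh) - 1
    return (RS | (mask << shamt)) & ((1 << xlen) - 1)

assert bmset(0, 0, 7) == 0xFF          # 8 ones
assert bmset(0, 8, 3) == 0x0F00        # 4 ones, shifted up by 8
```
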
Bitmask extract with reverse: this can be done by bit-order-inverting all
of RB and getting the bits of RB from the opposite end.

When RA is zero, no shift occurs. This makes bmrevi useful for
simply reversing all bits of a register.
381
382 ```
383 msb = ra[5:0];
384 rev[0:msb] = rb[msb:0];
385 rt = ZE(rev[msb:0]);
386
387 uint_xlen_t bmrevi(RA, RB, sh)
388 {
389 int shamt = XLEN-1;
390 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
391 shamt = (XLEN-1)-shamt; # shift other end
392 brb = bitreverse(GPR(RB)) # swap LSB-MSB
393 mask = (2<<sh)-1;
394 return mask & (brb >> shamt);
395 }
396
397 uint_xlen_t bmrev(RA, RB, RC) {
398 return bmrevi(RA, RB, GPR(RC) & 0b111111);
399 }
400 ```
401
402 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
403 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
404 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
405
406 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
407 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
408 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
409
410 # grevlut <a name="grevlut"> </a>
411
Generalised reverse combined with a pair of LUT2s, allowing
a constant `0b0101...0101` when RA=0, plus an option to invert
(including when RA=0, giving a constant `0b1010...1010` as the
initial value), provides a wide range of instructions
and a means to set hundreds of regular 64-bit patterns with a
single 32-bit instruction.
418
The two LUT2s are applied to the left half (when not swapping)
and to the right half (when swapping) so as to allow a wider
range of options.
422
423 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
424
425 * A value of `0b11001010` for the immediate provides
426 the functionality of a standard "grev".
427 * `0b11101110` provides gorc
428
429 grevlut should be arranged so as to produce the constants
430 needed to put into bext (bitextract) so as in turn to
431 be able to emulate x86 pmovmask instructions
432 <https://www.felixcloutier.com/x86/pmovmskb>.
433 This only requires 2 instructions (grevlut, bext).
434
435 Note that if the mask is required to be placed
436 directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
438 may be used instead, bearing in mind that sv.ori
439 is a 64-bit instruction, and `VL` must have been
440 set to the required length:
441
442 sv.ori./elwid=8 r10.v, r10.v, 0
443
444 The following settings provide the required mask constants:
445
446 | RA=0 | RB | imm | iv | result |
447 | ------- | ------- | ---------- | -- | ---------- |
448 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
449 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
450 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
451 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
452 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
453 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
454
455 Better diagram showing the correct ordering of shamt (RB). A LUT2
456 is applied to all locations marked in red using the first 4
457 bits of the immediate, and a separate LUT2 applied to all
458 locations in green using the upper 4 bits of the immediate.
459
460 <img src="/openpower/sv/grevlut.png" width=700 />
461
462 demo code [[openpower/sv/grevlut.py]]
463
464 ```
465 lut2(imm, a, b):
466 idx = b << 1 | a
467 return imm[idx] # idx by LSB0 order
468
dorow(imm8, step_i, chunk_size, is32b):
    for j in 0 to 31 if is32b else 63:
        if (j&chunk_size) == 0
            imm = imm8[0..3]
        else
            imm = imm8[4..7]
        step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
    return step_o
477
478 uint64_t grevlut(uint64_t RA, uint64_t RB, uint8 imm, bool iv, bool is32b)
479 {
480 uint64_t x = 0x5555_5555_5555_5555;
481 if (RA != 0) x = GPR(RA);
482 if (iv) x = ~x;
483 int shamt = RB & 31 if is32b else 63
484 for i in 0 to (6-is32b)
485 step = 1<<i
486 if (shamt & step) x = dorow(imm, x, step, is32b)
487 return x;
488 }
489 ```
490
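Below is a minimal executable Python sketch of the pseudo-code above
(non-normative, LSB0 ordering assumed); it reproduces the first and fourth
rows of the mask-constant table:

```python
def lut2(imm, a, b):
    return (imm >> ((b << 1) | a)) & 1

def dorow(imm8, step_i, chunk_size, is32b):
    width = 32 if is32b else 64
    step_o = 0
    for j in range(width):
        imm = (imm8 & 0xF) if (j & chunk_size) == 0 else (imm8 >> 4)
        bit = lut2(imm, (step_i >> j) & 1, (step_i >> (j ^ chunk_size)) & 1)
        step_o |= bit << j
    return step_o

def grevlut(RA, RB, imm8, iv=False, is32b=False):
    x = 0x5555555555555555
    if RA != 0:
        x = RA
    if iv:
        x ^= 0xFFFFFFFFFFFFFFFF
    shamt = RB & (31 if is32b else 63)
    for i in range(5 if is32b else 6):
        step = 1 << i
        if shamt & step:
            x = dorow(imm8, x, step, is32b)
    return x

assert grevlut(0, 0b10, 0b01101100) == 0x1111111111111111
assert grevlut(0, 0b10, 0b11000110, iv=True) == 0x8888888888888888
```
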
491 A variant may specify different LUT-pairs per row,
492 using one byte of RB for each. If it is desired that
493 a particular row-crossover shall not be applied it is
494 a simple matter to set the appropriate LUT-pair in RB
495 to effect an identity transform for that row (`0b11001010`).
496
497 ```
498 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
499 {
500 uint64_t x = 0x5555_5555_5555_5555;
501 if (RA != 0) x = GPR(RA);
502 if (iv) x = ~x;
503 for i in 0 to (6-is32b)
504 step = 1<<i
505 imm = (RB>>(i*8))&0xff
506 x = dorow(imm, x, step, is32b)
507 return x;
508 }
509
510 ```
511
512 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
513 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
514 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
515 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
516
517 An equivalent to `grevlogw` may be synthesised by setting the
518 appropriate bits in RB to set the top half of RT to zero.
519 Thus an explicit grevlogw instruction is not necessary.
520
521 # xperm
522
523 based on RV bitmanip.
524
RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate variant allows up to an 8-bit
pattern (repeated) to be targeted at different parts of RT.

xperm shares some similarity with one of the uses of bmator,
in that xperm indices use binary addressing where bmator
may be considered to use unary addressing.
532
533 ```
534 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
535 {
536 uint_xlen_t r = 0;
537 uint_xlen_t sz = 1LL << sz_log2;
538 uint_xlen_t mask = (1LL << sz) - 1;
539 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
540 for (int i = 0; i < XLEN; i += sz) {
541 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
542 if (pos < XLEN)
543 r |= ((RB >> pos) & mask) << i;
544 }
545 return r;
546 }
547 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
548 {
549 uint_xlen_t r = 0;
550 uint_xlen_t sz = 1LL << sz_log2;
551 uint_xlen_t mask = (1LL << sz) - 1;
552 for (int i = 0; i < XLEN; i += sz) {
553 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
554 if (pos < XLEN)
555 r |= ((RB >> pos) & mask) << i;
556 }
557 return r;
558 }
559 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
560 { return xperm(RA, RB, 2); }
561 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
562 { return xperm(RA, RB, 3); }
563 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
564 { return xperm(RA, RB, 4); }
565 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
566 { return xperm(RA, RB, 5); }
567 ```
568
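As a worked example (non-normative Python sketch): with the byte-sized
variant, supplying descending byte indices in RA byte-reverses RB:

```python
def xperm(RA, RB, sz_log2, xlen=64):
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, xlen, sz):
        pos = ((RA >> i) & mask) << sz_log2
        if pos < xlen:
            r |= ((RB >> pos) & mask) << i
    return r

# descending byte indices 7,6,...,0 in RA produce a byte-reversed RB
assert xperm(0x0001020304050607, 0x1122334455667788, 3) == 0x8877665544332211
```
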
569 # bitmatrix
570
bmatflip and bmatxor are found in the Cray XMT; in x86 the equivalent
is known as GF2P8AFFINEQB. Uses:
573
574 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
575 * SM4, Reed Solomon, RAID6
576 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
577 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
578 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
579
580 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
581 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
582 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
583
584
585 ```
586 uint64_t bmatflip(uint64_t RA)
587 {
588 uint64_t x = RA;
589 x = shfl64(x, 31);
590 x = shfl64(x, 31);
591 x = shfl64(x, 31);
592 return x;
593 }
594
595 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
596 // transpose of RA
597 uint64_t RAt = bmatflip(RA);
598 uint8_t u[8]; // rows of RS
599 uint8_t v[8]; // cols of RA
600 for (int i = 0; i < 8; i++) {
601 u[i] = RS >> (i*8);
602 v[i] = RAt >> (i*8);
603 }
604 uint64_t bit, x = 0;
605 for (int i = 0; i < 64; i++) {
606 bit = (imm >> (i%8)) & 1;
607 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
608 x |= bit << i;
609 }
610 return x;
611 }
612
613 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
614 return bmatxori(RA, RB, 0xff)
615 }
616
617 uint64_t bmator(uint64_t RA, uint64_t RB) {
618 // transpose of RB
619 uint64_t RBt = bmatflip(RB);
620 uint8_t u[8]; // rows of RA
621 uint8_t v[8]; // cols of RB
622 for (int i = 0; i < 8; i++) {
623 u[i] = RA >> (i*8);
624 v[i] = RBt >> (i*8);
625 }
626 uint64_t x = 0;
627 for (int i = 0; i < 64; i++) {
628 if ((u[i / 8] & v[i % 8]) != 0)
629 x |= 1LL << i;
630 }
631 return x;
632 }
633
634 uint64_t bmatand(uint64_t RA, uint64_t RB) {
635 // transpose of RB
636 uint64_t RBt = bmatflip(RB);
637 uint8_t u[8]; // rows of RA
638 uint8_t v[8]; // cols of RB
639 for (int i = 0; i < 8; i++) {
640 u[i] = RA >> (i*8);
641 v[i] = RBt >> (i*8);
642 }
643 uint64_t x = 0;
644 for (int i = 0; i < 64; i++) {
645 if ((u[i / 8] & v[i % 8]) == 0xff)
646 x |= 1LL << i;
647 }
648 return x;
649 }
650 ```
651
652 # Introduction to Carry-less and GF arithmetic
653
654 * obligatory xkcd <https://xkcd.com/2595/>
655
656 There are three completely separate types of Galois-Field-based arithmetic
657 that we implement which are not well explained even in introductory
658 literature. A slightly oversimplified explanation is followed by more
659 accurate descriptions:
660
661 * `GF(2)` carry-less binary arithmetic. this is not actually a Galois Field,
662 but is accidentally referred to as GF(2) - see below as to why.
663 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
664 Galois Fields
665 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
666 (2^N) and a second "reducing" polynomial (similar to a prime number), these
667 are said to be GF(2^N) arithmetic.
668
669 further detailed and more precise explanations are provided below
670
671 * **Polynomials with coefficients in `GF(2)`**
672 (aka. Carry-less arithmetic -- the `cl*` instructions).
673 This isn't actually a Galois Field, but its coefficients are. This is
674 basically binary integer addition, subtraction, and multiplication like
675 usual, except that carries aren't propagated at all, effectively turning
676 both addition and subtraction into the bitwise xor operation. Division and
677 remainder are defined to match how addition and multiplication works.
678 * **Galois Fields with a prime size**
679 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
680 This is basically just the integers mod `p`.
681 * **Galois Fields with a power-of-a-prime size**
682 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
683 integer `n > 0`).
684 We only implement these for `p == 2`, called Binary Galois Fields
685 (`GF(2^n)` -- the `gfb*` instructions).
686 For any prime `p`, `GF(p^n)` is implemented as polynomials with
687 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
  remainders of dividing by a specifically chosen polynomial in `GF(p)` called
  the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
  Polynomial must be an irreducible polynomial (like primes, but for
691 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
692 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
693 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
694 polynomials used to implement `GF(p^n)`.
695
696 Many implementations and much of the literature do not make a clear
697 distinction between these three categories, which makes it confusing
698 to understand what their purpose and value is.
699
700 * carry-less multiply is extremely common and is used for the ubiquitous
701 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
702 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
703 has significant uses throughout cryptography
704 * GF(p) is the basis again of a significant quantity of algorithms
705 (TODO, list them, jacob knows what they are), even though the
706 modulo is limited to be below 64-bit (size of a scalar int)
707
708 # Instructions for Carry-less Operations
709
710 aka. Polynomials with coefficients in `GF(2)`
711
712 Carry-less addition/subtraction is simply XOR, so a `cladd`
713 instruction is not provided since the `xor[i]` instruction can be used instead.
714
715 These are operations on polynomials with coefficients in `GF(2)`, with the
716 polynomial's coefficients packed into integers with the following algorithm:
717
718 ```python
719 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
720 ```
721
722 ## Carry-less Multiply Instructions
723
724 based on RV bitmanip
725 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
726 <https://www.felixcloutier.com/x86/pclmulqdq> and
727 <https://en.m.wikipedia.org/wiki/Carry-less_product>
728
729 They are worth adding as their own non-overwrite operations
730 (in the same pipeline).
731
732 ### `clmul` Carry-less Multiply
733
734 ```python
735 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
736 ```
737
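As a quick behavioural illustration (a non-normative sketch, separate from
the reference implementation included above): carry-less multiplication XORs
together shifted copies of one operand, one copy per set bit of the other,
e.g. `(x^2+1)*(x^2+x) = x^4+x^3+x^2+x`:

```python
def clmul_ref(a, b):
    # XOR (instead of add) shifted copies of a, one per set bit of b
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

assert clmul_ref(0b101, 0b110) == 0b11110   # (x^2+1)*(x^2+x)
```
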
738 ### `clmulh` Carry-less Multiply High
739
740 ```python
741 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
742 ```
743
744 ### `clmulr` Carry-less Multiply (Reversed)
745
746 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
747 bit-reversed inputs.
748
749 ```python
750 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
751 ```
752
753 ## `clmadd` Carry-less Multiply-Add
754
755 ```
756 clmadd RT, RA, RB, RC
757 ```
758
759 ```
760 (RT) = clmul((RA), (RB)) ^ (RC)
761 ```
762
763 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
764
765 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
766 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
767 to avoid the need for a temp register. RS is written to as well as RT.
768
Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating a
multiplicative inverse, which may not exist in every Ring, the Inverse
Discrete Fourier Transform may not exist. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)
775
776 ```
777 cltmadd RT, RA, RB, RC
778 ```
779
780 TODO: add link to explanation for where `RS` comes from.
781
782 ```
783 a = (RA)
784 c = (RC)
785 # read all inputs before writing to any outputs in case
786 # an input overlaps with an output register.
787 (RT) = clmul(a, (RB)) ^ c
788 (RS) = a ^ c
789 ```
790
791 ## `cldivrem` Carry-less Division and Remainder
792
793 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
794 for other instructions.
795
796 ```python
797 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
798 ```
799
800 ## `cldiv` Carry-less Division
801
802 ```
803 cldiv RT, RA, RB
804 ```
805
806 ```
807 n = (RA)
808 d = (RB)
809 q, r = cldivrem(n, d, width=XLEN)
810 (RT) = q
811 ```
812
813 ## `clrem` Carry-less Remainder
814
815 ```
816 clrem RT, RA, RB
817 ```
818
819 ```
820 n = (RA)
821 d = (RB)
822 q, r = cldivrem(n, d, width=XLEN)
823 (RT) = r
824 ```
825
826 # Instructions for Binary Galois Fields `GF(2^m)`
827
828 see:
829
830 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
831 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
832 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
833
834 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
835 instruction is not provided since the `xor[i]` instruction can be used instead.
836
837 ## `GFBREDPOLY` SPR -- Reducing Polynomial
838
839 In order to save registers and to make operations orthogonal with standard
840 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
841 This also allows hardware to pre-compute useful parameters (such as the
842 degree, or look-up tables) based on the reducing polynomial, and store them
843 alongside the SPR in hidden registers, only recomputing them whenever the SPR
844 is written to, rather than having to recompute those values for every
845 instruction.
846
847 Because Galois Fields require the reducing polynomial to be an irreducible
848 polynomial, that guarantees that any polynomial of `degree > 1` must have
849 the LSB set, since otherwise it would be divisible by the polynomial `x`,
850 making it reducible, making whatever we're working on no longer a Field.
851 Therefore, we can reuse the LSB to indicate `degree == XLEN`.
852
853 ```python
854 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
855 ```
856
857 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
858
859 unless this is an immediate op, `mtspr` is completely sufficient.
860
861 ```python
862 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
863 ```
864
865 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
866
867 ```
868 gfbmul RT, RA, RB
869 ```
870
871 ```python
872 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
873 ```
874
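As a worked illustration (a non-normative sketch of the mathematics,
independent of the reference code above and of the `GFBREDPOLY` encoding
details): with the reducing polynomial set to the AES polynomial
`x^8+x^4+x^3+x+1` (0x11B), `GF(2^8)` multiplication gives, for example,
`0x53 * 0xCA = 0x01`, since those two elements are multiplicative inverses
in the AES field:

```python
def gfbmul_ref(a, b, red_poly=0x11B, degree=8):
    # carry-less multiply, then reduce modulo the reducing polynomial
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    for bit in range(prod.bit_length() - 1, degree - 1, -1):
        if (prod >> bit) & 1:
            prod ^= red_poly << (bit - degree)
    return prod

assert gfbmul_ref(0x53, 0xCA) == 0x01   # 0x53 and 0xCA are inverses in GF(2^8)
```
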
875 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
876
877 ```
878 gfbmadd RT, RA, RB, RC
879 ```
880
881 ```python
882 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
883 ```
884
885 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
886
887 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
888 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
889 for a temp register. RS is written to as well as RT.
890
891 ```
892 gfbtmadd RT, RA, RB, RC
893 ```
894
895 TODO: add link to explanation for where `RS` comes from.
896
897 ```
898 a = (RA)
899 c = (RC)
900 # read all inputs before writing to any outputs in case
901 # an input overlaps with an output register.
902 (RT) = gfbmadd(a, (RB), c)
903 # use gfbmadd again since it reduces the result
904 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
905 ```
906
907 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
908
909 ```
910 gfbinv RT, RA
911 ```
912
913 ```python
914 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
915 ```
916
917 # Instructions for Prime Galois Fields `GF(p)`
918
919 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
920
921 ## `gfpadd` Prime Galois Field `GF(p)` Addition
922
923 ```
924 gfpadd RT, RA, RB
925 ```
926
927 ```python
928 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
929 ```
930
931 the addition happens on infinite-precision integers
932
933 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
934
935 ```
936 gfpsub RT, RA, RB
937 ```
938
939 ```python
940 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
941 ```
942
943 the subtraction happens on infinite-precision integers
944
945 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
946
947 ```
948 gfpmul RT, RA, RB
949 ```
950
951 ```python
952 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
953 ```
954
955 the multiplication happens on infinite-precision integers
956
957 ## `gfpinv` Prime Galois Field `GF(p)` Invert
958
959 ```
960 gfpinv RT, RA
961 ```
962
963 Some potential hardware implementations are found in:
964 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
965
966 ```python
967 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
968 ```
969
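For reference, a non-normative software model (separate from the reference
code above): since the modulus is prime, the inverse may also be computed
via Fermat's little theorem as `a^(p-2) mod p`:

```python
def gfpinv_ref(a, p):
    # Fermat's little theorem: a**(p-2) mod p is the inverse of a for prime p
    assert a % p != 0
    return pow(a, p - 2, p)

p = 65537
assert (gfpinv_ref(12345, p) * 12345) % p == 1
```
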
970 ## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add
971
972 ```
973 gfpmadd RT, RA, RB, RC
974 ```
975
976 ```python
977 [[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
978 ```
979
980 the multiplication and addition happens on infinite-precision integers
981
982 ## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract
983
984 ```
985 gfpmsub RT, RA, RB, RC
986 ```
987
988 ```python
989 [[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
990 ```
991
992 the multiplication and subtraction happens on infinite-precision integers
993
994 ## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed
995
996 ```
997 gfpmsubr RT, RA, RB, RC
998 ```
999
1000 ```python
1001 [[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
1002 ```
1003
1004 the multiplication and subtraction happens on infinite-precision integers
1005
1006 ## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)
1007
1008 Used in combination with SV FFT REMAP to perform
1009 a full Number-Theoretic-Transform in-place. Possible by having 3-in 2-out,
1010 to avoid the need for a temp register. RS is written
1011 to as well as RT.
1012
1013 ```
1014 gfpmaddsubr RT, RA, RB, RC
1015 ```
1016
1017 TODO: add link to explanation for where `RS` comes from.
1018
1019 ```
1020 factor1 = (RA)
1021 factor2 = (RB)
1022 term = (RC)
1023 # read all inputs before writing to any outputs in case
1024 # an input overlaps with an output register.
1025 (RT) = gfpmadd(factor1, factor2, term)
1026 (RS) = gfpmsubr(factor1, factor2, term)
1027 ```
1028
1029 # Already in POWER ISA or subsumed
1030
1031 Lists operations either included as part of
1032 other bitmanip operations, or are already in
1033 Power ISA.
1034
1035 ## cmix
1036
1037 based on RV bitmanip, covered by ternlog bitops
1038
1039 ```
1040 uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
1041 return (RA & RB) | (RC & ~RB);
1042 }
1043 ```
1044
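A non-normative check of how ternlogi covers cmix: with RT preloaded with RC,
the immediate `0b11011000` selects RA where RB is 1 and the original RT
(i.e. RC) where RB is 0:

```python
def lut3(imm, a, b, c):
    return (imm >> ((c << 2) | (b << 1) | a)) & 1

def ternlogi(RT, RA, RB, imm, xlen=64):
    r = 0
    for i in range(xlen):
        r |= lut3(imm, (RB >> i) & 1, (RA >> i) & 1, (RT >> i) & 1) << i
    return r

def cmix(RA, RB, RC, xlen=64):
    return (RA & RB) | (RC & ~RB & ((1 << xlen) - 1))

RA, RB, RC = 0x1234567890ABCDEF, 0x00FF00FF00FF00FF, 0xFEDCBA0987654321
assert ternlogi(RC, RA, RB, 0b11011000) == cmix(RA, RB, RC)
```
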
1045 ## count leading/trailing zeros with mask
1046
1047 in v3.1 p105
1048
1049 ```
count ← 0
do i = 0 to 63
    if ((RB)[i] = 1) then do
        if ((RS)[i] = 1) then break
        count ← count + 1
    end
RA ← EXTZ64(count)
1054 ```
1055
1056 ## bit deposit
1057
pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106
1059
    m = 0
    k = 0
    do while (m < 64)
        if VSR[VRB+32].dword[i].bit[63-m]=1 then do
            result = VSR[VRA+32].dword[i].bit[63-k]
            VSR[VRT+32].dword[i].bit[63-m] = result
            k = k + 1
        end
        m = m + 1
1066
1067 ```
1068
1069 uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
1070 {
1071 uint_xlen_t r = 0;
1072 for (int i = 0, j = 0; i < XLEN; i++)
1073 if ((RB >> i) & 1) {
1074 if ((RA >> j) & 1)
1075 r |= uint_xlen_t(1) << i;
1076 j++;
1077 }
1078 return r;
1079 }
1080
1081 ```
1082
1083 ## bit extract
1084
1085 other way round: identical to RV bext: pextd, found in v3.1 p196
1086
1087 ```
1088 uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
1089 {
1090 uint_xlen_t r = 0;
1091 for (int i = 0, j = 0; i < XLEN; i++)
1092 if ((RB >> i) & 1) {
1093 if ((RA >> i) & 1)
1094 r |= uint_xlen_t(1) << j;
1095 j++;
1096 }
1097 return r;
1098 }
1099 ```
1100
1101 ## centrifuge
1102
1103 found in v3.1 p106 so not to be added here
1104
1105 ```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if ((RB)[i] = 0) then do
        result[ptr0] = (RS)[i]
        ptr0 = ptr0 + 1
    end
    if ((RB)[63-i] = 1) then do
        result[63-ptr1] = (RS)[63-i]
        ptr1 = ptr1 + 1
    end
RA = result
1118 ```
1119
1120 ## bit to byte permute
1121
Similar to the matrix permute in RV bitmanip, which has XOR and OR variants,
these perform a transpose (bmatflip).
TODO: this looks like VSX; is there a scalar variant
in v3.0/1 already?
1126
1127 do j = 0 to 7
1128 do k = 0 to 7
1129 b = VSR[VRB+32].dword[i].byte[k].bit[j]
1130 VSR[VRT+32].dword[i].byte[j].bit[k] = b
1131
1132 ## grev
1133
superseded by grevlut
1135
1136 based on RV bitmanip, this is also known as a butterfly network. however
1137 where a butterfly network allows setting of every crossbar setting in
1138 every row and every column, generalised-reverse (grev) only allows
1139 a per-row decision: every entry in the same row must either switch or
1140 not-switch.
1141
1142 <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
1143
1144 ```
1145 uint64_t grev64(uint64_t RA, uint64_t RB)
1146 {
1147 uint64_t x = RA;
1148 int shamt = RB & 63;
1149 if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
1150 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1151 if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
1152 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1153 if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1154 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1155 if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
1156 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1157 if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
1158 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1159 if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
1160 ((x & 0xFFFFFFFF00000000LL) >> 32);
1161 return x;
1162 }
1163
1164 ```
1165
1166 ## gorc
1167
based on RV bitmanip, gorc is superseded by grevlut
1169
1170 ```
1171 uint32_t gorc32(uint32_t RA, uint32_t RB)
1172 {
1173 uint32_t x = RA;
1174 int shamt = RB & 31;
1175 if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
1176 if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
1177 if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
1178 if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
1179 if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
1180 return x;
1181 }
1182 uint64_t gorc64(uint64_t RA, uint64_t RB)
1183 {
1184 uint64_t x = RA;
1185 int shamt = RB & 63;
1186 if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
1187 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1188 if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
1189 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1190 if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1191 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1192 if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
1193 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1194 if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
1195 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1196 if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
1197 ((x & 0xFFFFFFFF00000000LL) >> 32);
1198 return x;
1199 }
1200
1201 ```
1202
1203
1204 # Appendix
1205
1206 see [[bitmanip/appendix]]
1207