1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10
11
12 # bitmanipulation
13
14 **DRAFT STATUS**
15
16 pseudocode: [[openpower/isa/bitmanip]]
17
18 this extension amalgamates bitmanipulation primitives from many sources, including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
19 Also included are DSP/Multimedia operations suitable for
20 Audio/Video. Vectorisation and SIMD are removed: these are straight scalar (element) operations making them suitable for embedded applications.
21 Vectorisation Context is provided by [[openpower/sv]].
22
23 When combined with SV, scalar variants of bitmanip operations found in VSX are added so that the Packed SIMD aspects of VSX may be retired as "legacy"
24 in the far future (10 to 20 years). Also, VSX is hundreds of opcodes, requires 128 bit pathways, and is wholly unsuited to low power or embedded scenarios.
25
26 ternlogv is experimental and is the only operation that may be considered a "Packed SIMD". It is added as a variant of the already well-justified ternlog operation (done in AVX512 as an immediate only) "because it looks fun". As it is based on the LUT4 concept it will allow accelerated emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to achieve similar objectives.
27
28 general-purpose Galois Field 2^M operations are added so as to avoid huge custom opcode proliferation across many areas of Computer Science. however for convenience and also to avoid setup costs, some of the more common operations (clmul, crc32) are also added. The expectation is that these operations would all be covered by the same pipeline.
29
30 note that there are brownfield spaces below that could incorporate some of the set-before-first and other scalar operations listed in [[sv/vector_ops]], and
31 the [[sv/av_opcodes]] as well as [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
32
33 Useful resources:
34
35 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
36 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
37
38 # summary
39
40 two major opcodes are needed
41
42 ternlog has its own major opcode
43
44 | 29.30 |31| name |
45 | ------ |--| --------- |
46 | 0 0 |Rc| ternlogi |
47 | 0 1 |sz| ternlogv |
48 | 1 iv | | grevlogi |
49
50 2nd major opcode for other bitmanip: minor opcode allocation
51
52 | 28.30 |31| name |
53 | ------ |--| --------- |
54 | -00 |0 | xpermi |
55 | -00 |1 | grevlog |
56 | -01 | | crternlog |
57 | 010 |Rc| bitmask |
58 | 011 | | SVP64 |
59 | 110 |Rc| 1/2-op |
60 | 111 | | bmrevi |
61
62
63 1-op and variants
64
65 | dest | src1 | subop | op |
66 | ---- | ---- | ----- | -------- |
67 | RT | RA | .. | bmatflip |
68
69 2-op and variants
70
71 | dest | src1 | src2 | subop | op |
72 | ---- | ---- | ---- | ----- | -------- |
73 | RT | RA | RB | or | bmatflip |
74 | RT | RA | RB | xor | bmatflip |
75 | RT | RA | RB | | grev |
76 | RT | RA | RB | | clmul\* |
77 | RT | RA | RB | | gorc |
78 | RT | RA | RB | shuf | shuffle |
79 | RT | RA | RB | unshuf| shuffle |
80 | RT | RA | RB | width | xperm |
81 | RT | RA | RB | type | av minmax |
82 | RT | RA | RB | | av abs avgadd |
83 | RT | RA | RB | type | vmask ops |
84 | RT | RA | RB | | |
85
86 3 ops
87
88 * grevlog
89 * GF mul-add
90 * bitmask-reverse
91
92 TODO: convert all instructions to use RT and not RS
93
94 | 0.5|6.10|11.15|16.20 |21..25 | 26....30 |31| name |
95 | -- | -- | --- | --- | ----- | -------- |--| ------ |
96 | NN | RT | RA |itype/| im0-4 | im5-7 00 |0 | xpermi |
97 | NN | RT | RA | RB | im0-4 | im5-7 00 |1 | grevlog |
98 | NN | | | | | ----- 01 |m3| crternlog |
99 | NN | RT | RA | RB | RC | mode 010 |Rc| bitmask\* |
100 | NN | | | | | 00 011 | | rsvd |
101 | NN | | | | | 01 011 |0 | svshape |
102 | NN | | | | | 01 011 |1 | svremap |
103 | NN | | | | | 10 011 |Rc| svstep |
104 | NN | | | | | 11 011 |Rc| setvl |
105 | NN | | | | | ---- 110 | | 1/2 ops |
106 | NN | RT | RA | RB | sh0-4 | sh5 1 111 |Rc| bmrevi |
107
108 ops (note that av avg and abs as well as vec scalar mask
109 are included here [[sv/vector_ops]], and
110 the [[sv/av_opcodes]])
111
112 TODO: convert from RA, RB, and RC to correct field names of RT, RA, and RB, and
113 double check that instructions didn't need 3 inputs.
114
115 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
116 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
117 | NN | RS | me | sh | SH | ME 0 | nn00 110 |Rc| bmopsi | {TODO} |
118 | NN | RS | RA | sh | SH | 0 1 | nn00 110 |Rc| bmopsi | XB-Form |
119 | NN | RT | RA | RB | 1 | 00 | 0001 110 |Rc| cldiv | X-Form |
120 | NN | RT | RA | RB | 1 | 01 | 0001 110 |Rc| clmod | X-Form |
121 | NN | RT | RA | | 1 | 10 | 0001 110 |Rc| bmatflip | X-Form |
122 | NN | | | | 1 | 11 | 0001 110 |Rc| rsvd | |
123 | NN | RT | RA | RB | 0 | 00 | 0001 110 |Rc| vec sbfm | X-Form |
124 | NN | RT | RA | RB | 0 | 01 | 0001 110 |Rc| vec sofm | X-Form |
125 | NN | RT | RA | RB | 0 | 10 | 0001 110 |Rc| vec sifm | X-Form |
126 | NN | RT | RA | RB | 0 | 11 | 0001 110 |Rc| vec cprop | X-Form |
127 | NN | | | | 0 | | 0101 110 |Rc| rsvd | |
128 | NN | RT | RA | RB | 1 | itype | 0101 110 |Rc| xperm | X-Form |
129 | NN | RT | RA | RB | 0 | itype | 1001 110 |Rc| av minmax | X-Form |
130 | NN | RT | RA | RB | 1 | 00 | 1001 110 |Rc| av abss | X-Form |
131 | NN | RT | RA | RB | 1 | 01 | 1001 110 |Rc| av absu | X-Form |
132 | NN | RT | RA | RB | 1 | 10 | 1001 110 |Rc| av avgadd | X-Form |
133 | NN | | | | 1 | 11 | 1001 110 |Rc| rsvd | |
134 | NN | RT | RA | RB | 0 | sh | 1101 110 |Rc| shadd | {TODO} |
135 | NN | RT | RA | RB | 1 | sh | 1101 110 |Rc| shadduw | {TODO} |
136 | NN | RT | RA | RB | 0 | 00 | 0010 110 |Rc| gorc | X-Form |
137 | NN | RA | RB | sh | SH | 00 | 1010 110 |Rc| gorci | XB-Form |
138 | NN | RT | RA | RB | 0 | 00 | 0110 110 |Rc| gorcw | X-Form |
139 | NN | RS | RA | SH | 0 | 00 | 1110 110 |Rc| gorcwi | X-Form |
140 | NN | RT | RA | RB | 1 | 00 | 1110 110 |Rc| bmator | X-Form |
141 | NN | RA | RB | RC | 0 | 01 | 0010 110 |Rc| grev | X-Form |
142 | NN | RT | RA | RB | 1 | 01 | 0010 110 |Rc| clmul | X-Form |
143 | NN | RA | RB | sh | SH | 01 | 1010 110 |Rc| grevi | XB-Form |
144 | NN | RA | RB | RC | 0 | 01 | 0110 110 |Rc| grevw | X-Form |
145 | NN | RS | RA | SH | 0 | 01 | 1110 110 |Rc| grevwi | X-Form |
146 | NN | RT | RA | RB | 1 | 01 | 1110 110 |Rc| bmatxor | X-Form |
147 | NN | | | | | 10 | --10 110 |Rc| rsvd | |
148 | NN | RT | RA | RB | 0 | 11 | 1110 110 |Rc| clmulr | X-Form |
149 | NN | RT | RA | RB | 1 | 11 | 1110 110 |Rc| clmulh | X-Form |
150 | NN | | | | | | --11 110 |Rc| rsvd | |
151
152 # ternlog bitops
153
154 Similar to FPGA LUTs: for every bit, perform a lookup into a table using an 8-bit immediate, or a table held in another register.
155
156 Like the x86 AVX512F [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq) instructions.
157
158 ## ternlogi
159
160 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
161 | -- | -- | --- | --- | ----- | --- |--|
162 | NN | RT | RA | RB | im0-7 | 00 |Rc|
163
164 lut3(imm, a, b, c):
165 idx = c << 2 | b << 1 | a
166 return imm[idx] # idx by LSB0 order
167
168 for i in range(64):
169 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
170
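As a cross-check, a runnable model of the above (an illustrative standalone sketch, not the specification pseudocode; LSB0 bit numbering assumed):

```python
def lut3(imm, a, b, c):
    # select one bit of the 8-bit immediate, indexed in LSB0 order
    idx = (c << 2) | (b << 1) | a
    return (imm >> idx) & 1

def ternlogi(imm, rt, ra, rb, xlen=64):
    # bitwise 3-input LUT: RT[i] = lut3(imm, RB[i], RA[i], RT[i])
    res = 0
    for i in range(xlen):
        res |= lut3(imm, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1) << i
    return res
```

for example `imm=0xE8` implements the bitwise majority function, so `ternlogi(0xE8, 0b0110, 0b1010, 0b1100)` gives `0b1110`.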
171 ## ternlogv
172
173 also, another possible variant involving swizzle-like selection
174 and masking: this only requires three 64-bit registers (RA, RS, RB) and
175 only 16 LUT3s.
176
177 Note however that unless XLEN matches sz, this instruction
178 is a Read-Modify-Write: RS must be read as a second operand
179 and all unmodified bits preserved. SVP64 may provide limited
180 alternative destination for RS from RS-as-source, but again
181 all unmodified bits must still be copied.
182
183 | 0.5|6.10|11.15|16.20|21.28 | 29.30 |31|
184 | -- | -- | --- | --- | ---- | ----- |--|
185 | NN | RS | RA | RB |idx0-3| 01 |sz|
186
187 SZ = (1+sz) * 8 # 8 or 16
188 raoff = MIN(XLEN, idx0 * SZ)
189 rboff = MIN(XLEN, idx1 * SZ)
190 rcoff = MIN(XLEN, idx2 * SZ)
191 rsoff = MIN(XLEN, idx3 * SZ)
192 imm = RB[0:8]
193 for i in range(MIN(XLEN, SZ)):
194 ra = RA[raoff+i]
195 rb = RA[rboff+i]
196 rc = RA[rcoff+i]
197 res = lut3(imm, ra, rb, rc)
198 RS[rsoff+i] = res
199
200 ## ternlogcr
201
202 another mode selection operates on CR Fields rather than Integer registers.
203
204 | 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
205 | -- | -- | --- | --- | --- |-----|----- | -----|--|
206 | NN | BT | BA | BB | BC |m0-2 | imm | 01 |m3|
207
208 mask = m0-2 || m3
209 for i in range(4):
210 a, b, c = CRs[BA][i], CRs[BB][i], CRs[BC][i]
211 if mask[i]: CRs[BT][i] = lut3(imm, a, b, c)
212
213 # int ops
214
215 ## min/max
216
217 required for the [[sv/av_opcodes]]
218
219 signed and unsigned min/max for integers. this is sort-of partly synthesiseable in [[sv/svp64]] with pred-result, as long as the dest reg is one of the sources, but not for both signed and unsigned. when the dest is also one of the sources and the mv fails due to the CR bit-test failing, the dest is only overwritten where the src is greater (or less).
220
221 signed/unsigned min/max gives more flexibility.
222
223 ```
224 uint_xlen_t min(uint_xlen_t rs1, uint_xlen_t rs2)
225 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
226 }
227 uint_xlen_t max(uint_xlen_t rs1, uint_xlen_t rs2)
228 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
229 }
230 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
231 { return rs1 < rs2 ? rs1 : rs2;
232 }
233 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
234 { return rs1 > rs2 ? rs1 : rs2;
235 }
236 ```
237
238 ## average
239
240 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
241 but not scalar
242
243 ```
244 uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
245 return (rs1 + rs2 + 1) >> 1;
246 }
247 ```
248
249 ## abs
250
251 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
252 but not scalar
253
254 ```
255 uint_xlen_t intabs(uint_xlen_t rs1, uint_xlen_t rs2) {
256 return (rs1 > rs2) ? (rs1-rs2) : (rs2-rs1);
257 }
258 ```
259
260 # shift-and-add
261
262 Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
263 Adding further LD/ST opcodes is too complex, so as a compromise shift-and-add is provided instead:
264 it replaces a pair of explicit instructions in hot-loops.
265
266 ```
267 uint_xlen_t shadd(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
268 return (rs1 << (sh+1)) + rs2;
269 }
270
271 uint_xlen_t shadduw(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
272 uint_xlen_t rs1z = rs1 & 0xFFFFFFFF;
273 return (rs1z << (sh+1)) + rs2;
274 }
275 ```
276
277 # cmix
278
279 based on RV bitmanip, covered by ternlog bitops
280
281 ```
282 uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
283 return (RA & RB) | (RC & ~RB);
284 }
285 ```
286
287
288 # bitmask set
289
290 based on RV bitmanip singlebit set, instruction format similar to shift
291 [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask rldicl but only immediate version).
292 however bitmask-invert is not, and set/clr are not covered, although they can use the same Shift ALU.
293
294 bmext (RB) version is not the same as rldicl because bmext is a right shift by RC, where rldicl is a left rotate. for the immediate version this does not matter, so a bmexti is not required.
295 for bmrev however there is no direct equivalent, and consequently a bmrevi is required.
296
297 bmset (register for mask amount) is particularly useful for creating
298 predicate masks where the length is a dynamic runtime quantity.
299 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask" in a single instruction without needing to initialise or depend on any other registers.
300
301 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
302 | -- | -- | --- | --- | --- | ------- |--| ----- |
303 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
304
305 Immediate-variant is an overwrite form:
306
307 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
308 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
309 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
310
311 ```
312 def MASK(x, y):
313 if x < y:
314 x = x+1
315 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
316 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
317 elif x == y:
318 return 1 << x
319 else:
320 x = x+1
321 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
322 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
323 return mask_a ^ mask_b
324
325
326 uint_xlen_t bmset(RS, RB, sh)
327 {
328 int shamt = RB & (XLEN - 1);
329 mask = (2<<sh)-1;
330 return RS | (mask << shamt);
331 }
332
333 uint_xlen_t bmclr(RS, RB, sh)
334 {
335 int shamt = RB & (XLEN - 1);
336 mask = (2<<sh)-1;
337 return RS & ~(mask << shamt);
338 }
339
340 uint_xlen_t bminv(RS, RB, sh)
341 {
342 int shamt = RB & (XLEN - 1);
343 mask = (2<<sh)-1;
344 return RS ^ (mask << shamt);
345 }
346
347 uint_xlen_t bmext(RS, RB, sh)
348 {
349 int shamt = RB & (XLEN - 1);
350 mask = (2<<sh)-1;
351 return mask & (RS >> shamt);
352 }
353 ```
354
355 bitmask extract with reverse. can be done by bit-order-inverting all of RB and getting bits of RB from the opposite end.
356
357 when RA is zero, no shift occurs. this makes bmextrev useful for
358 simply reversing all bits of a register.
359
360 ```
361 msb = ra[5:0];
362 rev[0:msb] = rb[msb:0];
363 rt = ZE(rev[msb:0]);
364
365 uint_xlen_t bmextrev(RA, RB, sh)
366 {
367 int shamt = XLEN-1;
368 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
369 shamt = (XLEN-1)-shamt; // shift other end
370 bra = bitreverse(RB); // swap LSB-MSB
371 mask = (2<<sh)-1;
372 return mask & (bra >> shamt);
373 }
374 ```
375
376 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name |
377 | -- | -- | --- | --- | --- | ------- |--| ------ |
378 | NN | RT | RA | RB | sh | 1 011 |Rc| bmrevi |
379
380
381 # grevlut
382
383 generalised reverse combined with a pair of LUT2s and allowing
384 a constant `0b0101...0101` when RA=0, and an option to invert
385 (including when RA=0, giving a constant 0b1010...1010 as the
386 initial value) provides a wide range of instructions
387 and a means to set hundreds of regular 64 bit patterns with one
388 single 32 bit instruction.
389
390 the two LUT2s are applied left-half (when not swapping)
391 and right-half (when swapping) so as to allow a wider
392 range of options.
393
394 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
395
396 * A value of `0b11001010` for the immediate provides
397 the functionality of a standard "grev".
398 * `0b11101110` provides gorc
399
400 grevlut should be arranged so as to produce the constants
401 needed to put into bext (bitextract) so as in turn to
402 be able to emulate x86 pmovmask instructions <https://www.felixcloutier.com/x86/pmovmskb>.
403 This only requires 2 instructions (grevlut, bext).
404
405 Note that if the mask is required to be placed
406 directly into CR Fields (for use as CR Predicate
407 masks rather than a integer mask) then sv.cmpi or sv.ori
408 may be used instead, bearing in mind that sv.ori
409 is a 64-bit instruction, and `VL` must have been
410 set to the required length:
411
412 sv.ori./elwid=8 r10.v, r10.v, 0
413
414 The following settings provide the required mask constants:
415
416 | RA | RB | imm | iv | result |
417 | ------- | ------- | ---------- | -- | ---------- |
418 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
419 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
420 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
421 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
422 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
423 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
424
425 Better diagram showing the correct ordering of shamt (RB). A LUT2
426 is applied to all locations marked in red using the first 4
427 bits of the immediate, and a separate LUT2 applied to all
428 locations in green using the upper 4 bits of the immediate.
429
430 <img src="/openpower/sv/grevlut.png" width=700 />
431
432 demo code [[openpower/sv/grevlut.py]]
433
434 ```
435 lut2(imm, a, b):
436 idx = b << 1 | a
437 return imm[idx] # idx by LSB0 order
438
439 dorow(imm8, step_i, chunk_size):
440 for j in 0 to 63:
441 if (j&chunk_size) == 0
442 imm = imm8[0..3]
443 else
444 imm = imm8[4..7]
445 step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
446 return step_o
447
448 uint64_t grevlut64(uint64_t RA, uint64_t RB, uint8_t imm, bool iv)
449 {
450 uint64_t x = 0x5555_5555_5555_5555;
451 if (RA != 0) x = GPR(RA);
452 if (iv) x = ~x;
453 int shamt = RB & 63;
454 for i in 0 to 6
455 step = 1<<i
456 if (shamt & step) x = dorow(imm, x, step)
457 return x;
458 }
459
460 ```
461
462 | 0.5|6.10|11.15|16.20 |21..25 | 26....30 |31| name |
463 | -- | -- | --- | --- | ----- | -------- |--| ------ |
464 | NN | RT | RA | s0-4 | im0-4 | im5-7 1 iv |s5| grevlogi |
465 | NN | RT | RA | RB | im0-4 | im5-7 00 |1 | grevlog |
466
467
468 # grev
469
470 superseded by grevlut
471
472 based on RV bitmanip, this is also known as a butterfly network. however
473 where a butterfly network allows setting of every crossbar setting in
474 every row and every column, generalised-reverse (grev) only allows
475 a per-row decision: every entry in the same row must either switch or
476 not-switch.
477
478 <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
479
480 ```
481 uint64_t grev64(uint64_t RA, uint64_t RB)
482 {
483 uint64_t x = RA;
484 int shamt = RB & 63;
485 if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
486 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
487 if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
488 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
489 if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
490 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
491 if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
492 ((x & 0xFF00FF00FF00FF00LL) >> 8);
493 if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
494 ((x & 0xFFFF0000FFFF0000LL) >> 16);
495 if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
496 ((x & 0xFFFFFFFF00000000LL) >> 32);
497 return x;
498 }
499
500 ```
501
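A runnable translation of the above (illustrative, not normative): with `shamt = 63` every swap layer is applied, giving a full 64-bit bit-reversal.

```python
def grev64(ra, rb):
    # generalised reverse: each set bit of shamt swaps bit-groups of that size
    x = ra
    shamt = rb & 63
    for i in range(6):
        step = 1 << i
        # mask selecting the "low" half of each 2*step group:
        # 0x5555..., 0x3333..., 0x0F0F..., 0x00FF..., etc.
        mask = (2**64 - 1) // ((1 << step) + 1)
        if shamt & step:
            x = ((x & mask) << step) | ((x >> step) & mask)
    return x
```

grev with all swap layers enabled is an involution: applying it twice returns the original value.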
502 # gorc
503
504 based on RV bitmanip, gorc is superseded by grevlut
505
506 ```
507 uint32_t gorc32(uint32_t RA, uint32_t RB)
508 {
509 uint32_t x = RA;
510 int shamt = RB & 31;
511 if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
512 if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
513 if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
514 if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
515 if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
516 return x;
517 }
518 uint64_t gorc64(uint64_t RA, uint64_t RB)
519 {
520 uint64_t x = RA;
521 int shamt = RB & 63;
522 if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
523 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
524 if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
525 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
526 if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
527 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
528 if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
529 ((x & 0xFF00FF00FF00FF00LL) >> 8);
530 if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
531 ((x & 0xFFFF0000FFFF0000LL) >> 16);
532 if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
533 ((x & 0xFFFFFFFF00000000LL) >> 32);
534 return x;
535 }
536
537 ```
538
539 # xperm
540
541 based on RV bitmanip.
542
543 RA contains a vector of indices to select parts of RB to be
544 copied to RT. The immediate-variant allows an 8-bit
545 pattern (repeated) to be targeted at different parts of RT.
546
547 xperm shares some similarity with one of the uses of bmator
548 in that xperm indices use binary addressing, where bmator
549 may be considered to be unary addressing.
550
551 ```
552 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
553 {
554 uint_xlen_t r = 0;
555 uint_xlen_t sz = 1LL << sz_log2;
556 uint_xlen_t mask = (1LL << sz) - 1;
557 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
558 for (int i = 0; i < XLEN; i += sz) {
559 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
560 if (pos < XLEN)
561 r |= ((RB >> pos) & mask) << i;
562 }
563 return r;
564 }
565 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
566 {
567 uint_xlen_t r = 0;
568 uint_xlen_t sz = 1LL << sz_log2;
569 uint_xlen_t mask = (1LL << sz) - 1;
570 for (int i = 0; i < XLEN; i += sz) {
571 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
572 if (pos < XLEN)
573 r |= ((RB >> pos) & mask) << i;
574 }
575 return r;
576 }
577 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
578 { return xperm(RA, RB, 2); }
579 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
580 { return xperm(RA, RB, 3); }
581 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
582 { return xperm(RA, RB, 4); }
583 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
584 { return xperm(RA, RB, 5); }
585 ```
586
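As an illustration, a runnable model of xperm (a sketch under the same semantics as the C above): with byte-sized elements (`sz_log2=3`) and an index vector of 7 down to 0, xperm performs a byte-reverse.

```python
def xperm(ra, rb, sz_log2, xlen=64):
    # each sz-bit field of RA is an index selecting an sz-bit field of RB
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, xlen, sz):
        pos = ((ra >> i) & mask) << sz_log2
        if pos < xlen:          # out-of-range indices produce a zero field
            r |= ((rb >> pos) & mask) << i
    return r
```

an index vector of 0..7 (element i selects element i) is the identity permutation.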
587 # bitmatrix
588
589 ```
590 uint64_t bmatflip(uint64_t RA)
591 {
592 uint64_t x = RA;
593 x = shfl64(x, 31);
594 x = shfl64(x, 31);
595 x = shfl64(x, 31);
596 return x;
597 }
598 uint64_t bmatxor(uint64_t RA, uint64_t RB)
599 {
600 // transpose of RB
601 uint64_t RBt = bmatflip(RB);
602 uint8_t u[8]; // rows of RA
603 uint8_t v[8]; // cols of RB
604 for (int i = 0; i < 8; i++) {
605 u[i] = RA >> (i*8);
606 v[i] = RBt >> (i*8);
607 }
608 uint64_t x = 0;
609 for (int i = 0; i < 64; i++) {
610 if (pcnt(u[i / 8] & v[i % 8]) & 1)
611 x |= 1LL << i;
612 }
613 return x;
614 }
615 uint64_t bmator(uint64_t RA, uint64_t RB)
616 {
617 // transpose of RB
618 uint64_t RBt = bmatflip(RB);
619 uint8_t u[8]; // rows of RA
620 uint8_t v[8]; // cols of RB
621 for (int i = 0; i < 8; i++) {
622 u[i] = RA >> (i*8);
623 v[i] = RBt >> (i*8);
624 }
625 uint64_t x = 0;
626 for (int i = 0; i < 64; i++) {
627 if ((u[i / 8] & v[i % 8]) != 0)
628 x |= 1LL << i;
629 }
630 return x;
631 }
632
633 ```
634
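A runnable model of the 8x8 bit-matrix operations (illustrative: the transpose is computed directly rather than via `shfl64`, and row 0 is taken as the low byte). Multiplying by the identity matrix `0x8040201008040201` returns the other operand unchanged.

```python
def bmat_rows(m):
    # unpack a 64-bit value into 8 rows of 8 bits, row 0 in the low byte
    return [(m >> (i * 8)) & 0xFF for i in range(8)]

def bmat_cols(m):
    # column c as a byte: bit r is bit c of row r (i.e. the transpose's rows)
    return [sum((((m >> (r * 8 + c)) & 1) << r) for r in range(8))
            for c in range(8)]

def bmatxor(ra, rb):
    # 8x8 binary matrix multiply over GF(2): AND rows with columns, XOR-reduce
    u, v = bmat_rows(ra), bmat_cols(rb)
    x = 0
    for i in range(64):
        if bin(u[i // 8] & v[i % 8]).count("1") & 1:   # parity of the AND
            x |= 1 << i
    return x

def bmator(ra, rb):
    # same structure, but OR-reduce instead of parity
    u, v = bmat_rows(ra), bmat_cols(rb)
    x = 0
    for i in range(64):
        if u[i // 8] & v[i % 8]:
            x |= 1 << i
    return x

IDENTITY = 0x8040201008040201  # row i has only bit i set
```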
635 # Introduction to Carry-less and GF arithmetic
636
637 * obligatory xkcd <https://xkcd.com/2595/>
638
639 There are three completely separate types of Galois-Field-based arithmetic
640 that we implement which are not well explained even in introductory
641 literature. A slightly oversimplified explanation is followed by more
642 accurate descriptions:
643
644 * `GF(2)` carry-less binary arithmetic. this is not actually a Galois Field,
645 but is often loosely referred to as GF(2) - see below as to why.
646 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
647 Galois Fields
648 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
649 (2^N) and a second "reducing" polynomial (similar to a prime number), these
650 are said to be GF(2^N) arithmetic.
651
652 further detailed and more precise explanations are provided below
653
654 * **Polynomials with coefficients in `GF(2)`**
655 (aka. Carry-less arithmetic -- the `cl*` instructions).
656 This isn't actually a Galois Field, but its coefficients are. This is
657 basically binary integer addition, subtraction, and multiplication like
658 usual, except that carries aren't propagated at all, effectively turning
659 both addition and subtraction into the bitwise xor operation. Division and
660 remainder are defined to match how addition and multiplication works.
661 * **Galois Fields with a prime size**
662 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
663 This is basically just the integers mod `p`.
664 * **Galois Fields with a power-of-a-prime size**
665 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
666 integer `n > 0`).
667 We only implement these for `p == 2`, called Binary Galois Fields
668 (`GF(2^n)` -- the `gfb*` instructions).
669 For any prime `p`, `GF(p^n)` is implemented as polynomials with
670 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
671 remainders of dividing by a specifically chosen polynomial in `GF(p)` called
672 the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
673 Polynomial must be an irreducible polynomial (like primes, but for
674 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
675 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
676 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
677 polynomials used to implement `GF(p^n)`.
678
679 Many implementations and much of the literature do not make a clear
680 distinction between these three categories, which makes it confusing
681 to understand what their purpose and value is.
682
683 * carry-less multiply is extremely common and is used for the ubiquitous
684 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
685 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
686 has significant uses throughout cryptography
687 * GF(p) is the basis again of a significant quantity of algorithms
688 (TODO, list them, jacob knows what they are), even though the
689 modulo is limited to be below 64-bit (size of a scalar int)
690
691 # Instructions for Carry-less Operations
692
693 aka. Polynomials with coefficients in `GF(2)`
694
695 Carry-less addition/subtraction is simply XOR, so a `cladd`
696 instruction is not provided since the `xor[i]` instruction can be used instead.
697
698 These are operations on polynomials with coefficients in `GF(2)`, with the
699 polynomial's coefficients packed into integers with the following algorithm:
700
701 ```python
702 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
703 ```
704
705 ## Carry-less Multiply Instructions
706
707 based on RV bitmanip
708 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
709 <https://www.felixcloutier.com/x86/pclmulqdq> and
710 <https://en.m.wikipedia.org/wiki/Carry-less_product>
711
712 They are worth adding as their own non-overwrite operations
713 (in the same pipeline).
714
715 ### `clmul` Carry-less Multiply
716
717 ```python
718 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
719 ```
720
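The normative code is the inlined reference above; as a self-contained illustration (a sketch only), carry-less multiply is the XOR of shifted copies of one operand, one copy per set bit of the other:

```python
def clmul(a, b, width=64):
    # carry-less multiply: partial products are combined with XOR, not ADD
    r = 0
    shift = 0
    while b >> shift:
        if (b >> shift) & 1:
            r ^= a << shift
        shift += 1
    return r & ((1 << width) - 1)
```

`clmul(3, 3) == 5` reflects `(x+1)^2 = x^2+1` over GF(2): the cross terms cancel under XOR.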
721 ### `clmulh` Carry-less Multiply High
722
723 ```python
724 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
725 ```
726
727 ### `clmulr` Carry-less Multiply (Reversed)
728
729 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
730 bit-reversed inputs.
731
732 ```python
733 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
734 ```
735
736 ## `clmadd` Carry-less Multiply-Add
737
738 ```
739 clmadd RT, RA, RB, RC
740 ```
741
742 ```
743 (RT) = clmul((RA), (RB)) ^ (RC)
744 ```
745
746 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
747
748 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
749 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
750 to avoid the need for a temp register. RS is written to as well as RT.
751
752 Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
753 definition of the Inverse Discrete Fourier Transform involves calculating a
754 multiplicative inverse, which may not exist in every Ring, the
755 Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
756 to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
757 TODO: check with someone who knows for sure if that's correct.)
758
759 ```
760 cltmadd RT, RA, RB, RC
761 ```
762
763 TODO: add link to explanation for where `RS` comes from.
764
765 ```
766 a = (RA)
767 c = (RC)
768 # read all inputs before writing to any outputs in case
769 # an input overlaps with an output register.
770 (RT) = clmul(a, (RB)) ^ c
771 (RS) = a ^ c
772 ```
773
774 ## `cldivrem` Carry-less Division and Remainder
775
776 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
777 for other instructions.
778
779 ```python
780 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
781 ```
782
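A self-contained sketch of carry-less (polynomial) long division over GF(2), consistent with the packing convention above (illustrative only; the normative code is the inlined reference):

```python
def cldivrem(n, d, width):
    # polynomial long division: subtract (XOR) shifted copies of the divisor
    assert d != 0, "division by zero polynomial"
    dl = d.bit_length()
    q, r = 0, n
    for i in range(width - dl, -1, -1):
        if (r >> (i + dl - 1)) & 1:   # leading coefficient still present?
            r ^= d << i
            q |= 1 << i
    return q, r
```

for example dividing `x^3+x+1` (0b1011) by `x+1` (0b11) gives quotient `x^2+x` (0b110) and remainder 1.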
783 ## `cldiv` Carry-less Division
784
785 ```
786 cldiv RT, RA, RB
787 ```
788
789 ```
790 n = (RA)
791 d = (RB)
792 q, r = cldivrem(n, d, width=XLEN)
793 (RT) = q
794 ```
795
796 ## `clrem` Carry-less Remainder
797
798 ```
799 clrem RT, RA, RB
800 ```
801
802 ```
803 n = (RA)
804 d = (RB)
805 q, r = cldivrem(n, d, width=XLEN)
806 (RT) = r
807 ```
808
809 # Instructions for Binary Galois Fields `GF(2^m)`
810
811 see:
812
813 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
814 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
815 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
816
817 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
818 instruction is not provided since the `xor[i]` instruction can be used instead.
819
820 ## `GFBREDPOLY` SPR -- Reducing Polynomial
821
822 In order to save registers and to make operations orthogonal with standard
823 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
824 This also allows hardware to pre-compute useful parameters (such as the
825 degree, or look-up tables) based on the reducing polynomial, and store them
826 alongside the SPR in hidden registers, only recomputing them whenever the SPR
827 is written to, rather than having to recompute those values for every
828 instruction.
829
830 Because Galois Fields require the reducing polynomial to be an irreducible
831 polynomial, that guarantees that any polynomial of `degree > 1` must have
832 the LSB set, since otherwise it would be divisible by the polynomial `x`,
833 making it reducible, making whatever we're working on no longer a Field.
834 Therefore, we can reuse the LSB to indicate `degree == XLEN`.
835
836 ```python
837 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
838 ```
839
840 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
841
842 unless this is an immediate op, `mtspr` is completely sufficient.
843
844 ```python
845 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
846 ```
847
848 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
849
850 ```
851 gfbmul RT, RA, RB
852 ```
853
854 ```python
855 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
856 ```
857
858 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
859
860 ```
861 gfbmadd RT, RA, RB, RC
862 ```
863
864 ```python
865 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
866 ```
867
868 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
869
870 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
871 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
872 for a temp register. RS is written to as well as RT.
873
874 ```
875 gfbtmadd RT, RA, RB, RC
876 ```
877
878 TODO: add link to explanation for where `RS` comes from.
879
880 ```
881 a = (RA)
882 c = (RC)
883 # read all inputs before writing to any outputs in case
884 # an input overlaps with an output register.
885 (RT) = gfbmadd(a, (RB), c)
886 # use gfbmadd again since it reduces the result
887 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
888 ```
889
890 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
891
892 ```
893 gfbinv RT, RA
894 ```
895
896 ```python
897 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
898 ```
899
900 # Instructions for Prime Galois Fields `GF(p)`
901
902 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
903
904 ## `gfpadd` Prime Galois Field `GF(p)` Addition
905
906 ```
907 gfpadd RT, RA, RB
908 ```
909
910 ```python
911 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
912 ```
913
914 the addition happens on infinite-precision integers
915
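A minimal sketch of that behaviour (the normative code is the inlined reference; `prime` here stands in for the `GFPRIME` SPR): the sum is taken on unbounded integers and then reduced.

```python
def gfpadd(ra, rb, prime):
    # Python ints are arbitrary-precision, so ra + rb cannot wrap or overflow
    # before the modulo reduction is applied
    return (ra + rb) % prime
```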
916 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
917
918 ```
919 gfpsub RT, RA, RB
920 ```
921
922 ```python
923 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
924 ```
925
926 the subtraction happens on infinite-precision integers
927
928 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
929
930 ```
931 gfpmul RT, RA, RB
932 ```
933
934 ```python
935 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
936 ```
937
938 the multiplication happens on infinite-precision integers
939
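A minimal Python model of the three operations above (the inlined reference files remain normative); the explicit `p` parameter is an assumption standing in for the `GFPRIME` SPR:

```python
# sketch: `p` stands in for the GFPRIME SPR (assumed prime)
def gfpadd(a, b, p):
    return (a + b) % p

def gfpsub(a, b, p):
    # Python's % always returns a result in 0..p-1, even for a
    # negative intermediate, matching reduction into the field
    return (a - b) % p

def gfpmul(a, b, p):
    # Python ints are arbitrary-precision, so the product cannot
    # wrap at 64 bits before the reduction
    return (a * b) % p
```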
## `gfpinv` Prime Galois Field `GF(p)` Invert

```
gfpinv RT, RA
```

Some potential hardware implementations are found in:
<https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>

```python
[[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
```
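
A software sketch of the inverse using the extended Euclidean algorithm (hardware may well use a different method, as the paper linked above discusses); `p` again stands in for the `GFPRIME` SPR, and mapping 0 to 0 is an assumed convention:

```python
def gfpinv(a, p):
    """Modular inverse of a mod p via extended Euclid (sketch).
    p is assumed prime; 0, which has no inverse, is mapped to 0."""
    if a % p == 0:
        return 0
    old_r, r = a % p, p
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r   # gcd remainders
        old_s, s = s, old_s - q * s   # Bezout coefficient for a
    return old_s % p                  # normalise into 0..p-1
```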

## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add

```
gfpmadd RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
```

the multiplication and addition happen on infinite-precision integers

## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract

```
gfpmsub RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed

```
gfpmsubr RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

Used in combination with SV FFT REMAP to perform a full
Number-Theoretic Transform in-place. This is possible because the
instruction is 3-in 2-out, which avoids the need for a temporary
register: RS is written as well as RT.

```
gfpmaddsubr RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
factor1 = (RA)
factor2 = (RB)
term = (RC)
# read all inputs before writing to any outputs in case
# an input overlaps with an output register.
(RT) = gfpmadd(factor1, factor2, term)
(RS) = gfpmsubr(factor1, factor2, term)
```
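
The twin butterfly above can be sketched in Python as follows. This assumes "reversed" in `gfpmsubr` means the product is subtracted from the addend (`term - a*b`); that reading is an assumption for this sketch, and the inlined reference models are authoritative:

```python
def gfpmaddsubr_pair(factor1, factor2, term, p):
    """NTT butterfly sketch: returns (RT, RS) where
    RT = a*b + c mod p and RS = c - a*b mod p (assumed semantics)."""
    prod = (factor1 * factor2) % p
    rt = (prod + term) % p
    rs = (term - prod) % p     # Python % keeps the result in 0..p-1
    return rt, rs
```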

# Already in POWER ISA

## count leading/trailing zeros with mask

in v3.1 p105

```
count ← 0
do i = 0 to 63
    if ((RB)i = 1) then do
        if ((RS)i = 1) then
            break
        count ← count + 1
    end
RA ← EXTZ64(count)
```
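
The pseudocode above can be modelled in Python. Note IBM bit numbering: bit 0 is the most significant, so `(RB)i` corresponds to bit `63 - i` of a Python integer; the function name is hypothetical:

```python
def cntlzdm(rs, rb):
    """Count leading zeros of RS under mask RB (sketch): scan from
    the MSB end, look only at positions where RB is 1, and count
    RS zeros until the first RS one is seen."""
    count = 0
    for i in range(63, -1, -1):      # MSB first (IBM bit 0)
        if (rb >> i) & 1:
            if (rs >> i) & 1:
                break
            count += 1
    return count
```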

## bit deposit

pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106

```
m = 0
k = 0
do while(m < 64)
    if VSR[VRB+32].dword[i].bit[63-m]=1 then do
        result = VSR[VRA+32].dword[i].bit[63-k]
        VSR[VRT+32].dword[i].bit[63-m] = result
        k = k + 1
    end
    m = m + 1
```

```
uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> j) & 1)
                r |= uint_xlen_t(1) << i;
            j++;
        }
    return r;
}
```

## bit extract

other way round: identical to RV bext: pextd, found in v3.1 p196

```
uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> i) & 1)
                r |= uint_xlen_t(1) << j;
            j++;
        }
    return r;
}
```

## centrifuge

found in v3.1 p106 so not to be added here

```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if ((RB)i = 0) then do
        result(ptr0) = (RS)i
        ptr0 = ptr0 + 1
    end
    if ((RB)(63-i) = 1) then do
        result(63-ptr1) = (RS)(63-i)
        ptr1 = ptr1 + 1
    end
RA = result
```
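
In Python terms (a sketch; the v3.1 pseudocode is authoritative): bits of RS selected by mask-0 positions gather at the most-significant end, bits selected by mask-1 positions gather at the least-significant end, each group keeping its original order:

```python
def cfuged(rs, rb):
    """Centrifuge sketch: RB=0 bits of RS pack to the left (MSB end),
    RB=1 bits pack to the right (LSB end), order preserved."""
    left, right = [], []
    for i in range(63, -1, -1):      # MSB first
        bit = (rs >> i) & 1
        if (rb >> i) & 1:
            right.append(bit)
        else:
            left.append(bit)
    result = 0
    for bit in left + right:         # left group lands in the high bits
        result = (result << 1) | bit
    return result
```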

## bit to byte permute

similar to matrix permute in RV bitmanip, which has XOR and OR variants;
these perform a transpose. TODO: this looks like VSX; check whether a
scalar variant already exists in v3.0/v3.1

```
do j = 0 to 7
    do k = 0 to 7
        b = VSR[VRB+32].dword[i].byte[k].bit[j]
        VSR[VRT+32].dword[i].byte[j].bit[k] = b
```
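
The loop above is an 8x8 bit-matrix transpose of one doubleword, which can be sketched directly in Python (function name hypothetical; bit `j` of byte `k` moves to bit `k` of byte `j`):

```python
def bitmat_transpose8x8(x):
    """Transpose a 64-bit value viewed as an 8x8 bit matrix:
    bit j of byte k of x becomes bit k of byte j of the result."""
    r = 0
    for j in range(8):
        for k in range(8):
            bit = (x >> (k * 8 + j)) & 1
            r |= bit << (j * 8 + k)
    return r
```

Applying the transpose twice returns the original value, a quick sanity check for any implementation.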
# Appendix

see [[bitmanip/appendix]]