swizzle needs a MV. see below for a potential way to use the funct7 to do a swizzle in rs2.

+---------------+-------------+-------+----------+----------+--------+----------+--------+--------+
| Encoding      | 31:27       | 26:25 | 24:20    | 19:15    | 14:12  | 11:7     | 6:2    | 1:0    |
+---------------+-------------+-------+----------+----------+--------+----------+--------+--------+
| RV32-I-type   + imm[11:0]                      + rs1[4:0] + funct3 | rd[4:0]  + opcode + 0b11   |
+---------------+-------------+-------+----------+----------+--------+----------+--------+--------+
| RV32-I-type   + fn4[3:0]    + swizzle[7:0]     + rs1[4:0] + 0b000  | rd[4:0]  + OP-V   + 0b11   |
+---------------+-------------+-------+----------+----------+--------+----------+--------+--------+

* fn4 = 4-bit function.
* fn4 = 0b0000 - INT MV-SWIZZLE ?
* fn4 = 0b0001 - FP MV-SWIZZLE ?
* fn4 = 0bNN10 - INT MV-X, NN=elwidth (default/8/16/32)
* fn4 = 0bNN11 - FP MV-X, NN=elwidth (default/8/16/32)

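As an illustration only (nothing here is normative, and the helper name is made up), the fn4 allocation above could be decoded as:

```python
def decode_fn4(fn4):
    """Hypothetical decode of the 4-bit fn4 field, following the
    bullet list above: bit 1 set means MV-X, bit 0 selects FP vs
    INT, and bits 3:2 give the MV-X element width."""
    if fn4 & 0b10:  # 0bNN10 / 0bNN11: MV-X with element width NN
        elwidth = {0: "default", 1: 8, 2: 16, 3: 32}[(fn4 >> 2) & 0b11]
        return ("MV-X", "FP" if fn4 & 0b1 else "INT", elwidth)
    if fn4 == 0b0000:
        return ("MV-SWIZZLE", "INT", None)
    if fn4 == 0b0001:
        return ("MV-SWIZZLE", "FP", None)
    return None  # remaining encodings unallocated

print(decode_fn4(0b0110))  # ('MV-X', 'INT', 8)
```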
swizzle (only active on SV or P48/P64 when SUBVL!=0):

+-----+-----+-----+-----+
| 7:6 | 5:4 | 3:2 | 1:0 |
+-----+-----+-----+-----+
|  w  |  z  |  y  |  x  |
+-----+-----+-----+-----+

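For instance (a sketch, assuming SUBVL=4 and the 2-bits-per-element layout above, with element x in bits 1:0):

```python
def apply_swizzle(vec, swz):
    """Apply an 8-bit swizzle immediate to a 4-element subvector:
    output element i is selected by bits [2i+1:2i] of swz."""
    return [vec[(swz >> (2 * i)) & 0b11] for i in range(4)]

# 0b00011011 selects source elements 3,2,1,0: reverses the subvector
print(apply_swizzle([1.0, 2.0, 3.0, 4.0], 0b00011011))
# [4.0, 3.0, 2.0, 1.0]
```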
Pseudocode for the element-width part of MV.X:

    def mv_x(rd, rs1, funct4):
        elwidth = (funct4>>2) & 0x3
        bitwidth = {0:XLEN, 1:8, 2:16, 3:32}[elwidth] # get bits per el
        bytewidth = bitwidth / 8                      # get bytes per el

        addr = (unsigned char *)&regs[rs1]
        index = addr[0]             # read first compacted index from rs1
        offset = index * bytewidth  # get offset within regfile as SRAM
        # TODO: actually needs to respect rd and rs1 element width
        # here as well.  this pseudocode just illustrates that the
        # MV.X operation contains a way to compact the indices into
        # a smaller space.
        regs[rd] = ((unsigned char*)regs)[offset]

The idea here is to allow 8-bit indices to be stored inside XLEN-sized
registers, such that rather than doing this:

    {SVP.VL=4} MV.X x3, x8, elwidth=default

The alternative is this:

    {SVP.VL=4} MV.X x3, x8, elwidth=8

Thus compacting four indices into the one register. x3 and x8's element
widths are *independent* of the MV.X elwidth, thus allowing the element
widths of both the source and destination *elements* being moved to be
over-ridden, whilst *at the same time* allowing the *indices* to be
compacted as well.

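The compaction can also be modelled in much-simplified executable form. Everything below is an illustrative sketch only (XLEN=64 assumed, integer registers only, no rd/rs1 element-width handling, no overlap checking):

```python
XLEN = 64
regs = [0] * 32  # integer register file

def mv_x(rd, rs1, elwidth, vl):
    """Sketch of MV.X: regs[rd+i] = regs[index_i], where the indices
    are packed into regs[rs1] at elwidth-bit granularity."""
    mask = (1 << elwidth) - 1
    for i in range(vl):
        index = (regs[rs1] >> (elwidth * i)) & mask
        regs[rd + i] = regs[index]

# four 8-bit indices (5, 6, 7, 4) compacted into the one register x8
regs[4], regs[5], regs[6], regs[7] = 444, 111, 222, 333
regs[8] = 5 | (6 << 8) | (7 << 16) | (4 << 24)
mv_x(16, 8, 8, 4)
print(regs[16:20])  # [111, 222, 333, 444]
```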
potential MV.X? register version of MV-swizzle?

+-------------+---------+-------+----------+----------+--------+----------+--------+--------+
| Encoding    | 31:27   | 26:25 | 24:20    | 19:15    | 14:12  | 11:7     | 6:2    | 1:0    |
+-------------+---------+-------+----------+----------+--------+----------+--------+--------+
| RV32-R-type + funct7          + rs2[4:0] + rs1[4:0] + funct3 | rd[4:0]  + opcode + 0b11   |
+-------------+---------+-------+----------+----------+--------+----------+--------+--------+
| RV32-R-type + 0b0000000       + rs2[4:0] + rs1[4:0] + 0b001  | rd[4:0]  + OP-V   + 0b11   |
+-------------+---------+-------+----------+----------+--------+----------+--------+--------+

* funct7 = 0b000NN00 - INT MV.X, elwidth=NN (default/8/16/32)
* funct7 = 0b000NN10 - FP MV.X, elwidth=NN (default/8/16/32)
* funct7 = 0b0000001 - INT MV.swizzle, indicating that rs2 is a swizzle argument?
* funct7 = 0b0000011 - FP MV.swizzle, indicating that rs2 is a swizzle argument?

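By analogy with fn4 earlier, a sketch of the funct7 decode (illustrative only, helper name made up):

```python
def decode_funct7(f7):
    """Hypothetical decode of the proposed funct7 encodings: bit 0 set
    means rs2 carries a swizzle argument; bit 1 selects FP vs INT;
    bits 3:2 give the MV.X element width."""
    kind = "FP" if f7 & 0b10 else "INT"
    if f7 & 0b1:
        return ("MV.swizzle", kind, None)
    elwidth = {0: "default", 1: 8, 2: 16, 3: 32}[(f7 >> 2) & 0b11]
    return ("MV.X", kind, elwidth)

print(decode_funct7(0b0001000))  # ('MV.X', 'INT', 16)
```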
question: do we need a swizzle MV.X as well?

there is the potential for macro-op fusion of mv-swizzle with the
following and/or preceding instruction.
<http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002486.html>

additional idea: a VBLOCK context which specifies that, when a given
register is used, that register is to be "swizzled"; the VBLOCK swizzle
context contains the swizzling to be carried out.

    __m128 _mm_shuffle_ps(__m128 lo, __m128 hi,
                          _MM_SHUFFLE(hi3,hi2,lo1,lo0))

Interleave inputs into low 2 floats and high 2 floats of output. Basically
out[0]=lo[lo0], out[1]=lo[lo1], out[2]=hi[hi2], out[3]=hi[hi3].

For example, _mm_shuffle_ps(a,a,_MM_SHUFFLE(i,i,i,i)) copies the float
a[i] into all 4 output floats.
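The selection semantics can be modelled in Python (a sketch of the SSE behaviour for comparison, not part of the proposal):

```python
def MM_SHUFFLE(w, z, y, x):
    """Model of the _MM_SHUFFLE macro: packs four 2-bit selectors."""
    return (w << 6) | (z << 4) | (y << 2) | x

def shuffle_ps(lo, hi, imm):
    """Model of _mm_shuffle_ps: the two low outputs select from lo,
    the two high outputs select from hi."""
    return [lo[imm & 3], lo[(imm >> 2) & 3],
            hi[(imm >> 4) & 3], hi[(imm >> 6) & 3]]

a = [10.0, 20.0, 30.0, 40.0]
# MM_SHUFFLE(i,i,i,i) broadcasts a[i] into all four outputs
print(shuffle_ps(a, a, MM_SHUFFLE(2, 2, 2, 2)))  # [30.0, 30.0, 30.0, 30.0]
```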