# SV Load and Store

Links:

* <https://bugs.libre-soc.org/show_bug.cgi?id=572>
* <https://bugs.libre-soc.org/show_bug.cgi?id=571>
* <https://llvm.org/devmtg/2016-11/Slides/Emerson-ScalableVectorizationinLLVMIR.pdf>
* <https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#vector-loads-and-stores>

Vectorisation of Load and Store requires creation, from scalar operations, of a number of different types (each sketched briefly below):

* fixed stride (contiguous sequence with no gaps)
* element strided (sequential but regularly offset, with gaps)
* vector indexed (vector of base addresses and vector of offsets)

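The following is a rough illustration only, not part of the specification: the effective-address patterns of the three types, expressed as small Python helpers with hypothetical names (`base`, `elwidth_bytes`, `stride_bytes`, `bases`, `offsets`, `VL` are illustrative values, not SV state):

    # illustrative sketch of the EAs each LD/ST type touches
    def fixed_stride_eas(base, elwidth_bytes, VL):
        # contiguous sequence with no gaps (unit stride)
        return [base + i * elwidth_bytes for i in range(VL)]

    def element_strided_eas(base, stride_bytes, VL):
        # sequential but regularly offset, leaving gaps between elements
        return [base + i * stride_bytes for i in range(VL)]

    def vector_indexed_eas(bases, offsets):
        # per-element base address plus per-element offset (gather/scatter)
        return [b + o for b, o in zip(bases, offsets)]
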
OpenPOWER Load/Store operations may be seen from [[isa/fixedload]] and [[isa/fixedstore]] pseudocode to be of the form:

    lbux RT, RA, RB
    EA <- (RA) + (RB)
    RT <- MEM(EA)

and for immediate variants:

    lb RT,D(RA)
    EA <- RA + EXTS(D)
    RT <- MEM(EA)

Thus in the first example the source registers may each be independently marked as scalar or vector, and likewise the destination; in the second example only the one source and the one destination may be marked as scalar or vector.

Thus we can see that Vector Indexed may be covered and, as demonstrated in the pseudocode below, the immediate can be set to the element width in order to give unit stride.

At the minimum, however, it is possible to provide unit stride and vector mode, as follows:

    # LD not VLD!
    # op_width: lb=1, lh=2, lw=4, ld=8
    op_load(RT, RA, op_width, immed, svctx, update):
        ps = get_pred_val(FALSE, RA); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            # skip non-predicated elements
            if (RA.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            if RA.isvec:
                # indirect mode (multi mode)
                srcbase = ireg[RA+i]
                offs = immed;
            else:
                srcbase = ireg[RA]
                if svctx.ldstmode == elementstride:
                    # element stride mode
                    offs = i * immed
                elif svctx.ldstmode == unitstride:
                    # unit stride mode
                    offs = i * op_width
                else:
                    # standard scalar mode (but predicated)
                    # no stride multiplier means VSPLAT mode
                    offs = immed
            # compute EA
            EA = srcbase + offs
            # update RA? load from memory
            if update: ireg[RA+i] = EA;
            ireg[RT+j] <= MEM[EA];
            if (!RT.isvec)
                break # destination scalar, end now
            if (RA.isvec) i++;
            if (RT.isvec) j++;

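To make the differences between the modes concrete, here is a hypothetical walk-through (illustrative values only) of the effective addresses `op_load` generates when RA is scalar and holds 0x1000, RT is a vector, no elements are masked out, VL=4, the immediate is 16 and op_width is 4 (lw):

    # hypothetical walk-through of op_load address generation
    srcbase, immed, op_width, VL = 0x1000, 16, 4, 4

    unit_stride    = [srcbase + i * op_width for i in range(VL)]  # offs = i * op_width
    element_stride = [srcbase + i * immed    for i in range(VL)]  # offs = i * immed
    vsplat         = [srcbase + immed        for i in range(VL)]  # offs = immed (same EA)

    print([hex(a) for a in unit_stride])     # ['0x1000', '0x1004', '0x1008', '0x100c']
    print([hex(a) for a in element_stride])  # ['0x1000', '0x1010', '0x1020', '0x1030']
    print([hex(a) for a in vsplat])          # ['0x1010', '0x1010', '0x1010', '0x1010']
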
Indexed LD is:

    function op_ldx(RT, RA, RB, update=False) # LD not VLD!
        rdv = map_dest_extra(RT);
        rsv = map_src_extra(RA);
        rso = map_src_extra(RB);
        ps = get_pred_val(FALSE, RA); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (i=0, j=0, k=0; i < VL && j < VL && k < VL):
            # skip non-predicated RA, RB and RT elements
            if (RA.isvec) while (!(ps & 1<<i)) i++;
            if (RB.isvec) while (!(ps & 1<<k)) k++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            EA = ireg[rsv+i] + ireg[rso+k] # indexed address
            if update: ireg[rsv+i] = EA
            ireg[rdv+j] <= MEM[EA];
            if (!RT.isvec)
                break # destination scalar, end immediately
            if (!RA.isvec && !RB.isvec)
                break # scalar-scalar
            if (RA.isvec) i++;
            if (RB.isvec) k++;
            if (RT.isvec) j++;

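As a purely illustrative example of the indexed form: with RA scalar (holding, say, a base of 0x2000), RB marked as a vector of byte offsets and RT a vector, `op_ldx` behaves as a gather; `i` stays at 0 while `k` and `j` advance:

    # hypothetical gather: scalar base in RA, vector of offsets in RB
    base    = 0x2000
    offsets = [0x00, 0x40, 0x08, 0x100]     # contents of the RB vector elements

    eas = [base + off for off in offsets]   # EA = ireg[rsv] + ireg[rso+k]
    print([hex(ea) for ea in eas])          # ['0x2000', '0x2040', '0x2008', '0x2100']
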
# LOAD/STORE Elwidths <a name="ldst"></a>

Loads and Stores are almost unique in that the OpenPOWER Scalar ISA provides a width for the operation (lb, lh, lw, ld): only `extsb` and others like it provide an explicit operation width elsewhere. In order to fit the different types of LD/ST Modes into SV, the src elwidth field is used to select the Mode, and the actual src elwidth is implicitly the same as the operation width. We then still apply Twin Predication, but using:

* operation width (lb=8, lh=16, lw=32, ld=64) as src elwidth
* destination element width override

Saturation (and other transformations) occur on the value loaded from memory as if it were of "infinite bitwidth", sign-extended (if Saturation requests signed) from the source width (lb, lh, lw, ld), followed by the actual Saturation to the destination width.

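A minimal sketch of that ordering (illustrative only, not the actual implementation), assuming signed Saturation of an lh (16-bit) load into an 8-bit destination elwidth:

    # sign-extend the loaded value "as if infinite bitwidth", then saturate
    def sext(value, width):
        # interpret the low "width" bits of value as a signed integer
        sign = 1 << (width - 1)
        return (value & (sign - 1)) - (value & sign)

    def saturate_signed(value, dest_width):
        lo, hi = -(1 << (dest_width - 1)), (1 << (dest_width - 1)) - 1
        return max(lo, min(hi, value))

    memread   = 0x8123                        # 16-bit value loaded by lh
    as_signed = sext(memread, 16)             # -32477, at "infinite" bitwidth
    result    = saturate_signed(as_signed, 8) # clamps to -128 for the 8-bit dest
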
In order to respect OpenPOWER v3.0B Scalar behaviour the memory side is treated effectively as completely separate and distinct from SV augmentation. This is primarily down to quirks surrounding LE/BE and byte-reversal in OpenPOWER.

Note the following regarding the pseudocode to follow:

* `scalar identity behaviour` SV Context parameter conditions turn this
  into a straight absolute fully-compliant Scalar v3.0B LD operation
* `brev` selects whether the operation is the byte-reversed variant (`ldbrx`
  rather than `ld`)
* `op_width` specifies the operation width (`lb`, `lh`, `lw`, `ld`) as
  a "normal" part of Scalar v3.0B LD
* `imm_offs` specifies the immediate offset `ld r3, imm_offs(r5)`, again
  as a "normal" part of Scalar v3.0B LD
* `svctx` specifies the SV Context and includes VL as well as
  destination elwidth overrides.

Below is the pseudocode for Unit-Strided LD (which includes Vector capability).

Note that twin predication, predication-zeroing, saturation
and other modes have all been removed, for clarity and simplicity:

    # LD not VLD! (ldbrx if brev=True)
    # this covers unit stride mode and a type of vector offset
    function op_ld(RT, RA, brev, op_width, imm_offs, svctx)
        for (int i = 0, int j = 0; i < svctx.VL && j < svctx.VL;):

            if RA.isvec:
                # strange vector mode, compute 64 bit address which is
                # not polymorphic! elwidth hardcoded to 64 here
                srcbase = get_polymorphed_reg(RA, 64, i)
            else:
                # unit stride mode, compute the address
                srcbase = ireg[RA] + i * op_width;

            # takes care of (merges) processor LE/BE and ld/ldbrx
            bytereverse = brev XNOR MSR.LE

            # read the underlying memory
            memread <= mem[srcbase + imm_offs];

            # optionally performs byteswap at op width
            if (bytereverse):
                memread = byteswap(memread, op_width)

            # now truncate/extend to the overridden width
            if not svctx.saturation_mode:
                memread = adjust_wid(memread, op_width, svctx.dest_elwidth)
            else:
                ... saturation adjustment...

            # takes care of inserting memory-read (now correctly byteswapped)
            # into regfile underlying LE-defined order, into the right place
            # within the NEON-like register, respecting destination element
            # bitwidth, and the element index (j)
            set_polymorphed_reg(RT, svctx.dest_elwidth, j, memread)

            # increments both src and dest element indices (no predication here)
            i++;
            j++;

When RA is marked as Vectorised the mode switches to an anomalous version similar to Indexed. The element indices increment to select a 64-bit base address, effectively as if the src elwidth were hard-set to "default". The important thing to note is that `i*op_width` is *not* added to the base address unless RA is marked as a scalar address.
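
For example (hypothetical values): with RA Vectorised and its elements holding the 64-bit addresses 0x1000 and 0x9000, and imm_offs of 8, the EAs are simply 0x1008 and 0x9008; the `i*op_width` term from the scalar-RA unit-stride case does not appear:

    # illustrative only: RA vector supplies 64-bit base addresses directly
    ra_vector = [0x1000, 0x9000]        # get_polymorphed_reg(RA, 64, i) for i = 0, 1
    imm_offs  = 8
    eas = [base + imm_offs for base in ra_vector]   # no i * op_width term
    print([hex(ea) for ea in eas])      # ['0x1008', '0x9008']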