# TODO ideas

<https://bugs.libre-soc.org/show_bug.cgi?id=213>

* idea 1: modify cmp (and other CR generators?) with qualifiers that
  create a single-bit prefix vector into an int reg
* idea 2: override the CR SO field in vector form to be a predicate bit per element
* idea 3: reading of predicates is from bits of an int reg
* idea 4: the SO CR field no longer means overflow: it contains a copy of the int reg
  predicate element bit (passed through). when OE is set?

# Requirements

* must be easily implementable in any microarchitecture including:
  - small and large out-of-order
  - in-order
  - FSM (0.3 IPC or below)
* must not compromise or penalise any microarchitectural performance
* must cover up to 64 elements
* must still work for elwidth over-rides

## Additional Capabilities

* two modes, "zeroing" and "non-zeroing". Zeroing mode places a zero in the masked-out element results, whereas non-zeroing leaves the destination (result) element unmodified (both modes are sketched below).
* predicate must be invertible via an opcode bit (to avoid the need for an instruction which inverts all bits of the predicate mask)
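
Below is a minimal illustrative sketch (not part of the specification) of the intended per-element semantics, expressed as a python-style reference loop. The name `sv_loop` and the argument layout are purely hypothetical:

    # sketch only: per-element predication with zeroing/non-zeroing
    # and predicate inversion.
    def sv_loop(dest, src1, src2, op, pred, VL, zeroing=False, invert=False):
        for i in range(VL):
            bit = (pred >> i) & 1
            if invert:
                bit ^= 1                        # invert the predicate via an opcode bit
            if bit:
                dest[i] = op(src1[i], src2[i])  # unmasked element: compute result
            elif zeroing:
                dest[i] = 0                     # zeroing: masked-out element set to zero
            # non-zeroing: masked-out element left unmodified
        return dest

    # example: add elements 1 and 3 only, leaving the rest untouched
    result = sv_loop([9, 9, 9, 9], [1, 2, 3, 4], [10, 20, 30, 40],
                     lambda a, b: a + b, pred=0b1010, VL=4)
    # result == [9, 22, 9, 44]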

Implementation note: even in in-order microarchitectures it is strongly advisable to use byte-level write-enable lines on the register file. This, in combination with 8-bit SIMD element overrides, allows, in "non-zeroing" mode, the predicate mask to be directly ANDed with the regfile write-enable lines to achieve the required functionality. The alternative is to perform a READ-MODIFY-MASK-WRITE cycle which is costly and compromises performance. This is avoided very simply with byte-level write-enable.
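
To illustrate the point (a sketch only, with hypothetical names): with byte-level write-enables, an 8-bit-element operation in "non-zeroing" mode can drive the write-enable lines directly from the predicate, and never needs to read the old destination value:

    # sketch: 8-bit elements, non-zeroing mode.  the predicate mask is
    # ANDed straight onto the per-byte write-enable lines, so masked-out
    # bytes are simply never written (no read-modify-mask-write cycle).
    def write_with_byte_enables(regfile, regnum, result_bytes, pred_mask):
        for byte in range(8):
            write_enable = (pred_mask >> byte) & 1   # one predicate bit per byte lane
            if write_enable:
                regfile[regnum][byte] = result_bytes[byte]

    regfile = {3: bytearray(8)}
    write_with_byte_enables(regfile, 3, bytes(range(8)), pred_mask=0b00001111)
    # regfile[3] == bytearray(b'\x00\x01\x02\x03\x00\x00\x00\x00')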

## General implications and considerations

### OE=1 and SO

XER.SO (sticky overflow) is known to cause massive slowdown in pretty much every microarchitecture and it definitely compromises the performance of out-of-order systems. The reason is that it introduces a READ-MODIFY-WRITE cycle between XER.SO and CR0 (which contains a copy of the SO field after inclusion of the overflow). The result and source registers branch off as RaW and WaR hazards from this RMW chain.
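
To illustrate why the chain serialises, here is a simplified sketch (64-bit add only, not authoritative PowerISA pseudocode) of what an OE=1, Rc=1 operation does: each such instruction reads the previous XER.SO, ORs in its own overflow, writes XER.SO back and copies it into CR0, creating a read-after-write dependency on the XER.SO produced by the preceding OE=1 instruction:

    def addo_record(RA, RB, XER_SO):
        result = (RA + RB) & 0xFFFFFFFFFFFFFFFF
        # signed overflow: operands have the same sign, result has a different one
        ov = ((RA >> 63) == (RB >> 63)) and ((result >> 63) != (RA >> 63))
        XER_SO = XER_SO | int(ov)          # READ-MODIFY-WRITE of the sticky SO bit
        CR0 = compute_cr0(result, XER_SO)  # CR0 receives a copy of SO
        return result, XER_SO, CR0

    def compute_cr0(result, SO):
        lt = (result >> 63) & 1            # negative
        gt = int(result != 0 and not lt)   # positive
        eq = int(result == 0)              # zero
        return (lt << 3) | (gt << 2) | (eq << 1) | int(SO)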

This is even before predication or vectorisation are added on top, i.e. these are existing weaknesses in OpenPOWER as a scalar ISA.

Given that these are well-known weaknesses that compromise performance, and that very little use of OE=1 is actually made outside of unit tests and Conformance Tests, it makes very little sense to continue to propagate OE=1 in the Vectorisation context of SV.

### Vector Chaining

(see [[masked_vector_chaining]])

One of the design principles of SV is that the use of VL should be as closely equivalent to a direct substitution of the scalar operations of the hardware for-loop as possible, as if those looped operations were actually in the instruction stream (as scalar operations) rather than being issued from the Vector loop.

The implications here are that *register dependency hazards still have to be respected inter-element*.

Using a multi-issue out-of-order engine as the underlying microarchitectural basis this is not as difficult to achieve as it first seems (the hard work having been done by the Dependency Matrices). In addition, Vector Chaining should also be possible for a multi-issue out-of-order engine to cope with, as long as false (unnecessary) Dependency Hazards are not introduced in between Vectors, where the dependencies actually only exist between elements *in* the Vector.

The concept of recognising that it is the elements within the Vector that have Dependency Hazards, rather than the Vectors themselves, is what permits Cray-style "chaining".
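
A minimal sketch of the distinction (illustrative only, with hypothetical helper names): with element-level hazard tracking, an operation that consumes element i of its source Vector may issue as soon as that one element has been written; a vector-level hazard forces it to wait for the whole Vector:

    # sketch: element-level hazard tracking is what permits Cray-style
    # chaining.  completed[v] is the set of element indices already
    # written for vector register v.
    def can_issue_element(completed, src_vector, element_index):
        # true dependency: only this element of the source is needed
        return element_index in completed[src_vector]

    def can_issue_after_whole_vector(completed, src_vector, VL):
        # false/unnecessary dependency: the *entire* source vector must
        # be complete before any dependent element may issue
        return all(i in completed[src_vector] for i in range(VL))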

This "false/unnecessary hazard" condition either eliminates chaining, compromises performance, or drives up resource utilisation in at least two of the proposals below.

# Proposals

## Adding new predicate register file type and associated opcodes

This idea, adding new predicate manipulation opcodes,
violates the fundamental design principle of SV not to add
new vector-related instructions unless essential or compelling.

All other proposals utilise existing scalar opcodes which already happen to have bitmanipulation, arithmetic, and inter-file transfer capability (mfcr, mfspr etc).
They also involve adding extra bitmanip opcodes, such that by utilising those scalar registers as predicate masks SV achieves "par" with other Cray-style Vector ISAs, all without actually having to add any actual Vector opcodes.

In addition those bitmanip operations, although some of them are obscure and unusual in the scalar world, do actually have practical applications outside of a vector context.

Adding a full set of special vector opcodes just for manipulating predicate masks and being able to transfer them to other regfiles (a la mfcr) is however anomalous, costly, and unnecessary.

## CR-based predication proposal

This involves treating each CR as providing one bit of predicate. If
there is limited space in SVPrefix it will be a fixed bit (bit 0),
otherwise it may be selected (bit 0 to 3 of the CR) through a field in the opcode.
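
A minimal sketch of that per-element bit selection (illustrative only; the 2-bit selector `bitsel` is a hypothetical name for the opcode field):

    # sketch: each element's predicate bit is one selected bit (0=LT,
    # 1=GT, 2=EQ, 3=SO) of the CR field associated with that element.
    # assumes each 4-bit CR field is stored with LT as its MSB.
    def predicate_bit(cr_regfile, element_index, bitsel=0):
        cr_field = cr_regfile[element_index]     # one 4-bit CR per element
        return (cr_field >> (3 - bitsel)) & 1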

The crucial advantage of this proposal is that the Function Units can
have one more register (a CR) added as their Read Dependency Hazards
just like all the other incoming source registers, and there is no need
for a special "Predicate Shadow Function Unit".

A big advantage of this is that unpredicated operations just set the
predicate to an immediate of all 1s and the actual ALUs require very
little modification.

A disadvantage is that supporting the selection of 8 bits of predicate
from 8 CRs (via the "full" 8x CR port) would require allocating a 32-bit
datapath to the relevant FUs. This could be reduced by adding yet another
type of special virtual register port or datapath that masks out the
required predicate bits closer to the regfile.

Another disadvantage is that the CR regfile needs to be expanded from 8x 4-bit CRs to a minimum of 64x or preferably 128x 4-bit CRs. Beyond that they can be transferred using vectorised mfcr and mtcrf into INT regs. This is a huge number of CR regs, each of which will need a DM column in the FU-REGs Matrix. However this cost can be mitigated through regfile caching, bringing FU-REGs column numbers back down to "sane".

### Predicated SIMD HI32-LO32 FUs

An analysis of changing the element widths (for SIMD) gives the following
potential arrangements, for which it is assumed that 2x 32-bit FUs
"pair up" for single 64-bit arithmetic, HI32 and LO32 style.

* 64-bit operations. 2 FUs and their DM rows "collaborate"
  - 2x 32-bit source registers gang together for 64-bit input
  - 2x 32-bit output registers likewise for output
  - 1x CR (from the LO32 FU DM side) for a predicate bit
* 32-bit operations. 2 FUs collaborate 2x32 SIMD style
  - 2x 32-bit source registers go into separate input halves of the
    SIMD ALU
  - 2x 32-bit outputs likewise for output
  - 2x CRs (one for HI32, one for LO32) for a predicate bit for each of
    the 2x32-bit SIMD pair
* 16-bit operations. 2 FUs collaborate 4x16 SIMD style
  - 2x 2x16-bit source registers group together to provide 4x16 inputs
  - likewise for outputs
  - EITHER 2x 2xCRs (2 for HI32, 2 for LO32) provide 4 predicate bits
  - OR 1x 8xCR "full" port is utilised (on LO32 FU) followed by masking
    at the ALU behind the FU pair, extracting the required 4 predicate bits
* 8-bit operations. 2 FUs collaborate 8x8 SIMD style
  - 2x 4x8-bit source registers
  - likewise for outputs
  - 1x 8xCR "full" port is utilised (on LO32 FU) and all 8 bits are
    passed through to the underlying 64-bit ALU to perform 8x 8-bit
    predicated operations

### Predicated SIMD straight 64-bit FUs

* 64-bit operations. 1 FU, 1x 64-bit operation
  - 1x 64-bit source register
  - 1x 64-bit output register
  - 1x CR for a predicate bit
* 32-bit operations. 1 FU 2x32 SIMD style
  - 1x 64-bit source register dynamically splits to 2x 32-bit
  - 1x 64-bit output likewise
  - 2x CRs for a predicate bit for each of the 2x32-bit SIMD pair
* 16-bit operations. 1 FU 4x16 SIMD style
  - 1x 4x16-bit source register
  - likewise for outputs
  - 1x 8xCR "full" port is utilised followed by masking at the ALU behind
    the FU, extracting the required 4 predicate bits
* 8-bit operations. 1 FU 8x8 SIMD style
  - 1x 8x8-bit source register
  - likewise for outputs
  - 1x 8xCR "full" port is utilised and all 8 bits used
    to perform 8x 8-bit predicated operations

Here again the underlying 64-bit ALU requires the 8x predicate bits to
cover the 8x8-bit SIMD operations (7 of which are dormant/unused in 64-bit
predicated operations but still have to be there to cover 8x8-bit SIMD).
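
As a rough illustration of which predicate bits drive which byte lanes of the underlying 64-bit ALU at each element width (a sketch only):

    # sketch: map each of the 8 predicate bits onto the byte lanes it
    # controls, for a given element width override.
    def lane_map(elwidth_bits):
        num_elements = 64 // elwidth_bits        # 1, 2, 4 or 8 elements
        bytes_per_element = elwidth_bits // 8
        return {pbit: list(range(pbit * bytes_per_element,
                                 (pbit + 1) * bytes_per_element))
                for pbit in range(num_elements)}

    # at 16-bit elwidth, 4 predicate bits each cover 2 byte lanes;
    # at 64-bit, a single predicate bit covers all 8 (7 bits dormant).
    print(lane_map(16))   # {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}
    print(lane_map(64))   # {0: [0, 1, 2, 3, 4, 5, 6, 7]}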

Given that the initial idea of using the "full" (virtual) 32-bit CR read
port (which reads all 8 CRs CR0-CR7 simultaneously) would require a
32-bit broadcast bus to every predication-capable Function Unit, the bus
bandwidth can again be reduced by performing the selection of the masks
(bit 0 thru bit 3 of each CR) closer to the regfile i.e. before hitting
the broadcast bus.
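
A sketch of that selection step (illustrative only): the 32-bit "full" CR port value is reduced to an 8-bit mask before it reaches the broadcast bus, by picking one selected bit out of each 4-bit CR field:

    # sketch: reduce a 32-bit read of CR0-CR7 down to 8 predicate bits
    # close to the regfile, so only 8 bits travel on the broadcast bus.
    # assumes CR0 occupies the most significant nibble and that bit 0
    # (LT) is the MSB of each 4-bit field.
    def select_mask(cr_port_32bit, bitsel=0):
        mask = 0
        for field in range(8):
            cr_field = (cr_port_32bit >> (4 * (7 - field))) & 0xF
            bit = (cr_field >> (3 - bitsel)) & 1
            mask |= bit << field
        return mask   # 8-bit mask: one predicate bit per CR field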

## One scalar int per predicate element

Similar to RVV and similar to the one-CR-per-element concept above, the idea here is to use the LSB of any given element in a vector of predicates. This idea has quite a lot of merit to it.

Implementation-wise, just like in the CR-based case, a special regfile port could be added that gets the LSB of each scalar integer register and routes them through to the broadcast bus.

The disadvantages appear on closer analysis:

* Unlike the "full" CR port (which reads 8x CRs CR0-7 in one hit), trying the same trick on the scalar integer regfile, to obtain just 8 predicate bits (each being an LSB of a given 64-bit scalar int), would require a whopping 8x 64-bit set of reads to the INT regfile instead of a scant 1x 32-bit read. Resource-wise, then, this idea is expensive.
* With predicate bits being distributed out amongst 64-bit scalar registers, scalar bitmanipulation operations that can be performed after transferring Vectors of CMP operations from CRs to INTs (vectorised-mfcr) are more challenging and costly. Rather than use vectorised mfcr, complex transfers of the LSBs into a single scalar int are required.

In a "normal" Vector ISA this would be solved by adding opcodes that perform the kinds of bitmanipulation operations normally needed for predicate masks, as specialist operations *on* those masks. However for SV the rule has been set: "no unnecessary additional Vector Instructions", because it is possible to use existing PowerISA scalar bitmanip opcodes to cover the same job.

The problem is that vectors of LSBs need to be transferred *to* scalar int regs, bitmanip operations carried out, *and then transferred back*, which is exceptionally costly.
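
To make that cost concrete, a sketch (with hypothetical helper names) of the round-trip that every predicate manipulation would require: pack VL LSBs into one scalar int, do the (cheap) bitmanip, then unpack the result back out to VL element registers:

    # sketch: predicate stored as one LSB per 64-bit element register.
    def pack_lsbs(elements):
        mask = 0
        for i, el in enumerate(elements):
            mask |= (el & 1) << i            # one register read per single bit
        return mask

    def unpack_lsbs(mask, num_elements):
        return [(mask >> i) & 1 for i in range(num_elements)]

    vec_a = [1, 0, 3, 4, 5, 0, 7, 1]         # example elements (LSB = predicate bit)
    vec_b = [0, 1, 1, 0, 1, 0, 0, 1]
    pred_a = pack_lsbs(vec_a)                # VL reads just to gather VL bits
    pred_b = pack_lsbs(vec_b)                # VL more reads
    combined = pred_a & ~pred_b              # the actual (cheap) scalar bitmanip
    vec_c = unpack_lsbs(combined, len(vec_a))   # VL writes to scatter the bits back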

On balance this is a less favourable option than vectorising CRs.

## Scalar (single) integer as predicate, with one DM row

This idea has merit in that, to perform predicate bitmanip operations, the predicate is already in scalar INT reg form and consequently standard scalar INT bitmanip operations can be done straight away. Vectorised mfcr can be used to get CMP results or Vectorised Rc=1 CRs into the scalar INT, easily.
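
A sketch of why this is convenient (illustrative only): once the whole mask sits in a single scalar integer register, ordinary scalar operations (and, andc, or, popcount and so on) manipulate it directly, with no new Vector opcodes required:

    # sketch: the predicate mask is an ordinary 64-bit scalar integer.
    mask_lt  = 0b00101101      # e.g. from a vectorised "less than" compare
    mask_bad = 0b00001001      # e.g. some other per-element condition

    pred = mask_lt & ~mask_bad                # andc: "less-than AND NOT bad"
    active = bin(pred).count("1")             # popcount: elements that will execute
    first = (pred & -pred).bit_length() - 1 if pred else None   # find first set bit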

This idea has several disadvantages.

* The single DM entry for the entire 64 bits creates a read hazard
  that has to be resolved through the addition of a special Shadowing
  Function Unit. Only when the entire predicate is available can the
  die-cancel/ok be pulled on the FU elements each bit covers.
* This situation is exacerbated if one vector creates a predicate
  mask that is then used to mask immediately following instructions.
  Ordinarily (i.e. without the predicate involved), Cray-style "chaining"
  would be possible. The single DM entry for the entire predicate mask
  prohibits this because the subsequent operations can only proceed when
  the *entire* mask has been computed and placed in full
  into the scalar integer register.
* Allocation of bits to FUs gets particularly complex for SIMD (elwidth
  overrides), requiring shift and mask logic that is simply not needed
  compared to "one-for-one" schemes (above).

Overall there is very little in favour of this concept.

## Scalar (single) integer as predicate with one DM row per bit

The Dependency Matrix logic from the CR proposal applies equally
favourably to this proposal. However there are additional caveats that
weigh against it:

* Like the single scalar DM entry proposal, the integer scalar register
  has to be covered also by a single DM entry (for when it is used *as*
  an integer register).
* Unlike the same, it must also be covered by a 64-wide suite of bit-level
  Dependency Matrix Rows. These numbers are so massive as to cause some
  concern.
* A solution is to introduce a virtual register naming scheme, however
  this also introduces huge complexity as the register cache has to be
  capable of swapping reservations from 64 bit-level to full 64-bit scalar
  level *and* keep the Dependency Matrices synchronised.

It is enormously complex and likely to result in debugging, verification
and ongoing maintenance difficulties.

## Schemes which split (a scalar) integer reg into mask "chunks"

These ideas are based on the principle that each chunk of 8 (or 16)
bits of a scalar integer register may be covered by its own DM column
in FU-REGs.
8 chunks of a scalar 64-bit integer register for use as a bit-level
predicate mask onto 64 vector elements would for example require 8
DM entries.
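
A sketch of the mapping (illustrative only): with 8-bit chunks, the predicate bit for a given element lives in chunk element//8, and it is that chunk (rather than the whole 64-bit register, or the individual bit) which receives a DM column:

    # sketch: one FU-REGs DM column per 8-bit chunk of the predicate reg.
    CHUNK_BITS = 8

    def chunk_for_element(element_index):
        return element_index // CHUNK_BITS    # which DM column covers this element

    def bit_within_chunk(element_index):
        return element_index % CHUNK_BITS     # which bit of that chunk is needed

    # a 64-element vector touches all 8 chunks (8 DM entries);
    # an 8-element vector touches only chunk 0 (a single DM entry).
    print(sorted({chunk_for_element(i) for i in range(64)}))   # [0, 1, 2, 3, 4, 5, 6, 7]
    print(sorted({chunk_for_element(i) for i in range(8)}))    # [0]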

This would, for vector sizes of 8, solve the "chaining" problem reasonably
well even when two FUs (or two clock cycles) were required to deal with
4 elements at a time. The "compare" that generated the predicate would
be ready to go into the first "chunk" of predicate bits whilst the second
compare was still being issued.

It would also require much smaller DMs than the single-bit-per-element
ideas.

The problems start when trying to allocate bits of predicate to units.
Just like the single-DM-row per entire scalar reg case, a shadow-capable
Predicate Function Unit is now required (already determined to be costly),
except now, if there are 8 chunks requiring 8 Predicate FUs, *the problem
is made 8x worse*.

Not only that, but it is even more complex when trying to bring in virtual
register caching in order to bring down the overall FU-REGs DM row count,
although the numbers are much lower: 8x 8-bit chunks of scalar int
only require 8 DM Rows and 8 virtual subdivisions; however *this is per
in-flight register*.

The additional complexity of the cross-over point between use as a chunked
predicate mask and when the same underlying register is used as an actual
scalar (or even vector) integer register is also carried over from the
bit-level DM subdivision case.

Out-of-order systems, to be effective, require several operations to
be "in-flight" (POWER10 has up to 1,000 in-flight instructions) and if
every predicated vector operation needed one 8-chunked scalar register
each, things become exceedingly complex very quickly.

Even more than that, in a predicated chaining scenario, when computing
the mask from a vector "compare", the groupings are troublesome to
think through and implement, which is itself a bad sign. It is
suspected that chaining will be complex or adversely affected by certain
combinations of element width.

Overall this idea, which initially seems to save resources, brings together
and requires all of the least favourable implementation aspects of the
other proposals.