# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574>
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
* <https://bugs.libre-soc.org/show_bug.cgi?id=697>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO and OV behaviour is disregarded (including
in `cmp` instructions). XER is simply neither read nor written.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard them even for other Subsets,
but only for SVP64 Prefixed Operations.

An interesting side-effect of this decision is that the OE flag is now
free for other uses when SV Prefixing is used, and CR.SO may likewise
be used for other purposes (saturation, for example).

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add
may be performed by setting VL=8, and a one-instruction
1024-bit add by setting VL=16, and so on.
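
The chained-carry behaviour can be modelled in plain Python (a sketch,
not part of the specification; the register values and VL=4 are
purely illustrative):

```python
MASK64 = (1 << 64) - 1

def adde(a, b, carry_in):
    """Model of scalar adde: 64-bit add with carry-in and carry-out."""
    total = a + b + carry_in
    return total & MASK64, total >> 64  # (result, carry-out)

def sv_adde(ra_vec, rb_vec, carry_in=0):
    """Model of sv.adde with VL=len(ra_vec): a hardware for-loop of
    scalar adde operations executed in precise Program Order."""
    result = []
    carry = carry_in
    for a, b in zip(ra_vec, rb_vec):
        r, carry = adde(a, b, carry)
        result.append(r)
    return result, carry  # final carry is the "257th bit" when VL=4

# 256-bit big-integer add from four 64-bit limbs (least-significant first)
a = [MASK64, MASK64, MASK64, 0]   # the 256-bit value 2**192 - 1
b = [1, 0, 0, 0]                  # plus one
limbs, ca = sv_adde(a, b)
# reassemble and check against plain Python bignum addition
big = sum(l << (64 * i) for i, l in enumerate(limbs)) | (ca << 256)
assert big == sum(l << (64 * i) for i, l in enumerate(a)) + 1
```

The carry ripples through all four element operations exactly as the
scalar `adde` chain above describes.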

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

As mentioned above, OE=1 is not applicable in SV, freeing this bit for
alternative uses. Additionally, Vectorisation of the VSX SIMD system
likewise makes no sense whatsoever. SV *replaces* VSX and provides,
at the very minimum, predication (which VSX was designed without).
Thus all VSX Major Opcodes - all of them - are "unused" and must raise
illegal instruction exceptions in SV Prefix Mode.

Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
retain: not only are they provided by SV, the SV alternatives may
be predicated as well, making them far better suited to use in function
calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B,
Table 9: Primary Opcode Map (opcode bits 0:5).

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       | tdi   | twi   | EXT04 |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   | lmw   | stmw  | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 | lq     | EXT57 | EXT58 | EXT59 | EXT60 | EXT61  | EXT62 | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

```
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
000 |        |       |       |       |       |        |       | mulli | 000
001 | subfic |       | cmpli | cmpi  | addic | addic. | addi  | addis | 001
010 | bc/l/a |       |       | EXT19 | rlwimi| rlwinm |       | rlwnm | 010
011 | ori    | oris  | xori  | xoris | andi. | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu  | lbz   | lbzu  | stw   | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu  | lha   | lhau  | sth   | sthu   |       |       | 101
110 | lfs    | lfsu  | lfd   | lfdu  | stfs  | stfsu  | stfd  | stfdu | 110
111 |        |       | EXT58 | EXT59 |       | EXT61  |       | EXT63 | 111
    | 000    | 001   | 010   | 011   | 100   | 101    | 110   | 111
```

It is important to note that having a v3.0B Scalar opcode that differs
from its SVP64 counterpart is highly undesirable: the complexity
in the decoder is greatly increased.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication) leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:

* [[openpower/opcode_regs_deduped]]

Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode
which is machine-readable and found in [[openpower/isatables]]. The
analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
for example is given a designation `RM-2R-1W` because it requires
two GPR reads and one GPR write.

Secondly, the total number of registers was added up (2R-1W is 3 registers)
and if less than or equal to three then that instruction could be given an
EXTRA3 designation. Four or more is given an EXTRA2 designation because
there are only 9 bits available.
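
The EXTRA2/EXTRA3 decision can be sketched as a trivial classifier
(illustrative only; the function name and its read/write-count inputs
are assumptions, not the canonical sv_analysis.py interface):

```python
def extra_designation(reads, writes):
    """Pick EXTRA3 (3 bits per register) when at most 3 registers
    need qualifying, otherwise EXTRA2 (2 bits per register), because
    only 9 EXTRA bits are available in the SVP64 prefix."""
    total = reads + writes
    if total <= 3:
        return "EXTRA3"   # 3 registers x 3 bits = 9 bits
    return "EXTRA2"       # 4 registers x 2 bits = 8 bits (fits in 9)

# `add RT, RA, RB` is 2R-1W: three registers, so EXTRA3
assert extra_designation(2, 1) == "EXTRA3"
# a 3R-1W operation (e.g. fmadd-style) forces EXTRA2
assert extra_designation(3, 1) == "EXTRA2"
```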

Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2 or 3 operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.

Fourthly, a packing format was decided. For 2R-1W with EXTRA3, for
example, an indexing could be decided
such that RA would be indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
(because it is possible to do, and perceived to be useful). Rc=1
co-results (CR0, CR1) are always given the same EXTRA index as their
main result (RT, FRT).
Fifthly, in an automated process the results of the analysis
were output in CSV Format in machine-readable form for use
by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of
one single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep: unlike Twin-Predication the two must be equal
at all times.
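
The lockstep srcstep/dststep rule can be modelled as follows (a sketch;
predicate semantics are simplified and zeroing is omitted):

```python
def single_predicated(op, src, dst, mask, VL):
    """Single Predication: one predicate bit governs both the source
    and the destination element; srcstep and dststep are always equal."""
    srcstep = dststep = 0
    for i in range(VL):
        if (mask >> i) & 1:
            dst[dststep] = op(src[srcstep])
        # both steps increment together, whether masked out or not
        srcstep += 1
        dststep += 1
    return dst

# elements 0 and 2 are enabled; 1 and 3 are left untouched
out = single_predicated(lambda x: x + 1, [1, 2, 3, 4], [0] * 4, 0b0101, 4)
assert out == [2, 0, 4, 0]
```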

# Twin Predication

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction.
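
The independent stepping of source and destination that makes these
patterns possible can be sketched as follows (an illustrative model
only; zeroing and sub-vectors are omitted, and `smask`/`dmask` are the
two predicate masks):

```python
def twin_predicated(op, src, dst, smask, dmask, VL):
    """Twin Predication: srcstep advances over bits set in the source
    mask, dststep over bits set in the destination mask.  A single-bit
    smask gives VSPLAT-like behaviour; dense masks on one side give
    VCOMPRESS/VEXPAND behaviour."""
    srcstep = dststep = 0
    for _ in range(VL):
        # skip over masked-out source and destination elements
        while srcstep < VL and not (smask >> srcstep) & 1:
            srcstep += 1
        while dststep < VL and not (dmask >> dststep) & 1:
            dststep += 1
        if srcstep >= VL or dststep >= VL:
            break
        dst[dststep] = op(src[srcstep])
        srcstep += 1
        dststep += 1
    return dst

# VCOMPRESS-style: gather elements 1 and 3 into contiguous slots 0 and 1
r = twin_predicated(lambda x: x, [10, 20, 30, 40], [0] * 4,
                    smask=0b1010, dmask=0b0011, VL=4)
assert r == [20, 40, 0, 0]
```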

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` instruction (if put on steroids).

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.
## Scalar result reduction and iteration

Scalar Reduction per se does not exist; instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by programmer convention, to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.
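
The effect of the RG bit on a scalar-destination mapreduce can be
modelled as follows (a sketch; predication is omitted, and string
concatenation stands in for any order-sensitive operation such as
floating-point addition):

```python
def scalar_mapreduce(op, acc_init, vec, VL, reverse_gear=False):
    """Scalar-destination mapreduce: the accumulator is overwritten by
    every element operation, in Program Order.  With RG (reverse gear)
    set, the element schedule simply runs from VL-1 down to 0."""
    steps = range(VL - 1, -1, -1) if reverse_gear else range(VL)
    acc = acc_init
    for i in steps:
        acc = op(acc, vec[i])
    return acc

# an associative op (integer add) gives the same result either way
assert scalar_mapreduce(lambda a, b: a + b, 0, [1, 2, 3, 4], 4) == 10
assert scalar_mapreduce(lambda a, b: a + b, 0, [1, 2, 3, 4], 4,
                        reverse_gear=True) == 10
# an order-sensitive op shows the reversed element schedule
cat = lambda a, b: a + str(b)
assert scalar_mapreduce(cat, "", [1, 2, 3, 4], 4) == "1234"
assert scalar_mapreduce(cat, "", [1, 2, 3, 4], 4, reverse_gear=True) == "4321"
```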

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).
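
The CR-driven truncation rule can be modelled as follows (a sketch:
the CR test is reduced to a single boolean per element, and the
interaction of VLi with Rc is simplified):

```python
def ffirst(op, test, src, dst, VL, VLi=False):
    """Data-dependent fail-on-first: execute elements in Program Order;
    on the first element whose CR test fails, truncate VL to the
    previous element, or to the current one when VLi is set."""
    for i in range(VL):
        result = op(src[i])
        if not test(result):
            if VLi:
                dst[i] = result
                return i + 1   # new VL includes the failing element
            return i           # new VL excludes it; may even be zero
        dst[i] = result
    return VL

# truncate at the first zero element (strlen-style loop)
data = [5, 3, 0, 7]
out = [None] * 4
new_VL = ffirst(lambda x: x, lambda r: r != 0, data, out, 4)
assert new_VL == 2 and out[:2] == [5, 3]
# VLi: keep the terminating zero as well (strncpy-style)
assert ffirst(lambda x: x, lambda r: r != 0, data, out, 4, VLi=True) == 3
```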

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to achieve high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SV.STATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST not truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].

Pred-result mode may not be applied to CR ops.

Although CR operations (mtcr, crand, cror) may be Vectorised
and predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`,
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3:4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)     # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA
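
The spec-application pseudocode above can be exercised directly in
Python (a sketch; `spec` is assumed to be a 3-bit integer whose most
significant bit corresponds to `spec[0]` in the BE bit-numbering of
the pseudocode):

```python
def sv_cr_spec(BA, spec):
    """Apply a 3-bit EXTRA spec to a 5-bit BA operand, returning the
    extended 7-bit encoding per the pseudocode above.  spec's MSB is
    the scalar/vector flag; its low two bits are spec[1:2]."""
    sv_flag = (spec >> 2) & 1     # spec[0] in BE notation
    spec12 = spec & 0b11          # spec[1:2], the low two bits
    if sv_flag:
        # vector: "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2) << 6) | (spec12 << 4) | (BA & 0b11)
    # scalar: "00 spec[1:2] BA[0:4]"
    return (spec12 << 5) | BA

# vector spec 0b110 applied to BA=0: hi bits 0, spec[1:2]=0b10 -> 0b0100000
assert sv_cr_spec(0b00000, 0b110) == 0b0100000
# scalar spec 0b001 applied to BA=0b00011: 0b01 prepended -> 0b0100011
assert sv_cr_spec(0b00011, 0b001) == 0b0100011
```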

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
627
628 Note here that the decoding pattern to determine CR\_bit does not change.
629
630 Note: high-performance implementations may read/write Vectors of CRs in
631 batches of aligned 32-bit chunks (CR0-7, CR7-15). This is to greatly
632 simplify internal design. If instructions are issued where CR Vectors
633 do not start on a 32-bit aligned boundary, performance may be affected.
634
635 ## CR fields as inputs/outputs of vector operations
636
637 CRs (or, the arithmetic operations associated with them)
638 may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations that have no explicit EXTRA to cover the CR, the CR is Vectorised if the destination is Vectorised. Likewise if the destination is scalar then so is the CR.
639
When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that the CR Fields are genuinely Vectorised. Unlike in SIMD VSX,
which has a single CR (CR6) for a given SIMD result, SV Vectorised
OpenPOWER v3.0B scalar operations produce a **tuple** of element
results: the result of the operation as one part of that element *and a
corresponding CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

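A runnable (non-normative) rendering of this tuple-of-results
behaviour, with CR fields modelled as simple dictionaries and Vector
CRs starting at CR8 as described above:

```python
# Non-normative model: each vector element produces both an arithmetic
# result and a corresponding CR element.

def sv_add_rc1(iregs, RT, RA, RB, vl):
    crs = {}
    for i in range(vl):
        r = iregs[RA + i] + iregs[RB + i]
        iregs[RT + i] = r
        # per-element CR field (simplified: eq/gt/lt only, no SO)
        crs[8 + i] = {'eq': r == 0, 'gt': r > 0, 'lt': r < 0}
    return crs
```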
If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

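As an illustration of such a cumulated analysis, here is a minimal
Python model (bit layout simplified and hypothetical, not normative) of
a crand-style reduction over the EQ bits of a Vector of CR fields,
answering "were all results zero?" a la VSX CR6:

```python
# Hypothetical model of a crand-style mapreduce over Vectorised CRs:
# AND together the EQ bit of CR fields base..base+vl-1.

EQ = 0b0010  # simplified EQ-bit mask within a 4-bit CR field

def all_results_zero(crs, vl, base=8):
    """AND-reduce the EQ bit across a Vector of CR fields."""
    acc = True
    for i in range(vl):
        acc = acc and bool(crs[base + i] & EQ)
    return acc
```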
Additionally, SVP64 [[sv/branches]] may be used, even when the branch
itself is to the following instruction. The combined side-effects of
CTR reduction and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
one bit of predicate is allocated per subvector; likewise only one CR
is allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to
select OR or AND behaviour.

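A sketch of that idea in Python (the OE-selected combine is a proposal
above, not ratified behaviour, so the function below is purely
illustrative):

```python
# Sketch (not normative): combine per-subelement Rc=1 EQ tests into the
# single CR bit allocated per subvector, with OE selecting AND vs OR.

def subvector_cr_eq(results, use_and):
    """Combine the EQ tests of a vec2/3/4 subvector into one bit."""
    tests = [r == 0 for r in results]
    if use_and:
        return all(tests)    # every subelement must pass
    return any(tests)        # at least one subelement passes
```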
### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field
number n, so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how scalar OpenPOWER
context-switches CRs: it is just that there are now more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL)
        {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, each result placed into the Vector (or, if the destination is a
scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

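The stepping rules above can be condensed into a short runnable Python
sketch (non-normative; predication and elwidth omitted), showing how
RT.v = RA.s RB.s splats one result across the destination:

```python
# Non-normative model of the scalar/vector stepping above: offsets only
# advance for vector operands, so with both sources scalar the same
# result is written to every destination element ("splat").

def sv_add(iregs, RT, RA, RB, vl, rt_vec, ra_vec, rb_vec):
    id = irs1 = irs2 = 0
    for _ in range(vl):
        iregs[RT + id] = iregs[RA + irs1] + iregs[RB + irs2]
        if not rt_vec:
            break              # scalar destination: first result only
        if rt_vec: id += 1
        if ra_vec: irs1 += 1
        if rb_vec: irs2 += 1
    return iregs
```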
# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=reduce/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
* spred={reg spec}

This is similar to the x86 "REX" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns OR
  - pm=RC1 OR pm=~RC1
* fail-first:
  - ff=lt/gt/le/ge/eq/ne/so/ns OR
  - ff=RC1 OR ff=~RC1
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Proposed Parallel-reduction algorithm

**This algorithm contains a MV operation and may NOT be used. Removal
of the MV operation may be achieved by using index-redirection as was
achieved in DCT and FFT REMAP**

```
/// reference implementation of proposed SimpleV reduction semantics.
///
// reduction operation -- we still use this algorithm even
// if the reduction operation isn't associative or
// commutative.
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
/// XXX `pred` is a user-visible Vector Condition register XXXX
XXX VIOLATION OF SVP64 DESIGN PRINCIPLES XXXX
///
/// all input arrays have length `vl`
def reduce(vl, vec, pred):
    pred = copy(pred) # must not damage predicate
    step = 1;
    while step < vl
        step *= 2;
        for i in (0..vl).step_by(step)
            other = i + step / 2;
            other_pred = other < vl && pred[other];
            if pred[i] && other_pred
                vec[i] += vec[other];
            else if other_pred
                XXX VIOLATION OF SVP64 DESIGN XXX
                XXX vec[i] = vec[other]; XXX
                XXX VIOLATION OF SVP64 DESIGN XXX
            pred[i] |= other_pred;
```

The first principle in SVP64 being violated is that SVP64 is a
fully-independent Abstraction of hardware-looping in between issue and
execute phases that has no relation to the operation it issues. The
above pseudocode conditionally changes not only the type of element
operation issued (a MV in some cases) but also the number of arguments
(2 for a MV). At the very least, for Vertical-First Mode this will
result in unanticipated and unexpected behaviour (maximising "surprises"
for programmers) in the middle of loops, which would be far too hard
to explain.

The second principle being violated by the above algorithm is the
expectation that temporary storage is available for a modified
predicate: there is no such space, and predicates are read-only to
reduce complexity at the micro-architectural level. SVP64 is founded
on the principle that all operations are "re-entrant" with respect to
interrupts and exceptions: SVSTATE must be saved and restored alongside
PC and MSR, but nothing more. It is perfectly fine to have
context-switching back to the operation be somewhat slower, through
"reconstruction" of temporary internal state based on what SVSTATE
contains, but nothing more.

An alternative algorithm is therefore required that does not perform MVs,
and does not require additional state to be saved on context-switching.

```
def reduce( vl, vec, pred ):
    pred = copy(pred) # must not damage predicate
    j = 0
    vi = [] # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi[j] = i
            j += 1
    step = 2
    while step <= vl
        halfstep = step // 2
        for i in (0..vl).step_by(step)
            other = vi[i + halfstep]
            ir = vi[i]
            other_pred = other < vl && pred[other]
            if pred[i] && other_pred
                vec[ir] += vec[other]
            else if other_pred:
                vi[ir] = vi[other] # index redirection, no MV
            pred[ir] |= other_pred # reconstructed on context-switch
        step *= 2
```

915
916 In this version the need for an explicit MV is made unnecessary by instead
917 leaving elements *in situ*. The internal modifications to the predicate may,
918 due to the reduction being entirely deterministic, be "reconstructed"
919 on a context-switch. This may make some implementations slower.
920
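A runnable Python rendering of the index-redirection idea (simplified
and non-normative: the predicated element indices are compacted up
front rather than reconstructed, and the accumulated result lands in
the first predicated element):

```python
# Simplified, non-normative in-situ tree reduction: no MV operations,
# only accumulation through a table of predicated element indices.

def reduce_insitu(vec, pred):
    vi = [i for i, p in enumerate(pred) if p]  # predicated indices
    n = len(vi)
    step = 1
    while step < n:
        for i in range(0, n, step * 2):
            other = i + step
            if other < n:
                # accumulate in place via index lookup: no element moves
                vec[vi[i]] += vec[vi[other]]
        step *= 2
    return vec
```

After the call the reduced value sits in the first predicated element;
unpredicated elements are untouched.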
*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic for SVP64 is to limit the
Reduction Sequence for all implementors based solely and exclusively on
what one specific internal microarchitecture does. In SIMD ISAs the
internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts. It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques
to transparently insert MV operations if necessary or desired, to give
the level of efficiency or performance required.*