[[!tag standards]]

Obligatory Dilbert:

<img src="https://assets.amuniversal.com/7fada35026ca01393d3d005056a9545d" width="600" />

Links:

* <https://bugs.libre-soc.org/show_bug.cgi?id=213>
* <https://youtu.be/ZQ5hw9AwO1U> walkthrough video (19jun2022)
* <https://ftp.libre-soc.org/simple_v_spec.pdf>
  PDF version of this DRAFT specification

**SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.

===

# Scalable Vectors for the Power ISA

SV is designed as a strict RISC-paradigm
Scalable Vector ISA for Hybrid 3D CPU/GPU/VPU workloads.
As such it brings features normally found only in Cray-style Supercomputers
(Cray-1, NEC SX-Aurora)
and in GPUs, but keeps strictly to a *Simple* RISC principle of leveraging
a *Scalar* ISA, exclusively using "Prefixing". **Not one single actual
explicit Vector opcode exists in SV, at all**. It is suitable for
low-power Embedded and DSP Workloads as much as it is for power-efficient
Supercomputing.

Fundamental design principles:

* Taking the simplicity of the RISC paradigm and applying it strictly and
  uniformly to create a Scalable Vector ISA.
* Effectively a hardware for-loop, pausing the PC and issuing multiple
  scalar operations (see the sketch after this list).
* Preserving the underlying scalar execution dependencies as if the
  for-loop had been expanded as actual scalar instructions
  (termed "preserving Program Order")
* Specifically designed to be Precise-Interruptible at all times
  (many Vector ISAs have operations which, due to higher internal
  accuracy or other complexity, must either be effectively atomic for
  the full Vector operation's duration, adversely affecting interrupt
  response latency, or be abandoned and restarted)
* Augments ("tags") existing instructions, providing Vectorisation
  "context" rather than adding new instructions.
* Strictly does not interfere with or alter the non-Scalable Power ISA
  in any way
* In the Prefix space, does not modify or deviate from the underlying
  scalar Power ISA
  unless doing so provides significant performance or other advantage
  in the Vector space (for example, dropping the "sticky" characteristics
  of XER.SO and CR0.SO)
* Designed for Supercomputing: avoids creating significant sequential
  dependency hazards, allowing standard
  high-performance superscalar multi-issue
  micro-architectures to be leveraged.
* Divided into Compliancy Levels to reduce cost of implementation for
  specific needs.

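To illustrate the first few principles above, here is a minimal
conceptual sketch (Python, purely illustrative: it is *not* the
normative specification pseudocode, and the names `sv_issue`,
`scalar_op`, `regfile` and `state` are invented for this sketch) of
what a single Vectorised, prefixed scalar operation effectively
becomes: a for-loop over elements, issued in Program Order, with
progress held in architectural state so that an interrupt between any
two elements is precise.

    # conceptual sketch only: one prefixed scalar op issued VL times,
    # element by element, preserving Program Order
    def sv_issue(scalar_op, RT, RA, VL, predicate, regfile, state):
        while state.srcstep < VL:
            i = state.srcstep
            if predicate is None or predicate[i]:
                # each element is an ordinary scalar operation, applied
                # to register file entries offset by the element index
                regfile[RT + i] = scalar_op(regfile[RA + i])
            state.srcstep += 1           # progress is architecturally visible,
            if state.interrupt_pending:  # so an interrupt taken here is precise:
                return                   # resume later from srcstep, no rollback
        state.srcstep = 0                # loop complete: the PC may now advance
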
Advantages of these design principles:

* Simplicity of introduction and implementation on top of
  the existing Power ISA without disruption.
* It is therefore easy to create a first (and sometimes only)
  implementation as literally a for-loop in hardware, simulators, and
  compilers.
* Hardware Architects may understand and implement SV as being an
  extra pipeline stage, inserted between decode and issue, that is
  a simple for-loop issuing element-level sub-instructions.
* More complex HDL can be created by repeating existing scalar ALUs and
  pipelines as blocks, leveraging existing Multi-Issue Infrastructure.
* As (mostly) a high-level "context" that does not (significantly) deviate
  from the scalar Power ISA and, in its purest form, is "a for-loop around
  scalar instructions", it is minimally-disruptive and consequently stands
  a reasonable chance of broad community adoption and acceptance.
* Completely wipes opcode proliferation off the map: not just for SIMD
  (which suffers O(N^6) opcode proliferation) but for Vector ISAs as well.
  No more separate Vector instructions.

Comparative instruction count:

* ARM NEON SIMD: around 2,000 instructions, prerequisite: ARM Scalar.
* ARM SVE: around 4,000 instructions, prerequisite: NEON and ARM Scalar
* ARM SVE2: around 1,000 instructions, prerequisite: SVE, NEON, and
  ARM Scalar
* Intel AVX-512: around 4,000 instructions, prerequisite AVX, AVX2,
  AVX-128 and AVX-256, which in turn critically rely on the rest of
  x86, for a grand total of well over 10,000 instructions.
* RISC-V RVV: 192 instructions, prerequisite 96 Scalar RV64GC instructions
* SVP64: **six** instructions, two of which are in the same space
  (svshape, svshape2), with 24-bit prefixing of the
  prerequisite SFS (150 instructions) or
  SFFS (214 instructions) Compliancy Subsets.
  **There are no dedicated Vector instructions, only Scalar-prefixed**.

Comparative Basic Design Principle:

* ARM NEON and VSX: PackedSIMD. No instruction-overloaded meaning
  (every instruction is unique for a given register bitwidth,
  guaranteeing binary interoperability)
* Intel AVX-512 (and below): Hybrid Packed-Predicated SIMD with no
  instruction-overloading, guaranteeing binary interoperability
  but penalising the ISA with uncontrolled opcode proliferation.
* ARM SVE/SVE2: Hybrid Packed-Predicated SIMD with instruction-overloading
  that destroys binary interoperability. This is hidden behind the
  misuse of the word "Scalable".
* RISC-V RVV: Cray-style Scalable Vector but with instruction-overloading
  that destroys binary interoperability.
* SVP64: Cray-style Scalable Vector with no instruction-overloaded
  meanings. The regfile numbers and bitwidths shall **not** change
  in a future revision (for the same instruction encoding):
  "Silicon Partner" Scaling is prohibited,
  in order to guarantee binary interoperability. Future revisions
  of SVP64 may extend VSX instructions to achieve larger regfiles, and
  binary non-interoperability there will likewise be prohibited.

SV comprises several [[sv/compliancy_levels]] suited to Embedded,
Energy-efficient High-Performance Compute, Distributed Computing and Advanced
Computational Supercomputing. The Compliancy Levels are arranged such
that even at the bare minimum Level, full Soft-Emulation of all
optional and future features is possible.

# Sub-pages

Pages being developed, and examples:

* [[sv/executive_summary]]
* [[sv/overview]] explaining the basics.
* [[sv/compliancy_levels]] for minimum subsets through to Advanced
  Supercomputing.
* [[sv/implementation]] implementation planning and coordination
* [[sv/svp64]] contains the packet-format *only*; the [[svp64/appendix]]
  contains explanations and further details
* [[sv/svp64_quirks]] things in SVP64 that slightly break the rules
  or are not immediately apparent despite the RISC paradigm
* [[opcode_regs_deduped]] autogenerated table of SVP64 decoder augmentation
* [[sv/sprs]] SPRs

SVP64 "Modes":

* For Condition Register operations see [[sv/cr_ops]] - SVP64 Condition
  Register ops: Guidelines
  on Vectorisation of any v3.0B base operations which return
  or modify a Condition Register bit or field.
* For LD/ST Modes, see [[sv/ldst]].
* For Branch modes, see [[sv/branches]] - SVP64 Conditional Branch
  behaviour: All/Some Vector CRs
* For arithmetic and logical operations, see [[sv/normal]]
* [[sv/mv.vec]] pack/unpack move to and from vec2/3/4,
  actually an RM.EXTRA Mode and a [[sv/remap]] mode

Core SVP64 instructions:

* [[sv/setvl]] the Cray-style "Vector Length" instruction
  (see the strip-mining sketch below)
* svremap, svindex and svshape: part of [[sv/remap]] "Remapping" for
  Matrix Multiply, DCT/FFT and RGB-style "Structure Packing",
  as well as general-purpose Indexing. Also describes associated SPRs.
* [[sv/svstep]] Key stepping instruction, primarily for
  Vertical-First Mode and also providing traditional "Vector Iota"
  capability.

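Below is a minimal Python sketch of the classic Cray-style
strip-mining pattern that `setvl` enables. It is an illustration of
the concept only (the helper names `set_vl` and `vector_add` are
invented here; the actual `setvl` pseudocode, operands and SPR
behaviour are defined in [[sv/setvl]]): the program chooses MAXVL,
which determines how many registers are set aside, and each pass is
granted a Vector Length of `min(remaining, MAXVL)`.

    # illustrative strip-mining only: setvl-like semantics, not the
    # actual setvl instruction definition
    def set_vl(remaining, MAXVL):
        return min(remaining, MAXVL)

    def vector_add(dst, src1, src2, n, MAXVL=8):
        i = 0
        while i < n:
            VL = set_vl(n - i, MAXVL)   # conceptually: setvl with the count in RA
            for el in range(VL):        # one prefixed scalar add, issued VL times
                dst[i + el] = src1[i + el] + src2[i + el]
            i += VL                     # advance by the granted VL

The same binary handles any element count `n` without SIMD-style
tail-cleanup code, which is the essence of "Scalable".
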
*Please note: there are only six instructions in the whole of SV.
Beyond this point are additional **Scalar** instructions related to
specific workloads that have nothing to do with the SV Specification.*

# Optional Scalar instructions

**Additional Instructions for specific purposes (not SVP64)**

All of the instructions below have nothing to do with SV.
They are all entirely designed as Scalar instructions that, as
Scalar instructions, stand on their own merit. Considerable
effort has been made to provide justifications for each of these
*Scalar* instructions in a *Scalar* context, completely independently
of SVP64.

Some of these Scalar instructions also happen to be designed to make
Scalable Vector binaries more efficient, such
as the crweird group. Others bring the Scalar Power ISA
up-to-date within specific workloads,
such as a Javascript Rounding instruction
(which saves 35 instructions including 5 branches; a sketch of the
conversion it performs is below). None of them is strictly
necessary, but without them performance and power consumption are
(or already have been) compromised in certain workloads and use-cases.

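For context, the conversion such a Javascript Rounding instruction
targets is (approximately) ECMAScript's ToInt32: truncate towards
zero, wrap modulo 2^32, and map NaN and Infinity to zero. The Python
below is a hedged illustration of those semantics only, not the
normative definition of the proposed Power ISA instruction:

    import math

    # illustration of ECMAScript ToInt32 semantics: the sequence of
    # compares, branches and masking that a single scalar instruction
    # would replace
    def js_to_int32(x: float) -> int:
        if math.isnan(x) or math.isinf(x):
            return 0
        n = int(x)                # truncate towards zero
        n &= 0xFFFFFFFF           # wrap modulo 2**32
        if n >= 0x80000000:       # reinterpret as signed 32-bit
            n -= 0x100000000
        return n
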
Vector-related but still Scalar:

* [[sv/mv.swizzle]] vec2/3/4 Swizzles (RGBA, XYZW) for 3D and CUDA,
  designed as a Scalar instruction.
* [[sv/vector_ops]] scalar operations needed for supporting vectors
* [[sv/cr_int_predication]] scalar instructions needed for
  effective predication

Stand-alone Scalar Instructions:

* [[sv/bitmanip]]
* [[sv/fcvt]] FP Conversion (due to OpenPOWER Scalar FP32)
* [[sv/fclass]] detect class of FP numbers
* [[sv/int_fp_mv]] Move and convert GPR <-> FPR, needed for !VSX
* [[sv/av_opcodes]] scalar opcodes for Audio/Video
* TODO: OpenPOWER adaptation of [[openpower/transcendentals]]

Twin-targeted instructions (two registers out, one implicit, just like
Load-with-Update); a conceptual sketch follows below.

* [[isa/svfixedarith]]
* [[isa/svfparith]]
* [[sv/biginteger]] Operations that help with big-integer arithmetic

The rules for twin register targets
(implicit RS, FRS) are explained in the SVP64 [[svp64/appendix]].

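As a conceptual illustration of the twin-target idea, the sketch below
models an operation in the style of [[sv/biginteger]]'s 64-bit
multiply-and-add: the explicitly-named target receives the low word
while a second, implicitly-determined register receives the high word
(the carry), just as Load-with-Update implicitly updates RA. This is a
hedged Python model of the concept only; the actual derivation of the
implicit RS/FRS register number is defined in the [[svp64/appendix]].

    # conceptual model of a twin-target ("two registers out, one implicit")
    # operation: a 64-bit multiply-and-add returning low and high words.
    # the implicit second destination is shown as a second return value.
    MASK64 = (1 << 64) - 1

    def madd_twin(RA, RB, RC):
        product = RA * RB + RC
        RT = product & MASK64     # low 64 bits  -> explicit target RT
        RS = product >> 64        # high 64 bits -> *implicit* target RS
        return RT, RS

    # chaining the implicit high word into the next element's addend is
    # what makes Vectorised big-integer carry-propagation loops efficient
    lo, carry = madd_twin(1 << 63, 4, 5)    # lo = 5, carry = 2
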
# Other Scalable Vector ISAs

These Scalable Vector ISAs are listed to aid understanding and provide
context for what is involved.

* Original Cray ISA
  <http://www.bitsavers.org/pdf/cray/CRAY_Y-MP/HR-04001-0C_Cray_Y-MP_Computer_Systems_Functional_Description_Jun90.pdf>
* NEC SX Aurora (still in production, inspired by Cray)
  <https://www.hpc.nec/documents/guide/pdfs/Aurora_ISA_guide.pdf>
* RISC-V RVV (inspired by Cray)
  <https://github.com/riscv/riscv-v-spec>
* MRISC32 ISA Manual (under active development)
  <https://github.com/mrisc32/mrisc32/tree/master/isa-manual>
* Mitch Alsup's MyISA 66000 Vector Processor ISA Manual is available from
  Mitch on request.

A comprehensive list of 3D GPU, Packed SIMD, Predicated-SIMD and true Scalable
Vector ISAs may be found at the [[sv/vector_isa_comparison]] page.
Note: AVX-512 and SVE2 are *not Vector ISAs*; they are Predicated-SIMD.
*Public discussions have taken place at Conferences attended by both Intel
and ARM on adding a `setvl` instruction, which would easily make both
AVX-512 and SVE2 truly "Scalable".* A comparison in tabular form is at
[[sv/comparison_table]].

# Major opcodes summary <a name="major_op_summary"> </a>

Simple-V itself only requires six instructions with 6-bit Minor XO
(bits 26-31), and the SVP64 Prefix Encoding requires
25% of the EXT001 Major Opcode space.
There are **no** Vector Instructions and consequently **no further
opcode space is required**. Even though they are currently
placed in the EXT022 Sandbox, the "Management" instructions
(setvl, svstep, svremap, svshape, svindex) are designed to fit
cleanly into EXT019 (exactly like `addpcis`) or another 5/6-bit Minor
XO area (bits 25-31) that has space for Rc=1.

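The 25% figure can be sanity-checked from the numbers already given
(assuming, purely as a back-of-envelope illustration here, that the
32-bit prefix word consists of the 6-bit Primary Opcode, the 24-bit
SVP64 prefix payload, and whatever identification bits remain):

    # back-of-envelope check of the "25% of EXT001" figure
    prefix_bits = 32
    po_bits     = 6                                  # Primary Opcode (EXT001)
    rm_bits     = 24                                 # SVP64 24-bit prefix payload
    id_bits     = prefix_bits - po_bits - rm_bits    # = 2 identification bits
    fraction    = 1 / (1 << id_bits)                 # = 0.25, i.e. 25% of EXT001
    print(id_bits, fraction)
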
That said, for the target workloads for which Scalable Vectors are typically
used, the Scalar ISA on which those workloads critically rely
is somewhat anaemic.
The Libre-SOC Team has therefore been addressing that by developing
a number of Scalar instructions in specialist areas (Big Integer,
Cryptography, 3D, Audio/Video, DSP) and it is these which require
considerable Scalar opcode space.

Please be advised that even though SV is entirely DRAFT status, there
is considerable concern that, because there is not yet any two-way
day-to-day communication established with the OPF ISA WG, we have
no idea whether any of these conflict with future plans by any OPF
Members. **The External ISA WG RFC Process is yet to be ratified,
and Libre-SOC may not join the OPF as an entity because Libre-SOC does
not exist except in name. Even if it existed it would be a conflict
of interest to join the OPF, due to our funding remit from NLnet**.
We therefore proceed on the basis of making public the intention to
submit RFCs once the External ISA WG RFC Process is in place and,
in a wholly unsatisfactory manner, have to *hope and trust* that
OPF ISA WG Members are reading this and take it into consideration.

**Scalar Summary**

As in the sections above, it is emphasised strongly that Simple-V in no
way critically depends on the 100 or so *Scalar* instructions also
being developed by Libre-SOC.

**None of these Draft opcodes are intended for private custom
secret proprietary usage. They are all intended for entirely
public, upstream, high-profile mass-volume day-to-day usage at the
same level as add, popcnt and fld.**

* bitmanip requires two major opcodes (due to 16+ bit immediates);
  those are currently EXT022 and EXT005.
* brownfield encoding in one of those two major opcodes still
  requires multiple VA-Form operations (in greater numbers
  than EXT004 has spare)
* space in EXT019 next to addpcis and the CR ops is recommended
  (or any other 5/6-bit Minor XO area)
* many X-Form opcodes currently in EXT022 have no preference
  for a location at all, and may be moved to EXT059, EXT019,
  EXT031 or another much more suitable location.
* even if ratified, and even if the majority (mostly X-Form)
  is moved to other locations, the large immediate sizes of
  the remaining bitmanip instructions mean
  that these remaining instructions would very likely still need two
  major opcodes. Fortuitously the v3.1 Spec states that
  both EXT005 and EXT009 are
  available.

**Additional observations**

Note that there is no Sandbox allocation in the published ISA Spec for
v3.1 EXT001 usage, and because SVP64 is already 64-bit Prefixed,
Prefixed-Prefixed-instructions (SVP64-Prefixed v3.1-Prefixed)
would become a whopping 96-bit-long instruction. Avoiding this
situation is a high priority, which in turn by necessity puts pressure
on the 32-bit Major Opcode space.

SVP64 itself is already under pressure, being only 24 bits. If it is
not permitted to take up 25% of EXT001 then it would have to be proposed
in its own Major Opcode, which on first consideration would be beneficial
for SVP64 due to the availability of 2 extra bits.
However, when combined with the bitmanip scalar instructions
requiring two Major opcodes, this would come to a grand total of 3 precious
Major opcodes. On balance, then, sacrificing 25% of EXT001 is the "least
difficult" choice.

Note also that EXT022, the Official Architectural Sandbox area
available for "Custom non-approved purposes" according to the Power
ISA Spec,
is under severe design pressure as it is insufficient to hold
the full extent of the instruction additions required to create
a Hybrid 3D CPU-VPU-GPU. Although the wording of the Power ISA
Specification leaves open the *possibility* of not needing to
propose ISA Extensions to the ISA WG, it is clear that EXT022
is an inappropriate location for a large high-profile Extension
intended for mass-volume product deployment. Every in-good-faith effort will
therefore be made to work with the OPF ISA WG to
submit SVP64 via the External RFC Process.

**Whilst SVP64 is only 6 instructions,
the heavy focus on VSX for the past 12 years has left the SFFS Level
anaemic and out-of-date compared to ARM and x86.**
This is very much
a blessing, as the Scalar ISA has remained clean, making it
highly suited to RISC-paradigm Scalable Vector Prefixing. Approximately
100 additional (optional) Scalar Instructions are up for proposal to bring SFFS
up-to-date. None of them require or depend on PackedSIMD VSX (or VMX).

# Other

Examples, experiments, future ideas and discussion:

* [Scalar register access](https://bugs.libre-soc.org/show_bug.cgi?id=905)
  above r31 and CR7.
* [[sv/propagation]] Context propagation including svp64, swizzle and remap
* [[sv/masked_vector_chaining]]
* [[sv/discussion]]
* [[sv/example_dep_matrices]]
* [[sv/major_opcode_allocation]]
* [[sv/byteswap]]
* [[sv/16_bit_compressed]] experimental
* [[sv/toc_data_pointer]] experimental
* [[sv/predication]] discussion on predication concepts
* [[sv/register_type_tags]]
* [[sv/mv.x]] deprecated in favour of Indexed REMAP

Additional links:

* <https://www.sigarch.org/simd-instructions-considered-harmful/>
* [[sv/vector_isa_comparison]] - a list of Packed SIMD, GPU,
  and other Scalable Vector ISAs
* [[sv/comparison_table]] - a one-off (experimental) table comparing ISAs
* [[simple_v_extension]] old (deprecated) version
* [[openpower/sv/llvm]]