[[!tag standards]]
Links:
* walkthrough video (19jun2022)
**SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.
---
# Scalable Vectors for the Power ISA
SV is designed as a strict RISC-paradigm
Scalable Vector ISA for Hybrid 3D CPU/GPU/VPU workloads.
As such it brings features normally only found in Cray-style Supercomputers
(Cray-1, NEC SX-Aurora)
and in GPUs, but keeps strictly to a *Simple* RISC principle of leveraging
a *Scalar* ISA, exclusively using "Prefixing". **Not one single actual
explicit Vector opcode exists in SV, at all**. It is suitable for
low-power Embedded and DSP Workloads as much as it is for power-efficient
Supercomputing.
Fundamental design principles:
* Taking the simplicity of the RISC paradigm and applying it strictly and
uniformly to create a Scalable Vector ISA.
* Effectively a hardware for-loop: pausing the PC and issuing multiple
scalar operations
* Preserving the underlying scalar execution dependencies as if the
for-loop had been expanded as actual scalar instructions
(termed "preserving Program Order")
* Augments ("tags") existing instructions, providing Vectorisation
"context" rather than adding new instructions.
* Strictly does not interfere with or alter the non-Scalable Power ISA
in any way
* In the Prefix space, does not modify or deviate from the underlying
scalar Power ISA
unless it provides significant performance or other advantage to do so
in the Vector space (dropping the "sticky" characteristics
of XER.SO and CR0.SO for example)
* Designed for Supercomputing: avoids creating significant sequential
dependency hazards, allowing standard
high performance superscalar multi-issue
micro-architectures to be leveraged.
* Divided into Compliancy Levels to reduce cost of implementation for
specific needs.
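The "hardware for-loop" principle above can be sketched conceptually. This is an illustrative Python model only, using a hypothetical flat register file and a fixed element step; it is not the actual SVP64 element-stepping or prefix semantics:

```python
# Conceptual sketch (NOT actual SVP64 semantics) of the "hardware
# for-loop" principle: a Vector-prefixed "add RT,RA,RB" is executed as
# VL independent scalar adds, preserving Program Order exactly as if
# the loop had been expanded into VL scalar instructions.
MASK64 = (1 << 64) - 1

def sv_add(regfile, rt, ra, rb, vl):
    """Expand a prefixed scalar add into vl element-level operations."""
    for i in range(vl):  # the hardware for-loop (PC is paused meanwhile)
        regfile[rt + i] = (regfile[ra + i] + regfile[rb + i]) & MASK64

regs = list(range(32))    # toy 32-entry integer register file
sv_add(regs, 8, 0, 4, 4)  # RT=r8, RA=r0, RB=r4, VL=4
```

Each element operation depends only on its own source registers, which is what allows a multi-issue microarchitecture to execute the elements in parallel while still honouring scalar dependency rules.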
Advantages of these design principles:
* Simplicity of introduction and implementation on top of
the existing Power ISA without disruption.
* It is therefore easy to create a first (and sometimes only)
implementation as literally a for-loop in hardware, simulators, and
compilers.
* Hardware Architects may understand and implement SV as being an
extra pipeline stage, inserted between decode and issue, that is
a simple for-loop issuing element-level sub-instructions.
* More complex HDL can be done by repeating existing scalar ALUs and
pipelines as blocks and leveraging existing Multi-Issue Infrastructure
* As (mostly) a high-level "context" that does not (significantly) deviate
from scalar Power ISA and, in its purest form being "a for loop around
scalar instructions", it is minimally-disruptive and consequently stands
a reasonable chance of broad community adoption and acceptance
* Completely wipes opcode proliferation off the map: not just the
O(N^6) proliferation of Packed SIMD,
but that of Vectorisation ISAs as well. No more separate Vector
instructions.
Comparative instruction count:
* ARM NEON SIMD: around 2,000 instructions, prerequisite: ARM Scalar.
* ARM SVE: around 4,000 instructions, prerequisite: NEON.
* ARM SVE2: around 1,000 instructions, prerequisite: SVE
* Intel AVX-512: around 4,000 instructions, prerequisite AVX, AVX2,
AVX-128 and AVX-256 which in turn critically rely on the rest of
x86, for a grand total of well over 10,000 instructions.
* RISC-V RVV: 192 instructions, prerequisite: 96 Scalar RV64GC instructions
* SVP64: **five** instructions, 24-bit prefixing of
prerequisite SFS (150) or
SFFS (214) Compliancy Subsets
SV comprises several [[sv/compliancy_levels]] suited to Embedded, Energy
efficient High-Performance Compute, Distributed Computing and Advanced
Computational Supercomputing. The Compliancy Levels are arranged such
that even at the bare minimum Level, full Soft-Emulation of all
optional and future features is possible.
# Sub-pages
Pages under development, and examples:
* [[sv/overview]] explaining the basics.
* [[sv/compliancy_levels]] for minimum subsets through to Advanced
Supercomputing.
* [[sv/implementation]] implementation planning and coordination
* [[sv/svp64]] contains the packet-format *only*, the [[svp64/appendix]]
contains explanations and further details
* [[sv/svp64_quirks]] things in SVP64 that slightly break the rules
or are not immediately apparent despite the RISC paradigm
* [[opcode_regs_deduped]] autogenerated table of SVP64 decoder augmentation
* [[sv/sprs]] SPRs
SVP64 "Modes":
* For condition register operations see [[sv/cr_ops]] - SVP64 Condition
Register ops: Guidelines
on Vectorisation of any v3.0B base operations which return
or modify a Condition Register bit or field.
* For LD/ST Modes, see [[sv/ldst]].
* For Branch modes, see [[sv/branches]] - SVP64 Conditional Branch
behaviour: All/Some Vector CRs
* For arithmetic and logical, see [[sv/normal]]
* [[sv/mv.vec]] pack/unpack move to and from vec2/3/4,
actually an RM.EXTRA Mode and a [[sv/remap]] mode
Core SVP64 instructions:
* [[sv/setvl]] the Cray-style "Vector Length" instruction
* [[sv/remap]] "Remapping" for Matrix Multiply, DCT/FFT
and RGB-style "Structure Packing"
as well as Indexing. Describes svindex, svremap and svshape and
associated SPRs.
* [[sv/svstep]] Key stepping instruction, primarily for
Vertical-First Mode and also providing traditional "Vector Iota"
capability.
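As an illustration of the Cray-style concept behind `setvl`, a classic strip-mined loop sets the Vector Length to the smaller of the remaining element count and the maximum. The Python below is a conceptual sketch only (`MAXVL` is a hypothetical value; the real `setvl` encoding and its SPR behaviour are defined in [[sv/setvl]]):

```python
# Conceptual strip-mining sketch of the Cray-style "setvl" idea:
# a loop of arbitrary length n is processed in chunks of at most
# MAXVL elements, with VL set per iteration.
MAXVL = 8  # hypothetical maximum vector length

def setvl(n):
    """Return the Vector Length for the next chunk of n elements."""
    return min(n, MAXVL)

def strip_mine(n):
    """Process n elements; returns the per-iteration VL values."""
    chunks = []
    while n > 0:
        vl = setvl(n)
        chunks.append(vl)  # loop body would issue vl element operations
        n -= vl
    return chunks
```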
*Please note: there are only five instructions in the whole of SV.
Beyond this point are additional **Scalar** instructions related to
specific workloads that have nothing to do with the SV Specification*
# Optional Scalar instructions
**Additional Instructions for specific purposes (not SVP64)**
All of these instructions below have nothing to do with SV.
They are all entirely designed as Scalar instructions that, as
Scalar instructions, stand on their own merit. Considerable
effort has gone into providing justifications for each of these
*Scalar* instructions.
Some of these Scalar instructions are specifically designed to make
Scalable Vector binaries more efficient, such
as the crweird group. Others are to bring the Scalar Power ISA
up-to-date within specific workloads,
such as a Javascript Rounding instruction
(which saves 35 instructions including 5 branches). None of them is
strictly necessary, but performance and power consumption may be
(or already is) compromised
in certain workloads and use-cases without them.
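For context on the Javascript Rounding example: such an instruction targets the ECMAScript `ToInt32` conversion, whose special cases (NaN, Infinity, truncation toward zero, modulo-2^32 wraparound) are what cost the dozens of scalar instructions and branches when no dedicated instruction exists. A Python sketch of those semantics (illustrative only; the actual instruction definition is in the linked pages):

```python
import math

# Sketch of ECMAScript's ToInt32 abstract operation, the semantics a
# "Javascript Rounding" instruction provides in a single operation.
def js_to_int32(x: float) -> int:
    if math.isnan(x) or math.isinf(x):
        return 0                  # NaN and +/-Infinity map to 0
    i = int(x)                    # truncate toward zero
    i &= 0xFFFFFFFF               # wrap modulo 2**32
    if i >= 0x80000000:           # reinterpret as signed 32-bit
        i -= 0x100000000
    return i
```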
Vector-related but still Scalar:
* [[sv/mv.swizzle]] vec2/3/4 Swizzles (RGBA, XYZW) for 3D and CUDA,
designed as a Scalar instruction.
* [[sv/vector_ops]] scalar operations needed for supporting vectors
* [[sv/cr_int_predication]] scalar instructions needed for
effective predication
Stand-alone Scalar Instructions:
* [[sv/bitmanip]]
* [[sv/fcvt]] FP Conversion (due to OpenPOWER Scalar FP32)
* [[sv/fclass]] detect class of FP numbers
* [[sv/int_fp_mv]] Move and convert GPR <-> FPR, needed for !VSX
* [[sv/av_opcodes]] scalar opcodes for Audio/Video
* TODO: OpenPOWER adaptation [[openpower/transcendentals]]
Twin-targeted instructions (two registers out, one implicit, just like
Load-with-Update):
* [[isa/svfixedarith]]
* [[isa/svfparith]]
* [[sv/biginteger]] Operations that help with big arithmetic
The rules for twin register targets
(implicit RS, FRS) are explained in the SVP64 [[svp64/appendix]]
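The twin-target concept can be illustrated with a hypothetical wide multiply-add in Python. The names and exact semantics here are illustrative only (the real instruction definitions are in the linked pages): the point is that one operation produces two results, the second of which lands in an implicit register, just as Load-with-Update writes back an updated address:

```python
# Hypothetical illustration of a twin-register-target operation: a
# 64-bit multiply-add whose 128-bit result is split between an
# explicit destination (RT, low half) and an implicit second
# destination (RS, high half) -- analogous to Load-with-Update.
MASK64 = (1 << 64) - 1

def madded(ra: int, rb: int, rc: int):
    """Return (rt, rs): low and high 64 bits of ra*rb + rc."""
    result = ra * rb + rc
    rt = result & MASK64   # explicit destination register
    rs = result >> 64      # implicit second destination register
    return rt, rs
```

Chaining the implicit high half into the next element's addend is what makes this pattern useful for big-integer arithmetic.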
# Other Scalable Vector ISAs
These Scalable Vector ISAs are listed to aid in understanding and
context of what is involved.
* Original Cray ISA
* NEC SX Aurora (still in production, inspired by Cray)
* RISC-V RVV (inspired by Cray)
* MRISC32 ISA Manual (under active development)
* Mitch Alsup's MyISA 66000 Vector Processor ISA Manual is available from
Mitch on request.
A comprehensive list of 3D GPU, Packed SIMD, Predicated-SIMD and true Scalable
Vector ISAs may be found at the [[sv/vector_isa_comparison]] page.
Note: AVX-512 and SVE2 are *not Vector ISAs*, they are Predicated-SIMD.
*Public discussions have taken place at Conferences attended by both Intel
and ARM on adding a `setvl` instruction which would easily make both
AVX-512 and SVE2 truly "Scalable".*
# Major opcodes summary
Simple-V itself only requires five instructions with a 6-bit Minor XO
(bits 26-31), and the SVP64 Prefix Encoding requires
25% of the EXT001 Major Opcode space.
There are **no** Vector Instructions and consequently **no further
opcode space is required**. Even though they are currently
placed in the EXT022 Sandbox, the "Management" instructions
(setvl, svstep, svremap, svshape, svindex) are designed to fit
cleanly into EXT019 (like `addpcis`) or other 5/6-bit Minor
XO area that has space for Rc=1.
That said: for the target workloads for which Scalable Vectors are typically
used, the Scalar ISA on which those workloads critically rely
is somewhat anaemic.
The Libre-SOC Team has therefore been addressing that by developing
a number of Scalar instructions in specialist areas (Big Integer,
Cryptography, 3D, Audio/Video, DSP) and it is these which require
considerable Scalar opcode space.
Please be advised that even though SV is entirely in DRAFT status, there
is considerable concern that, because no two-way
day-to-day communication has yet been established with the OPF ISA WG, we have
no idea whether any of these conflict with future plans by any OPF
Members. **The External ISA WG RFC Process is yet to be ratified,
and Libre-SOC cannot join the OPF as an entity because it does
not exist except in name. Even if it existed, joining the OPF would be
a conflict of interest, due to our funding remit from NLnet**.
We therefore proceed on the basis of making public the intention to
submit RFCs once the External ISA WG RFC Process is in place and,
in a wholly unsatisfactory manner have to *hope and trust* that
OPF ISA WG Members are reading this and take it into consideration.
**Scalar Summary**
As emphasised strongly in the sections above, Simple-V in no
way critically depends on the 100 or so *Scalar* instructions also
being developed by Libre-SOC.
**None of these Draft opcodes are intended for private custom
secret proprietary usage. They are all intended for entirely
public, upstream, high-profile mass-volume day-to-day usage at the
same level as add, popcnt and fld**
* bitmanip requires two major opcodes (due to 16+ bit immediates);
those are currently EXT022 and EXT005.
* brownfield encoding in one of those two major opcodes still
requires multiple VA-Form operations (in greater numbers
than EXT04 has spare)
* space in EXT019 next to addpcis and crops is recommended
(or any other 5-6 bit Minor XO areas)
* many X-Form opcodes currently in EXT022 have no preference
for a location at all, and may be moved to EXT059, EXT019,
EXT031 or other much more suitable location.
* even if ratified and even if the majority (mostly X-Form)
is moved to other locations, the large immediate sizes of
the remaining bitmanip instructions means
it would be highly likely these remaining instructions would need two
major opcodes. Fortuitously the v3.1 Spec states that
both EXT005 and EXT009 are
available.
**Additional observations**
Note that there is no Sandbox allocation in the published ISA Spec for
v3.1 EXT001 usage, and because SVP64 is already 64-bit Prefixed,
Prefixed-Prefixed-instructions (SVP64 Prefixed v3.1 Prefixed)
would become a whopping 96-bit long instruction. Avoiding this
situation is a high priority which in turn by necessity puts pressure
on the 32-bit Major Opcode space.
SVP64 itself is already under pressure, being only 24 bits. If it is
not permitted to take up 25% of EXT001 then it would have to be proposed
in its own Major Opcode, which on first consideration would be beneficial
for SVP64 due to the availability of 2 extra bits.
However when combined with the bitmanip scalar instructions
requiring two Major opcodes this would come to a grand total of 3 precious
Major opcodes. On balance, then, sacrificing 25% of EXT001 is the "least
difficult" choice.
Note also that EXT022, the Official Architectural Sandbox area
available for "Custom non-approved purposes" according to the Power
ISA Spec,
is under severe design pressure as it is insufficient to hold
the full extent of the instruction additions required to create
a Hybrid 3D CPU-VPU-GPU.
**Whilst SVP64 is only 5 instructions,
the heavy focus on VSX for the past 12 years has left the SFFS Level
anaemic and out-of-date compared to ARM and x86.**
This is very much
a blessing, as the Scalar ISA has remained clean, making it
highly suited to RISC-paradigm Scalable Vector Prefixing. Approximately
100 additional (optional) Scalar Instructions are up for proposal to bring SFFS
up-to-date. None of them require or depend on PackedSIMD VSX (or VMX).
# Other
Examples, experiments, future ideas and discussion:
* [[sv/propagation]] Context propagation including svp64, swizzle and remap
* [[sv/masked_vector_chaining]]
* [[sv/discussion]]
* [[sv/example_dep_matrices]]
* [[sv/major_opcode_allocation]]
* [[sv/byteswap]]
* [[sv/16_bit_compressed]] experimental
* [[sv/toc_data_pointer]] experimental
* [[sv/predication]] discussion on predication concepts
* [[sv/register_type_tags]]
* [[sv/mv.x]] deprecated in favour of Indexed REMAP
Additional links:
* [[sv/vector_isa_comparison]] - a list of Packed SIMD, GPU,
and other Scalable Vector ISAs
* [[simple_v_extension]] old (deprecated) version
* [[openpower/sv/llvm]]