From: Luke Kenneth Casson Leighton
Date: Sat, 30 Jul 2022 01:10:53 +0000 (+0100)
Subject: use alpha-numeric footnote numbering
X-Git-Tag: opf_rfc_ls005_v1~941
X-Git-Url: https://git.libre-soc.org/?a=commitdiff_plain;h=e3591adee10a53edb469a3d7a8e49e02df9c60db;p=libreriscv.git

use alpha-numeric footnote numbering
---

diff --git a/openpower/sv/comparison_table.mdwn b/openpower/sv/comparison_table.mdwn
index db64e1960..21e9ce4ae 100644
--- a/openpower/sv/comparison_table.mdwn
+++ b/openpower/sv/comparison_table.mdwn
@@ -3,13 +3,13 @@
|ISA<br>name |No<br>opcodes|No<br>intrinsics|Taxonomy /<br>Class|setvl<br>scalable|Predicate<br>Masks|Twin<br>Pred|Vector<br>regs |128-bit<br>ops |Bigint |LDST<br>F/First|Data-dep<br>Fail-first|Pred-<br>Result|HW<br>Matrix|DCT/FFT<br>HW|
|---------------|--------------|-----------------|--------------------|-------------------|--------------------|-------------|----------------|-----------------|--------|----------------|-----------------------|----------------|-------------|--------------|
|SVP64 |5 [^1] |see [^2] |Scalable [^3] |yes |yes |yes [^4] |no [^5] |see [^6] |yes[^7] |yes [^8] |yes [^9] |yes [^10] |yes [^11] | yes[^12] |
-|VSX |700+ |700?[^27] |PackedSIMD |no |no |no |yes [^13] |yes |no |no |no |no |yes [^14] | no |
-|NEON |~250 [^15] |7088 [^28] |PackedSIMD |no |no |no |yes |see [^35] |no |no |no |no |no | no |
-|SVE2 |~1000 [^16] |6040 [^29] |Predicated SIMD[^17]|no [^17] |yes |no |yes |see [^35] |no |yes [^8] |no |no |yes [^33] | no |
-|AVX512 [^18] |~1000s [^19] |7256 [^30] |Predicated SIMD |no |yes |no |yes |see [^35] |no |no |no |no |yes [^34] | no |
-|RVV [^20] |~190 [^21] |~25000[^31] |Scalable[^22] |yes |yes |no |yes |yes [^23] |no |yes |no |no |no | no |
-|Aurora SX[^24] |~200 [^25] |unknown [^32] |Scalable [^26] |yes |yes |no |yes |no |no |no |no |no |? | no |
-|66000[^36] |~200 |unknown |AutoVec[^36] |see [^36] |see[^36] |no |see [^36] |no |yes[^37]|see [^36] |no |no |no | no |
+|VSX |700+ |700?[^v1] |PackedSIMD |no |no |no |yes [^v2] |yes |no |no |no |no |yes [^v3] | no |
+|NEON |~250 [^n1] |7088 [^n2] |PackedSIMD |no |no |no |yes |see [^35] |no |no |no |no |no | no |
+|SVE2 |~1000 [^e1] |6040 [^e2] |Predicated SIMD[^e3]|no [^e3] |yes |no |yes |see [^35] |no |yes [^8] |no |no |yes [^e4] | no |
+|AVX512 [^x1] |~1000s [^x2] |7256 [^x3] |Predicated SIMD |no |yes |no |yes |see [^35] |no |no |no |no |yes [^x4] | no |
+|RVV [^r1] |~190 [^r2] |~25000[^r3] |Scalable[^r4] |yes |yes |no |yes |yes [^r5] |no |yes |no |no |no | no |
+|Aurora SX[^s1] |~200 [^s2] |unknown [^s3] |Scalable [^s4] |yes |yes |no |yes |no |no |no |no |no |? | no |
+|66000[^m1] |~200 |unknown |AutoVec[^m1] |see [^m1] |see[^m1] |no |see [^m1] |no |yes[^m2]|see [^m1] |no |no |no | no |
[^1]: plus EXT001 24-bit prefixing using 25% of EXT001 space. See [[sv/svp64]]
[^2]: If treated as a 1-Dimensional ISA, and designed badly, the 24-bit Prefix expands 200+ scalar instructions to well over a million intrinsics (N~=10^4 **times** M~=10^2).
@@ -24,36 +24,36 @@
[^10]: Predicate-result effectively turns any standard op into a type of "cmp". See [[sv/svp64/appendix]]
[^11]: Any non-power-of-two Matrices up to 127 FMACs (or other FMA-style op), full triple-loop Schedule. See [[sv/remap]]
[^12]: DCT (Lee) and FFT Full Triple-loops supported, RADIX2-only. Normally only found in VLIW DSPs (TI TMS320, Qualcomm Hexagon). See [[sv/remap]]
-[^13]: VSX's Vector Registers are mis-named: they are 100% PackedSIMD. AVX-512 is not a Vector ISA either. See [Flynn's Taxonomy](https://en.wikipedia.org/wiki/Flynn%27s_taxonomy)
-[^14]: Power ISA v3.1 contains "Matrix Multiply Assist" (MMA) which due to PackedSIMD is restricted to RADIX2 and requires inline assembler loop-unrolling for non-power-of-two Matrix dimensions
-[^15]: difficult to ascertain, see [NEON/VFP](https://developer.arm.com/documentation/den0018/a/NEON-and-VFP-Instruction-Summary/List-of-all-NEON-and-VFP-instructions).
+[^v2]: VSX's Vector Registers are mis-named: they are 100% PackedSIMD. AVX-512 is not a Vector ISA either. See [Flynn's Taxonomy](https://en.wikipedia.org/wiki/Flynn%27s_taxonomy)
+[^v3]: Power ISA v3.1 contains "Matrix Multiply Assist" (MMA) which due to PackedSIMD is restricted to RADIX2 and requires inline assembler loop-unrolling for non-power-of-two Matrix dimensions
+[^n1]: difficult to ascertain, see [NEON/VFP](https://developer.arm.com/documentation/den0018/a/NEON-and-VFP-Instruction-Summary/List-of-all-NEON-and-VFP-instructions).
Critically depends on ARM Scalar instructions
-[^16]: difficult to exactly ascertain, see ARM Architecture Reference Manual Supplement, DDI 0584. Critically depends on ARM Scalar instructions.
-[^17]: ARM states that the Scalability is a [Silicon-partner choice](https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/102340_0001_00_en_introduction-to-sve2.pdf?revision=aae96dd2-5334-4ad3-9a47-393086a20fea).
+[^e1]: difficult to exactly ascertain, see ARM Architecture Reference Manual Supplement, DDI 0584. Critically depends on ARM Scalar instructions.
+[^e3]: ARM states that the Scalability is a [Silicon-partner choice](https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/102340_0001_00_en_introduction-to-sve2.pdf?revision=aae96dd2-5334-4ad3-9a47-393086a20fea).
Scalability in the ISA is **not available to the programmer**: there is no `setvl` instruction in SVE2, which is already causing assembler programmer difficulties.
[quote](https://gist.github.com/zingaburga/805669eb891c820bd220418ee3f0d6bd#file-sve2-md) **"you may be stuck with only using the bottom 128 bits of the vector, or need to code specifically for each width"**
-[^18]: [AVX512 Wikipedia](https://en.wikipedia.org/wiki/AVX-512), [Lifecycle of an instruction set](https://media.handmade-seattle.com/tom-forsyth/) including full slides
-[^19]: difficult to exactly ascertain, contains subsets. Critically depends on ISA support from earlier x86 ISA subsets (several more thousand instructions). See [SIMD ISA listing](https://www.officedaytime.com/simd512e/)
-[^20]: [RVV Spec](https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc)
-[^21]: RISC-V Vectors are not stand-alone, i.e. like SVE2 and AVX-512 they are critically dependent on the Scalar ISA (an additional ~96 instructions for the Scalar RV64GC set, needed for Linux).
-[^22]: Like the original Cray, RVV is a truly scalable Vector ISA (Cray setvl instruction). However, like SVE2, the Maximum Vector length is a Silicon-partner choice, which creates similar limitations that SVP64 does not have.
+[^x1]: [AVX512 Wikipedia](https://en.wikipedia.org/wiki/AVX-512), [Lifecycle of an instruction set](https://media.handmade-seattle.com/tom-forsyth/) including full slides
+[^x2]: difficult to exactly ascertain, contains subsets. Critically depends on ISA support from earlier x86 ISA subsets (several more thousand instructions). See [SIMD ISA listing](https://www.officedaytime.com/simd512e/)
+[^r1]: [RVV Spec](https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc)
+[^r2]: RISC-V Vectors are not stand-alone, i.e. like SVE2 and AVX-512 they are critically dependent on the Scalar ISA (an additional ~96 instructions for the Scalar RV64GC set, needed for Linux).
+[^r4]: Like the original Cray, RVV is a truly scalable Vector ISA (Cray setvl instruction). However, like SVE2, the Maximum Vector length is a Silicon-partner choice, which creates similar limitations that SVP64 does not have.
The RISC-V Founders strongly discourage programmers from finding out the Silicon's Maximum Vector Length, in an effort to steer them towards Silicon-independent assembler. **This requires all algorithms to contain a loop construct**. MAXVL in SVP64 is a Spec-hard-fixed quantity, therefore loop constructs are not necessary 100% of the time.
-[^23]: like SVP64 it is up to the hardware implementor (Silicon partner) to choose whether to support 128-bit elements.
-[^24]: [NEC SX Aurora](https://ftp.libre-soc.org/NEC_SX_Aurora_TSUBASA_VectorEngine-as-manual-v1.2.pdf) is based on the original Cray Vectors
-[^25]: [Aurora ISA guide](https://sxauroratsubasa.sakura.ne.jp/documents/guide/pdfs/Aurora_ISA_guide.pdf) Appendix-3 11.1 p508
-[^26]: Like the original Cray Vectors, the ISA Vector Length is independent of the underlying hardware; however, Generation 1 has 256 elements per Vector register (3.2.4 p24, Aurora ISA guide)
-[^27]: [Altivec gcc intrinsics](https://gcc.gnu.org/onlinedocs/gcc/PowerPC-AltiVec_002fVSX-Built-in-Functions.html), contains links to additional VSX intrinsics for ISA 2.05/6/7, 3.0 and 3.1
-[^28]: NEON 32-bit 2754 intrinsics, NEON 64-bit 4334 intrinsics.
-[^29]: SVE: 4140 intrinsics, SVE2 1900 intrinsics
-[^30]: Count includes SSE, SSE2, AVX, AVX2 and all AVX512 variants
-[^31]: [RVV intrinsics listing](https://raw.githubusercontent.com/riscv-non-isa/rvv-intrinsic-doc/master/intrinsic_funcs.md) page is 25,000 lines long.
-[^32]: Unknown; estimated to be of the same order as RVV's, due to also being a Cray-style Scalable ISA. NEC maintains an [LLVM hard fork](https://github.com/sx-aurora-dev)
-[^33]: [Scalable Matrix Optional Extension](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/scalable-matrix-extension-armv9-a-architecture)
+[^r5]: like SVP64 it is up to the hardware implementor (Silicon partner) to choose whether to support 128-bit elements.
+[^s1]: [NEC SX Aurora](https://ftp.libre-soc.org/NEC_SX_Aurora_TSUBASA_VectorEngine-as-manual-v1.2.pdf) is based on the original Cray Vectors
+[^s2]: [Aurora ISA guide](https://sxauroratsubasa.sakura.ne.jp/documents/guide/pdfs/Aurora_ISA_guide.pdf) Appendix-3 11.1 p508
+[^s4]: Like the original Cray Vectors, the ISA Vector Length is independent of the underlying hardware; however, Generation 1 has 256 elements per Vector register (3.2.4 p24, Aurora ISA guide)
+[^v1]: [Altivec gcc intrinsics](https://gcc.gnu.org/onlinedocs/gcc/PowerPC-AltiVec_002fVSX-Built-in-Functions.html), contains links to additional VSX intrinsics for ISA 2.05/6/7, 3.0 and 3.1
+[^n2]: NEON 32-bit 2754 intrinsics, NEON 64-bit 4334 intrinsics.
+[^e2]: SVE: 4140 intrinsics, SVE2 1900 intrinsics
+[^x3]: Count includes SSE, SSE2, AVX, AVX2 and all AVX512 variants
+[^r3]: [RVV intrinsics listing](https://raw.githubusercontent.com/riscv-non-isa/rvv-intrinsic-doc/master/intrinsic_funcs.md) page is 25,000 lines long.
+[^s3]: Unknown; estimated to be of the same order as RVV's, due to also being a Cray-style Scalable ISA. NEC maintains an [LLVM hard fork](https://github.com/sx-aurora-dev)
+[^e4]: [Scalable Matrix Optional Extension](https://community.arm.com/arm-community-blogs/b/architectures-and-processors-blog/posts/scalable-matrix-extension-armv9-a-architecture)
outer-product instructions [SMOPA](https://developer.arm.com/documentation/ddi0602/2022-06/SME-Instructions/SMOPA--Signed-integer-sum-of-outer-products-and-accumulate-?lang=en) which are power-2 based on Silicon-partner SIMD width.
Non-power-2 not supported but [zero-input masking](https://www.realworldtech.com/forum/?threadid=202688&curpostid=207774) is.
-[^34]: [Advanced Matrix Extensions](https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions) supports BF16 and INT8 only. Separate regfile, power-of-two "tiles". Not general-purpose at all.
+[^x4]: [Advanced Matrix Extensions](https://en.wikipedia.org/wiki/Advanced_Matrix_Extensions) supports BF16 and INT8 only. Separate regfile, power-of-two "tiles". Not general-purpose at all.
[^35]: Although registers may be 128-bit in NEON, SVE2, and AVX, unlike VSX there are very few (or no) actual arithmetic 128-bit operations. Only RVV and SVP64 have the possibility of 128-bit ops
-[^36]: Mitch Alsup's MyISA 66000 is available on request. A powerful RISC ISA with a **Hardware-level auto-vectorisation** LOOP built-in as an extension named VVM. Classified as "Vertical-First".
-[^37]: MyISA 66000 has a CARRY register up to 64-bit. Repeated application of FMA (esp. within Auto-Vectored LOOPS) automatically and inherently creates big-int operations with zero effort.
+[^m1]: Mitch Alsup's MyISA 66000 is available on request. A powerful RISC ISA with a **Hardware-level auto-vectorisation** LOOP built-in as an extension named VVM. Classified as "Vertical-First".
+[^m2]: MyISA 66000 has a CARRY register up to 64-bit. Repeated application of FMA (esp. within Auto-Vectored LOOPS) automatically and inherently creates big-int operations with zero effort.
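
The footnotes above ([^e3], [^r4]) contrast Scalable ISAs, which expose a `setvl`-style instruction, with ISAs whose vector width is fixed by the Silicon partner. As a rough illustration only — a plain-C sketch, not SVP64, RVV or SVE2 code, with `MAXVL` and `set_vl()` as hypothetical stand-ins — the Cray-style strip-mine loop being referred to looks like this:

```c
/* Illustrative sketch only: a plain-C model of the Cray-style "setvl"
 * strip-mine loop.  MAXVL and set_vl() are hypothetical stand-ins for
 * whatever a given Scalable ISA (SVP64, RVV, SX-Aurora) provides. */
#include <stddef.h>

enum { MAXVL = 64 };   /* assumed hardware Maximum Vector Length */

static size_t set_vl(size_t remaining)
{
    /* models what a hardware setvl instruction computes */
    return remaining < MAXVL ? remaining : MAXVL;
}

/* dst[i] = a[i] + b[i] for 0 <= i < n, processed in VL-sized chunks */
void vec_add(double *dst, const double *a, const double *b, size_t n)
{
    while (n > 0) {
        size_t vl = set_vl(n);            /* one setvl on a Scalable ISA  */
        for (size_t i = 0; i < vl; i++)   /* one Vector add in hardware   */
            dst[i] = a[i] + b[i];
        dst += vl;
        a   += vl;
        b   += vl;
        n   -= vl;
    }
}
```

On a genuinely Scalable ISA the `set_vl()` step is a single hardware instruction and the inner loop is one Vector operation, so the same binary runs on any Silicon width; a PackedSIMD ISA has no equivalent, forcing code written for a fixed register width plus scalar tail handling, which is the portability complaint quoted in [^e3].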