# SIMD / Simple-V Extension Proposal

This proposal exists to satisfy several disparate requirements:
power-conscious, area-conscious, and performance-conscious designs
all pull an ISA and its implementation in different, conflicting
directions, as do the specific intended uses of any given implementation.

Also, the existing P (SIMD) proposal and the V (Vector) proposal,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (AndeStar v3 and Cray respectively)
* Both contain partial duplication of pre-existing RISC-V instructions
  (an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
  at the instruction level.
* Both require that their respective parallelism paradigm be implemented
  alongside their respective functionality *or not at all*.
* Both independently have methods for introducing parallelism that
  could, if separated, benefit
  *other areas of RISC-V, not just DSP or Floating-point respectively*.

Therefore it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area. If that can be offered even on a per-operation basis, it
would provide even more flexibility.

# Analysis and discussion of Vector vs SIMD

There are five combined areas between the two proposals that help with
parallelism without over-burdening the ISA with a huge proliferation of
instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to the instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops.
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the conclusion comes out clearly in favour of (effectively)
variable-length SIMD. Because SIMD is of fixed width (typically 4, 8 or in
extreme cases 16 or 32 simultaneous operations), the setup, teardown and
corner-cases of SIMD are extremely burdensome except for applications whose
requirements *specifically* match the *precise and exact* depth of the
SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.
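
A purely illustrative sketch in plain C (not from either proposal: the
`set_vl()` helper and `VL_MAX` constant are assumed stand-ins for whatever
the hardware would actually provide) shows the practical difference:
fixed-width SIMD forces a separately-written corner-case tail loop, whereas
a variable-length vector loop handles any element count uniformly.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only (not from either proposal): the same operation,
 * y[i] += a * x[i], written once for a hypothetical fixed 4-wide SIMD unit
 * and once for a variable-length vector unit.  VL_MAX and set_vl() are
 * assumed stand-ins for whatever the hardware would actually provide. */

#define SIMD_WIDTH 4

void axpy_simd(size_t n, double a, const double *x, double *y)
{
    size_t i = 0;
    /* main body: handles only exact multiples of SIMD_WIDTH */
    for (; i + SIMD_WIDTH <= n; i += SIMD_WIDTH)
        for (size_t j = 0; j < SIMD_WIDTH; j++)  /* stands in for one SIMD op */
            y[i + j] += a * x[i + j];
    /* corner-case tail: scalar cleanup the programmer must write and test */
    for (; i < n; i++)
        y[i] += a * x[i];
}

#define VL_MAX 8                        /* implementation choice, invisible to the loop */

static size_t set_vl(size_t remaining)  /* models a "set vector length" instruction */
{
    return remaining < VL_MAX ? remaining : VL_MAX;
}

void axpy_vector(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; ) {
        size_t vl = set_vl(n - i);      /* hardware decides how much to do per pass */
        for (size_t j = 0; j < vl; j++) /* stands in for one vector op of length vl */
            y[i + j] += a * x[i + j];
        i += vl;
    }
}

int main(void)
{
    double x[7] = {1, 1, 1, 1, 1, 1, 1}, y1[7] = {0}, y2[7] = {0};
    axpy_simd(7, 2.0, x, y1);           /* 7 is not a multiple of 4: tail loop needed */
    axpy_vector(7, 2.0, x, y2);         /* same result, no special-case code */
    for (int i = 0; i < 7; i++)
        printf("%g %g\n", y1[i], y2[i]);
    return 0;
}
```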

## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit and 64-bit,
and then have to have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized ALU engine anyway.

Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.
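
To make the idea concrete, here is a minimal C model (an assumption about
behaviour, not the V-extension's actual CSR layout or encoding, and using a
hypothetical `set_width()` helper): each register carries an element-width
tag, so a single generic ADD opcode can operate at 8, 16, 32 or 64 bits
without the width ever being encoded in the instruction.

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal model only (an assumption about behaviour, not the V-extension's
 * actual CSR layout): every register carries an element-width tag, set once
 * by a hypothetical set_width() operation, so a single generic ADD opcode
 * needs no width encoded in the instruction itself. */

enum elwidth { EW_8 = 8, EW_16 = 16, EW_32 = 32, EW_64 = 64 };

typedef struct {
    uint64_t     value;
    enum elwidth width;    /* implicit bit-width, held per register */
} reg_t;

static reg_t regfile[32];

/* models the "set bit-width for this register" instruction */
static void set_width(int rd, enum elwidth w) { regfile[rd].width = w; }

/* one generic ADD: the width comes from the register tag, not the opcode */
static void add(int rd, int rs1, int rs2)
{
    uint64_t mask = (regfile[rd].width == EW_64)
                  ? ~0ULL
                  : (1ULL << regfile[rd].width) - 1;
    regfile[rd].value = (regfile[rs1].value + regfile[rs2].value) & mask;
}

int main(void)
{
    set_width(1, EW_16);
    set_width(2, EW_16);
    set_width(3, EW_16);
    regfile[1].value = 0xFFFF;
    regfile[2].value = 1;
    add(3, 1, 2);            /* wraps at 16 bits: prints 0 */
    printf("%#llx\n", (unsigned long long)regfile[3].value);
    return 0;
}
```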

## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (an explosion of inferred type-to-type conversions) but is also
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite, specific use-cases left to
be part of an actual instruction (DSP or FP).
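
A rough back-of-the-envelope sketch in C (an illustration of the argument,
nothing from the spec: the type list is an arbitrary assumption) shows why
the implicit route explodes: with T taggable element types, an implicit
scheme must define a conversion for every ordered pair of distinct types,
before bit-widths are even taken into account.

```c
#include <stdio.h>

/* Back-of-the-envelope sketch (assumption, not from the spec): if every
 * register can be tagged with one of T element types, an implicit scheme
 * needs a defined conversion for every ordered pair of distinct types,
 * i.e. T*(T-1) cases, before bit-widths are even considered.  Explicit
 * conversion leaves that choice to individual instructions instead. */

int main(void)
{
    const char *types[] = { "i8", "i16", "i32", "i64",
                            "f16", "f32", "f64" };
    const int T = sizeof types / sizeof types[0];

    int pairs = 0;
    for (int src = 0; src < T; src++)
        for (int dst = 0; dst < T; dst++)
            if (src != dst) {
                printf("implicit convert %-3s -> %-3s\n",
                       types[src], types[dst]);
                pairs++;
            }
    printf("%d implicit conversion cases for %d types\n", pairs, T);
    return 0;
}
```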

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times without any explicit loop-control
instructions.

Its specific advantage over explicit loops is that the pipeline in a
DSP can potentially be kept completely full *even in an in-order
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in order
to keep ALU pipelines 100% occupied.

This very simple proposal offers a way to increase pipeline activity in the
one key area which really matters: the inner loop.
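
As a toy illustration (an assumed encoding, not Chuanhua Chang's actual
proposal), the tiny C interpreter below models the effect: a hypothetical
`OP_ZOL` marker tells the front-end to replay the following instructions a
fixed number of times, so no compare, counter-decrement or branch
instruction ever occupies a pipeline slot.

```c
#include <stdio.h>

/* Toy interpreter sketch (an assumed encoding, not the actual P-SIMD one):
 * a hypothetical OP_ZOL marker tells the front-end to replay the next
 * `len` instructions `count` times, so no compare, counter-decrement or
 * branch instruction ever occupies a pipeline slot. */

typedef enum { OP_ZOL, OP_MACC, OP_HALT } opcode;
typedef struct { opcode op; int count, len; } insn;

int main(void)
{
    const int x[4] = {1, 2, 3, 4}, y[4] = {5, 6, 7, 8};
    const insn prog[] = {
        { OP_ZOL,  4, 1 },   /* replay the next 1 instruction 4 times */
        { OP_MACC, 0, 0 },   /* multiply-accumulate on the next element */
        { OP_HALT, 0, 0 },
    };

    int acc = 0, idx = 0;
    int repeat = 0, body_start = 0, body_len = 0;

    for (int pc = 0; ; ) {
        /* front-end: when the (implicit) program counter runs off the end
         * of an active ZOL body, pull it back (no branch instruction needed) */
        if (repeat > 1 && body_len && pc == body_start + body_len) {
            repeat--;
            pc = body_start;
        }
        if (prog[pc].op == OP_HALT)
            break;
        switch (prog[pc].op) {
        case OP_ZOL:
            repeat = prog[pc].count;
            body_start = pc + 1;
            body_len = prog[pc].len;
            break;
        case OP_MACC:
            acc += x[idx] * y[idx];
            idx++;
            break;
        default:
            break;
        }
        pc++;
    }
    printf("acc = %d\n", acc);   /* 1*5 + 2*6 + 3*7 + 4*8 = 70 */
    return 0;
}
```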

## Mask and Tagging

*TODO: research masks, as they can be superb and extremely powerful.
If the B-Extension is implemented and provides Bit-Gather-Scatter it
becomes really cool and easy to switch out certain indexed values
from an array of data, but actually BGS **on its own** might be
sufficient. Bottom line, this is complex, and needs a proper analysis.
The other sections are pretty straightforward.*
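
Purely as a placeholder while this section remains TODO, the C fragment
below sketches the simplest form of the idea: a predicate mask with one bit
per element, where a clear bit suppresses both the computation and the
write-back for that element. Whether Simple-V adopts anything like this is
exactly what the analysis above needs to decide.

```c
#include <stdint.h>
#include <stdio.h>

/* Purely illustrative (this section is still TODO): a predicate mask with
 * one bit per element, where a clear bit suppresses both the computation
 * and the write-back for that element. */

static void vadd_masked(int vl, uint64_t mask,
                        const int *a, const int *b, int *dst)
{
    for (int i = 0; i < vl; i++)
        if (mask & (1ULL << i))      /* element enabled by the mask */
            dst[i] = a[i] + b[i];    /* disabled elements stay untouched */
}

int main(void)
{
    int a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, d[4] = {0, 0, 0, 0};
    vadd_masked(4, 0x5 /* 0b0101: elements 0 and 2 only */, a, b, d);
    for (int i = 0; i < 4; i++)
        printf("%d ", d[i]);         /* prints: 11 0 33 0 */
    printf("\n");
    return 0;
}
```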

## Conclusions

The above sections outlined the five different areas where parallel
instruction execution has closely and loosely inter-related implications
for the ISA and for implementors. The pluses and minuses came out as
follows:

* Fixed vs variable parallelism: <b>variable</b>
* Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
* Implicit vs explicit type-conversion: <b>explicit</b>
* Implicit vs explicit inner loops: <b>implicit</b>
* Tag or no-tag: <b>TODO</b>

In particular: variable-length vectors came out on top because of the
high setup and teardown costs and the corner-cases associated with the
fixed width of SIMD. Implicit bit-width helps to extend the ISA to escape
from former limitations and restrictions (in a backwards-compatible fashion),
and implicit (zero-overhead) loops provide a means to keep pipelines
potentially 100% occupied *without* requiring a super-scalar or out-of-order
architecture.

Constructing a SIMD/Simple-Vector proposal based around even only these
four established requirements (five, if masks/tagging make the cut) would
therefore seem to be a logical thing to do.

# Instruction Format

**TODO** *basically borrow from both P and V, which should be quite simple
to do, with the exception of Tag/no-tag, which needs a bit more
thought. Section 17.19 of the Draft V2.3 spec is reminiscent of B's BGS
gather-scatterer and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS as it stands,
as described by Clifford, operates on **bits** of up to 16 in width. Lots to
look at and investigate!*

# References

* SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
* Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
* Recommendation by Jacob Bachmeyer to make zero-overhead loop an
  "implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
* Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
* First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
* B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>