# Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal

[[!toc ]]

This proposal exists to satisfy several disparate requirements:
power-conscious, area-conscious, and performance-conscious designs all
pull an ISA and its implementation in different conflicting directions,
as do the specific intended uses for any given implementation.

Additionally, the existing P (SIMD) proposal and the V (Vector) proposal,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (AndeStar v3 and Cray respectively)
so need work to adapt to the RISC-V ethos and paradigm
* Sufficiently large as to make adoption (and exploration for
analysis and review purposes) prohibitively expensive
* Both contain partial duplication of pre-existing RISC-V instructions
(an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
at the instruction level.
* Both require that their respective parallelism paradigm be implemented
alongside and integral to their respective functionality *or not at all*.
* Both independently have methods for introducing parallelism that
could, if separated, benefit
*other areas of RISC-V, not just DSP or Floating-point respectively*.

Therefore it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area (and if that can be offered even on a per-operation basis that
would provide even more flexibility).

Additionally, it makes sense to *split out* the parallelism inherent within
each of P and V, and to see if each of P and V then, in *combination* with
a "best-of-both" parallelism extension, would work well.

Furthermore, an additional goal of this proposal is to reduce the number
of opcodes utilised by each of P and V as they currently stand, leveraging
existing RISC-V opcodes where possible, and also potentially allowing
P and V to make use of Compressed Instructions as a result.

**TODO**: reword this to better suit this document:

Having looked at both P and V as they stand, they're _both_ very much
"separate engines" that, despite both their respective merits and
extremely powerful features, don't really cleanly fit into the RV design
ethos (or the flexible extensibility) and, as such, are both in danger
of not being widely adopted. I'm inclined towards recommending:

* splitting out the DSP aspects of P-SIMD to create a single-issue DSP
* splitting out the polymorphism, esoteric data types (GF, complex
numbers) and unusual operations of V to create a single-issue "Esoteric
Floating-Point" extension
* splitting out the loop-aspects, vector aspects and data-width aspects
of both P and V to a *new* "P-SIMD / Simple-V" and requiring that they
apply across *all* Extensions, whether those be DSP, M, Base, V, P -
everything.

**TODO**: propose overflow registers be actually one of the integer regs
(flowing to multiple regs).

**TODO**: propose "mask" (predication) registers likewise. In combination
with standard RV instructions and overflow registers these would be
extremely powerful.

## CSRs marking registers as Vector

A 32-bit CSR would be needed (1 bit per integer register) to indicate
whether a register was, if referred to, implicitly to be treated as
a vector.

A second 32-bit CSR would be needed (1 bit per floating-point register)
to indicate whether a floating-point register was to be treated as a
vector.

In this way any standard (current or future) operation involving
register operands may detect if the operation is to be vector-vector,
vector-scalar or scalar-scalar (standard) simply through a single
bit test.

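To illustrate just how cheap that test is in decode, a minimal C sketch
(the CSR names and the classification helper are illustrative assumptions,
not part of the proposal text):

    #include <stdint.h>
    #include <stdbool.h>

    /* One bit per register: bit N set means register N is a vector.
     * A second CSR (csr_fp_vector, not shown) would cover the FP file. */
    static uint32_t csr_int_vector;

    typedef enum { SCALAR_SCALAR, VECTOR_SCALAR, VECTOR_VECTOR } opclass_t;

    /* Classify an integer operation with a single bit test per operand. */
    static opclass_t classify_op(unsigned rs1, unsigned rs2)
    {
        bool v1 = (csr_int_vector >> rs1) & 1;
        bool v2 = (csr_int_vector >> rs2) & 1;
        if (v1 && v2) return VECTOR_VECTOR;
        if (v1 || v2) return VECTOR_SCALAR;
        return SCALAR_SCALAR;
    }
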
## CSR vector-length and CSR SIMD packed-bitwidth

**TODO** analyse each of these:

* splitting out the loop-aspects, vector aspects and data-width aspects
* integer reg 0 *and* fp reg0 share CSR vlen 0 *and* CSR packed-bitwidth 0
* integer reg 1 *and* fp reg1 share CSR vlen 1 *and* CSR packed-bitwidth 1
* ....
* ....

instead:

* CSR vlen 0 *and* CSR packed-bitwidth 0 register contain extra bits
specifying an *INDEX* of WHICH int/fp register they refer to
* CSR vlen 1 *and* CSR packed-bitwidth 1 register contain extra bits
specifying an *INDEX* of WHICH int/fp register they refer to
* ...
* ...

Have to be very *very* careful about not implementing too few of those
(or too many). Assess implementation impact on decode latency. Is it
worth it?

Implementation of the latter:

Operation involving (referring to) register M:

> bitwidth = default # default for opcode?
> vectorlen = 1 # scalar
>
> for (o = 0; o < 2; o++)
>     if (CSR-Vector_registernum[o] == M)
>         bitwidth = CSR-Vector_bitwidth[o]
>         vectorlen = CSR-Vector_len[o]
>         break

and for the former it would simply be:

> bitwidth = CSR-Vector_bitwidth[M]
> vectorlen = CSR-Vector_len[M]

Alternatives:

* One single "global" vector-length CSR

## Stride

**TODO**: propose two LOAD/STORE offset CSRs, which mark a particular
register as being "if you use this reg in LOAD/STORE, use the offset
amount CSRoffsN (N=0,1) instead of treating LOAD/STORE as contiguous".
Can be used for matrix spanning.

> For LOAD/STORE, could a better option be to interpret the offset in the
> opcode as a stride instead, so "LOAD t3, 12(t2)" would, if t3 is
> configured as a length-4 vector base, result in t3 = *t2, t4 = *(t2+12),
> t5 = *(t2+24), t6 = *(t2+36)?  Perhaps include a bit in the
> vector-control CSRs to select between offset-as-stride and unit-stride
> memory accesses?

So there would be an instruction like this:

| SETOFF | On=rN | OBank={float|int} | Smode={offs|unit} | OFFn=rM |
| ------ | ----- | ----------------- | ----------------- | ------- |
| opcode | 5 bit | 1 bit | 1 bit | 5 bit, OFFn=XLEN |

which would mean:

* CSR-Offset register n <= (float|int) register number N
* CSR-Offset Stride-mode = offset or unit
* CSR-Offset amount register n = contents of register M

LOAD rN, ldoffs(rM) would then be (assuming packed bit-width not set):

> offs = 0
> stride = 1
> vector-len = CSR-Vector-length register N
>
> for (o = 0; o < 2; o++)
>     if (CSR-Offset register o == M)
>         offs = CSR-Offset amount register o
>         if CSR-Offset Stride-mode == offset:
>             stride = ldoffs
>         break
>
> for (i = 0; i < vector-len; i++)
>     r[N+i] = mem[(offs*i + r[M+i])*stride]

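The same semantics expressed as a compilable C sketch (the CSR names,
the two-slot lookup and the byte-granularity memory access are all
assumptions made purely for illustration):

    #include <stdint.h>

    #define NSLOTS 2                           /* two LOAD/STORE offset CSRs */
    static unsigned csr_off_regnum[NSLOTS];    /* register each slot watches */
    static uint64_t csr_off_amount[NSLOTS];    /* CSR-Offset amount          */
    static int      csr_off_as_stride[NSLOTS]; /* 1 = offset-as-stride mode  */

    static uint64_t regs[32];
    static uint8_t  mem[65536];

    /* LOAD rN, ldoffs(rM) with vector-length vlen, packed bitwidth unset;
     * loads one byte per element for simplicity. */
    static void vload(unsigned N, unsigned M, uint64_t ldoffs, unsigned vlen)
    {
        uint64_t offs = 0, stride = 1;
        for (unsigned o = 0; o < NSLOTS; o++) {
            if (csr_off_regnum[o] == M) {
                offs = csr_off_amount[o];
                if (csr_off_as_stride[o])
                    stride = ldoffs; /* opcode offset reinterpreted as stride */
                break;
            }
        }
        for (unsigned i = 0; i < vlen; i++)
            regs[N + i] = mem[(offs * i + regs[M + i]) * stride];
    }
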
# Analysis and discussion of Vector vs SIMD

There are five combined areas between the two proposals that help with
parallelism without over-burdening the ISA with a huge proliferation of
instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops.
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the conclusion comes out clearly in favour of (effectively)
variable-length SIMD. Because SIMD has a fixed width, typically 4 or 8
(in extreme cases 16 or 32) simultaneous operations, the setup, teardown
and corner-cases of SIMD are extremely burdensome except for applications
whose requirements *specifically* match the *precise and exact* depth of
the SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.

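A concrete sketch of that burden in C: the fixed-width version needs a
separate scalar tail for lengths that are not a multiple of the SIMD
depth, whereas the variable-length version simply asks for "up to n
elements" each time round (setvl here is a stand-in for a
vector-length-request instruction, with an assumed maximum of 8):

    #include <stddef.h>

    /* Fixed 4-wide SIMD: a main loop plus a scalar tail for n % 4 != 0. */
    void add_simd4(int *d, const int *a, const int *b, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {   /* one 4-wide SIMD op per iteration */
            d[i]   = a[i]   + b[i];
            d[i+1] = a[i+1] + b[i+1];
            d[i+2] = a[i+2] + b[i+2];
            d[i+3] = a[i+3] + b[i+3];
        }
        for (; i < n; i++)             /* the corner-case teardown code */
            d[i] = a[i] + b[i];
    }

    /* Variable-length vector: hardware grants vl <= n elements per pass. */
    static size_t setvl(size_t n)
    {
        const size_t MVL = 8;          /* assumed maximum vector length */
        return n < MVL ? n : MVL;
    }

    void add_vector(int *d, const int *a, const int *b, size_t n)
    {
        while (n > 0) {
            size_t vl = setvl(n);      /* one vector ADD of vl elements */
            for (size_t i = 0; i < vl; i++)
                d[i] = a[i] + b[i];
            d += vl; a += vl; b += vl; n -= vl;
        }
    }
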
## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
have to then have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized width ALU engine anyway.

Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.

## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversion) but also is
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite specific use-cases made
part of an actual instruction (DSP or FP).

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times with no branch overhead.

Its specific advantage over explicit loops is that the pipeline in a
DSP can potentially be kept completely full *even in an in-order
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in order
to keep ALU pipelines 100% occupied.

This very simple proposal offers a way to increase pipeline activity in the
one key area which really matters: the inner loop.

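A toy software model of the mechanism (entirely illustrative): a
loop-setup instruction records a start PC, an end PC and an iteration
count, after which the fetch unit redirects the PC itself, with no
branch instruction occupying a pipeline slot:

    #include <stdio.h>

    /* Zero-overhead loop state: set up once, then applied during fetch. */
    struct zol { unsigned start, end, count; };

    int main(void)
    {
        const char *prog[] = { "setup", "load", "mac", "store", "done" };
        struct zol z = { 1, 3, 4 };    /* repeat load..store four times */

        unsigned pc = 0;
        while (pc < 5) {
            printf("execute %s\n", prog[pc]);
            if (pc == z.end && z.count > 1) {
                z.count--;             /* loop-back handled in fetch:  */
                pc = z.start;          /* no branch instruction issued */
            } else
                pc++;
        }
        return 0;
    }
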
## Mask and Tagging (Predication)

Tagging (aka Masks aka Predication) is a pseudo-method of implementing
simplistic branching in a parallel fashion, by allowing execution on
elements of a vector to be switched on or off depending on the results
of prior operations in the same array position.

The reason for considering this is simple: by *definition* it
is not possible to perform individual parallel branches in a SIMD
(Single-Instruction, **Multiple**-Data) context. Branches (modifying
of the Program Counter) will result in *all* parallel data having
a different instruction executed on it: that's just the definition of
SIMD, and it is simply unavoidable.

So these are the ways in which conditional execution may be implemented:

* explicit compare and branch: BNE x, y -> offs would jump offs
instructions if x was not equal to y
* explicit store of tag condition: CMP x, y -> tagbit
* implicit (condition-code): an ADD results in a carry, and the carry bit
implicitly (or sometimes explicitly) goes into a "tag" (mask) register

The first of these is a "normal" branch method, which is flat-out impossible
to parallelise without look-ahead and effectively rewriting instructions.
This would defeat the purpose of RISC.

The latter two are where parallelism becomes easy to do without complexity:
every operation is modified to be "conditionally executed" (in an explicit
way directly in the instruction format *or* implicitly).

RVV (Vector-Extension) proposes to have *explicit* storing of the compare
in a tag/mask register, and to *explicitly* have every vector operation
*require* that its operation be "predicated" on the bits within an
explicitly-named tag/mask register.

SIMD (P-Extension) has not yet published precise documentation on what its
schema is to be: there is however verbal indication at the time of writing
that:

> The "compare" instructions in the DSP/SIMD ISA proposed by Andes will
> be executed using the same compare ALU logic for the base ISA with some
> minor modifications to handle smaller data types. The function will not
> be duplicated.

This is an *implicit* form of predication as the base RV ISA does not have
condition-codes or predication. By adding a CSR it becomes possible
to also tag certain registers as "predicated if referenced as a destination".
Example:

> # in future operations if r0 is the destination use r5 as
> # the PREDICATION register
> IMPLICICSRPREDICATE r0, r5
> # store the compares in r5 as the PREDICATION register
> CMPEQ8 r5, r1, r2
> # r0 is used here. ah ha! that means it's predicated using r5!
> ADD8 r0, r1, r3

With enough registers (and there are enough registers) some fairly
complex predication can be set up and yet still execute without significant
stalling, even in a simple non-superscalar architecture.

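A minimal C model of the above sequence (the register layout, the
one-mask-bit-per-byte-lane encoding and the IMPLICICSRPREDICATE state
are all assumptions of this sketch): the compare packs a mask into an
ordinary register, and any operation whose destination is marked as
predicated consults that mask lane by lane:

    #include <stdint.h>

    static uint64_t regs[32];
    static unsigned pred_reg[32];  /* pred_reg[rd]: which register masks rd */

    /* CMPEQ8 rd, ra, rb: one mask bit per byte lane. */
    static void cmpeq8(unsigned rd, unsigned ra, unsigned rb)
    {
        uint64_t mask = 0;
        for (int i = 0; i < 8; i++)
            if ((uint8_t)(regs[ra] >> 8*i) == (uint8_t)(regs[rb] >> 8*i))
                mask |= 1ull << i;
        regs[rd] = mask;
    }

    /* ADD8 rd, ra, rb: per-byte add, skipping lanes whose mask bit is 0. */
    static void add8(unsigned rd, unsigned ra, unsigned rb)
    {
        uint64_t mask = regs[pred_reg[rd]], out = regs[rd];
        for (int i = 0; i < 8; i++) {
            if (!((mask >> i) & 1)) continue;       /* predicated off */
            uint8_t s = (uint8_t)(regs[ra] >> 8*i)
                      + (uint8_t)(regs[rb] >> 8*i);
            out = (out & ~(0xFFull << 8*i)) | ((uint64_t)s << 8*i);
        }
        regs[rd] = out;
    }

In this model, IMPLICICSRPREDICATE r0, r5 corresponds simply to setting
pred_reg[0] = 5.
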
## Conclusions

The above sections outlined five different areas in which parallel
instruction execution has closely and loosely inter-related implications
for the ISA and for implementors. The pluses and minuses came out as
follows:

* Fixed vs variable parallelism: <b>variable</b>
* Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
* Implicit vs explicit type-conversion: <b>explicit</b>
* Implicit vs explicit inner loops: <b>implicit</b>
* Tag or no-tag: <b>Complex and needs further thought</b>

In particular: variable-length vectors came out on top because of the
high setup, teardown and corner-cases associated with the fixed width
of SIMD. Implicit bit-width helps to extend the ISA to escape from
former limitations and restrictions (in a backwards-compatible fashion),
and implicit (zero-overhead) loops provide a means to keep pipelines
potentially 100% occupied *without* requiring a super-scalar or out-of-order
architecture.

Constructing a SIMD/Simple-Vector proposal based around even only these
five requirements would therefore seem to be a logical thing to do.

# Instruction Format

**TODO** *basically borrow from both P and V, which should be quite simple
to do, with the exception of Tag/no-tag, which needs a bit more
thought. V's Section 17.19 of Draft V2.3 spec is reminiscent of B's BGS
gather-scatterer, and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS as it stands
and described by Clifford does **bits** of up to 16 width. Lots to
look at and investigate!*

# Note on implementation of parallelism

One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
it is proposed that implementors be free to choose whether to implement
the Vector (or variable-width SIMD) parallelism as sequential operations
with a single ALU, fully parallel (if practical) with multiple ALUs, or
a hybrid combination of both.

In Broadcom's VideoCore-IV, they chose hybrid, and called it "Virtual
Parallelism". They achieve a 16-way SIMD at an **instruction** level
by providing a combination of a 4-way parallel ALU *and* an externally
transparent loop that feeds 4 sequential sets of data into each of the
4 ALUs.

Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.

It is absolutely critical to note that it is proposed that such choices MUST
be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) length may not precisely match the width of
the parallelism within the implementation, the end-user **should not care**
and in this way the performance benefits are gained but the ISA remains
simple. All that happens at the end of an instruction run is: some
parallel units (if there are any) would remain offline, completely
transparently to the ISA, the program, and the compiler.

The "SIMD considered harmful" trap of having huge complexity and extra
instructions to deal with corner-cases is thus avoided, and implementors
get to choose precisely where to focus and target the benefits of their
implementation efforts.

# V-Extension to Simple-V Comparative Analysis

This section covers the ways in which Simple-V is comparable
to, or more flexible than, V-Extension (V2.3-draft). Also covered is
one major weak-point (register files are fixed size, where V is
arbitrary length), and how best to deal with that, should V be adapted
to be on top of Simple-V.

The first stages of this section go over each of the sections of the
V2.3-draft spec, where appropriate.

## 17.3 Shape Encoding

Simple-V's proposed means of expressing whether a register (from the
standard integer or the standard floating-point file) is a scalar or
a vector is to simply set the vector length to 1. The instruction
would however have to specify which register file (integer or FP)
the vector-length was to be applied to.

Extended shapes (2-D etc) would not be part of Simple-V at all.

## 17.4 Representation Encoding

Simple-V would not have representation-encoding. This is part of
polymorphism, which is considered too complex to implement (TODO: confirm?)

## 17.5 Element Bitwidth

This is directly equivalent to Simple-V's "Packed", and implies that
integer (or floating-point) registers are divided down into
vector-indexable chunks of size Bitwidth.

In this way it becomes possible to have ADD effectively and implicitly
turn into ADDb (8-bit add), ADDw (16-bit add) and so on, and where
vector-length has been set to greater than 1, it becomes a "Packed"
(SIMD) instruction.

It remains to be decided what should be done when RV32 / RV64 ADD (sized)
opcodes are used. One useful idea would be, on an RV64 system where
a 32-bit-sized ADD was performed, to simply use the least significant
32-bits of the register (exactly as is currently done) but at the same
time to *respect the packed bitwidth as well*.

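A sketch of the assumed packed semantics: with packed bitwidth set to 8
on a 64-bit register, a single ADD behaves as eight independent byte
adds, with carries confined to their own lane:

    #include <stdint.h>

    /* ADD interpreted with packed-bitwidth = 8: eight lanes, and no
     * carry propagation between lanes. */
    static uint64_t packed_add8(uint64_t a, uint64_t b)
    {
        uint64_t r = 0;
        for (int lane = 0; lane < 8; lane++) {
            uint8_t s = (uint8_t)(a >> 8*lane) + (uint8_t)(b >> 8*lane);
            r |= (uint64_t)s << 8*lane;
        }
        return r;
    }
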
The extended encoding (Table 17.6) would not be part of Simple-V.

## 17.6 Base Vector Extension Supported Types

TODO: analyse. probably exactly the same.

## 17.7 Maximum Vector Element Width

No equivalent in Simple-V

## 17.8 Vector Configuration Registers

TODO: analyse.

## 17.9 Legal Vector Unit Configurations

TODO: analyse

## 17.10 Vector Unit CSRs

TODO: analyse

> Ok so this is an aspect of Simple-V that I hadn't thought through,
> yet (proposal / idea only a few days old!).  in V2.3-Draft ISA Section
> 17.10 the CSRs are listed.  I note that there's some general-purpose
> CSRs (including a global/active vector-length) and 16 vcfgN CSRs.  i
> don't precisely know what those are for.

>  In the Simple-V proposal, *every* register in both the integer
> register-file *and* the floating-point register-file would have at
> least a 2-bit "data-width" CSR and probably something like an 8-bit
> "vector-length" CSR (less in RV32E, by exactly one bit).

>  What I *don't* know is whether that would be considered perfectly
> reasonable or completely insane.  If it turns out that the proposed
> Simple-V CSRs can indeed be stored in SRAM then I would imagine that
> adding somewhere in the region of 10 bits per register would be... okay?
> I really don't honestly know.

>  Would these proposed 10-or-so-bit per-register Simple-V CSRs need to
> be multi-ported? No I don't believe they would.

## 17.11 Maximum Vector Length (MVL)

Implicitly, this is set to the size of the register file multiplied by
the number of 8-bit packed ints that can fit into a register (4 for
RV32, 8 for RV64 and 16 for RV128).

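Expressed as arithmetic (assuming a 32-entry register file):

    /* MVL = number of registers * 8-bit lanes per register. */
    enum { NREGS = 32 };
    static unsigned mvl(unsigned xlen_bits)
    {
        return NREGS * (xlen_bits / 8); /* RV32: 128, RV64: 256, RV128: 512 */
    }
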
## 17.12 Vector Instruction Formats

No equivalent in Simple-V because *all* instructions of *all* Extensions
are implicitly parallelised (and packed).

## 17.13 Polymorphic Vector Instructions

Polymorphism (implicit type-casting) is deliberately not supported
in Simple-V.

## 17.14 Rapid Configuration Instructions

TODO: analyse if this is useful to have an equivalent in Simple-V

## 17.15 Vector-Type-Change Instructions

TODO: analyse if this is useful to have an equivalent in Simple-V

## 17.16 Vector Length

Has a direct corresponding equivalent.

## 17.17 Predicated Execution

Predicated Execution is another name for "masking" or "tagging". Masked
(or tagged) implies that there is an indexed bit field, with each
bit associated with the correspondingly-indexed register within the
"Vector". If the tag / mask bit is 1, then when a parallel operation is
issued the indexed element of the vector has the operation carried out.
However if the tag / mask bit is *zero*, that particular indexed element
of the vector does *not* have the requested operation carried out.

In V2.3-draft V, there is a significant (not recommended) difference:
the zero-tagged elements are *set to zero*. This loses a *significant*
advantage of mask / tagging, particularly if the entire mask register
is itself a general-purpose register, as that general-purpose register
can be inverted, shifted, and'ed, or'ed and so on. In other words
it becomes possible, especially if Carry/Overflow from each vector
operation is also accessible, to do conditional (step-by-step) vector
operations, including things like turning vectors into 1024-bit or greater
operands with very few instructions, by treating the "carry" from
one instruction as a way to do "Conditional add of 1 to the register
next door". If V2.3-draft V sets zero-tagged elements to zero, such
extremely powerful techniques are simply not possible.

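A sketch of that "conditional add of 1 to the register next door"
technique in plain C, with each 64-bit value standing in for one element
of a vector: the carry-out of each element's add conditionally
increments its neighbour, which is precisely what a preserved
(non-zeroed) mask/carry register enables:

    #include <stdint.h>

    /* 256-bit add from four 64-bit elements: each carry-out becomes a
     * conditional +1 on the neighbouring element. */
    static void add256(uint64_t d[4], const uint64_t a[4], const uint64_t b[4])
    {
        unsigned carry = 0;
        for (int i = 0; i < 4; i++) {
            uint64_t s = a[i] + b[i];
            unsigned c1 = (s < a[i]);   /* carry-out of the raw add  */
            uint64_t t = s + carry;
            unsigned c2 = (t < s);      /* carry-out of the +1 step  */
            d[i] = t;
            carry = c1 | c2;            /* at most one can be set    */
        }
    }
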
It is noted that there is no mention of an equivalent to BEXT (element
skipping) which would be particularly fascinating and powerful to have.
In this mode, the "mask" would skip elements where its mask bit was zero
in either the source or the destination operand.

Lots to be discussed.

## 17.18 Vector Load/Store Instructions

These may not have a direct equivalent in Simple-V, except if mask/tagging
is to be deployed.

To be discussed.

## 17.19 Vector Register Gather

TODO

## TODO, sort

> However, there are also several features that go beyond simply attaching VL
> to a scalar operation and are crucial to being able to vectorize a lot of
> code. To name a few:
> - Conditional execution (i.e., predicated operations)
> - Inter-lane data movement (e.g. SLIDE, SELECT)
> - Reductions (e.g., VADD with a scalar destination)

Ok so the Conditional and also the Reductions are among the reasons
why as part of SimpleV / variable-SIMD / parallelism (gah gotta think
of a decent name) i proposed that it be implemented as "if you say r0
is to be a vector / SIMD that means operations actually take place on
r0,r1,r2... r(N-1)".

Consequently any parallel operation could be paused (or... more
specifically: vectors disabled by resetting it back to a default /
scalar / vector-length=1) yet the results would actually be in the
*main register file* (integer or float) and so anything that wasn't
possible to easily do in "simple" parallel terms could be done *out*
of parallel "mode" instead.

I do appreciate that the above does imply that there is a limit to the
length that SimpleV (whatever) can be parallelised, namely that you
run out of registers! my thought there was, "leave space for the main
V-Ext proposal to extend it to the length that V currently supports".
Honestly i had not thought through precisely how that would work.

Inter-lane (SELECT): i saw 17.19 in V2.3-Draft p117, I liked that,
it reminds me of the discussion with Clifford on bit-manipulation
(gather-scatter except not Bit Gather Scatter, *data* gather scatter): if
applied "globally and outside of V and P" SLIDE and SELECT might become
an extremely powerful way to do fast memory copy and reordering [2].

However I haven't quite got my head round how that would work: i am
used to the concept of register "tags" (the modern term is "masks")
and i *think* if "masks" were applied to a Simple-V-enhanced LOAD /
STORE you would get the exact same thing as SELECT.

SLIDE you could do simply by setting say r0 vector-length to say 16
(meaning that if referred to in any operation it would be an implicit
parallel operation on *all* registers r0 through r15), and temporarily
set say.... r7 vector-length to say... 5. Do a LOAD on r7 and it would
implicitly mean "load from memory into r7 through r11". Then you go
back and do an operation on r0 and ta-daa, you're actually doing an
operation on a SLID (SLIDed?) vector.

The advantage of Simple-V (whatever) over V would be that you could
actually do *operations* in the middle of vectors (not just SLIDEs)
simply by (as above) setting r0 vector-length to 16 and r7 vector-length
to 5. There would be nothing preventing you from doing an ADD on r0
(which meant do an ADD on r0 through r15) followed *immediately in the
next instruction with no setup cost* by a MUL on r7 (which actually meant
"do a parallel MUL on r7 through r11").

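A model of that overlap in C (the per-register vector-length CSRs and
the expansion rule are assumptions of this sketch):

    #include <stdint.h>

    static uint64_t regs[32];
    static unsigned vlen[32];          /* per-register vector-length CSRs */

    /* Any op on rd expands to vlen[rd] consecutive element operations. */
    static void vadd(unsigned rd, unsigned ra, unsigned rb)
    {
        for (unsigned i = 0; i < vlen[rd]; i++)
            regs[rd + i] = regs[ra + i] + regs[rb + i];
    }

    static void example(void)
    {
        vlen[0] = 16;                  /* r0 names the window r0..r15   */
        vlen[7] = 5;                   /* r7 names the window r7..r11   */
        vadd(0, 16, 16);               /* "ADD r0" = 16 element adds    */
        vadd(7, 12, 12);               /* mid-vector op on r7..r11,     */
                                       /* no setup cost in between      */
    }
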
btw it's worth mentioning that you'd get scalar-vector and vector-scalar
implicitly by having one of the source registers be vector-length 1
(the default) and one being N > 1. but without having special opcodes
to do it. i *believe* (or more like "logically infer or deduce" as
i haven't got access to the spec) that that would result in a further
opcode reduction when comparing [draft] V-Ext to [proposed] Simple-V.

Also, Reduction *might* be possible by specifying that the destination be
a scalar (vector-length=1) whilst the source be a vector. However... it
would be an awful lot of work to go through *every single instruction*
in *every* Extension, working out which ones could be parallelised (ADD,
MUL, XOR) and those that definitely could not (DIV, SUB). Is that worth
the effort? maybe. Would it result in huge complexity? probably.
Could an implementor just go "I ain't doing *that* as parallel!
let's make it virtual-parallelism (sequential reduction) instead"?
absolutely. So, now that I think it through, Simple-V (whatever)
covers Reduction as well. huh, that's a surprise.

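The speculated reduction rule as a sketch: destination vector-length 1,
source vector-length N, folded sequentially, which is exactly the
"virtual parallelism (sequential reduction)" fallback described above:

    #include <stdint.h>

    static uint64_t regs[32];

    /* "ADD rd, ra" with destination vector-length 1 and source
     * vector-length n: interpreted as a sequential reduction. */
    static void vadd_reduce(unsigned rd, unsigned ra, unsigned n)
    {
        uint64_t acc = 0;
        for (unsigned i = 0; i < n; i++)
            acc += regs[ra + i];   /* safe to serialise for ADD, MUL, XOR */
        regs[rd] = acc;
    }
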

> - Vector-length speculation (making it possible to vectorize some loops with
> unknown trip count) - I don't think this part of the proposal is written
> down yet.

Now that _is_ an interesting concept. A little scary, i imagine, with
the possibility of putting a processor into a hard infinite execution
loop... :)

> Also, note the vector ISA consumes relatively little opcode space (all the
> arithmetic fits in 7/8ths of a major opcode). This is mainly because data
> type and size is a function of runtime configuration, rather than of opcode.

yes. i love that aspect of V, i am a huge fan of polymorphism [1]
which is why i am keen to advocate that the same runtime principle be
extended to the rest of the RISC-V ISA [3].

Yikes that's a lot. I'm going to need to pull this into the wiki to
make sure it's not lost.

[1] inherent data type conversion: 25 years ago i designed a hypothetical
hyper-hyper-hyper-escape-code-sequencing ISA based around 2-bit
(escape-extended) opcodes and 2-bit (escape-extended) operands that
only required a fixed 8-bit instruction length. that relied heavily
on polymorphism and runtime size configurations as well. At the time
I thought it would have meant one HELL of a lot of CSRs... but then I
met RISC-V and was cured instantly of that delusion^Wmisapprehension :)

[2] Interestingly if you then also add in the other aspect of Simple-V
(the data-size, which is effectively functionally orthogonal / identical
to "Packed" of Packed-SIMD), masked and packed *and* vectored LOAD / STORE
operations become byte / half-word / word augmenters of B-Ext's proposed
"BGS" i.e. where B-Ext's BGS dealt with bits, masked-packed-vectored
LOAD / STORE would deal with 8 / 16 / 32 bits at a time. Where it
would get really REALLY interesting would be masked-packed-vectored
B-Ext BGS instructions. I can't even get my head fully round that,
which is a good sign that the combination would be *really* powerful :)

[3] ok sadly maybe not the polymorphism, it's too complicated and I
think would be much too hard for implementors to easily "slide in" to an
existing non-Simple-V implementation.  i say that despite really *really*
wanting IEEE 754 FP Half-precision to end up somewhere in RISC-V in some
fashion, for optimising 3D Graphics.  *sigh*.

## TODO: instructions (based on Hwacha) V-Ext duplication analysis

This is partly speculative due to lack of access to an up-to-date
V-Ext Spec (V2.3-draft RVV 0.4-Draft at the time of writing). However,
basing an analysis instead on Hwacha, a cursory examination shows over
an **85%** duplication of V-Ext operand-related instructions when
compared to Simple-V on a standard RV64G base. Even Vector Fetch
is analogous to "zero-overhead loop".

Exceptions are:

* Vector Indexed Memory Instructions (non-contiguous)
* Vector Atomic Memory Instructions.
* Some of the Vector Arithmetic ops: MADD, MSUB,
VSRL, VSRA, VEIDX, VFIRST, VSGNJN, VFSGNJX and potentially more.
* Consensual Jump

Table of RV32V Instructions

| RV32V | |
| ----- | --- |
| VADD | |
| VSUB | |
| VSL | |
| VSR | |
| VAND | |
| VOR | |
| VXOR | |
| VSEQ | |
| VSNE | |
| VSLT | |
| VSGE | |
| VCLIP | |
| VCVT | |
| VMPOP | |
| VMFIRST | |
| VEXTRACT | |
| VINSERT | |
| VMERGE | |
| VSELECT | |
| VSLIDE | |
| VDIV | |
| VREM | |
| VMUL | |
| VMULH | |
| VMIN | |
| VMAX | |
| VSGNJ | |
| VSGNJN | |
| VSGNJX | |
| VSQRT | |
| VCLASS | |
| VPOPC | |
| VADDI | |
| VSLI | |
| VSRI | |
| VANDI | |
| VORI | |
| VXORI | |
| VCLIPI | |
| VMADD | |
| VMSUB | |
| VNMADD | |
| VNMSUB | |
| VLD | |
| VLDS | |
| VLDX | |
| VST | |
| VSTS | |
| VSTX | |
| VAMOSWAP | |
| VAMOADD | |
| VAMOAND | |
| VAMOOR | |
| VAMOXOR | |
| VAMOMIN | |
| VAMOMAX | |

## TODO: sort

> I suspect that the "hardware loop" in question is actually a zero-overhead
> loop unit that diverts execution from address X to address Y if a certain
> condition is met.

Not quite. The zero-overhead loop unit interestingly would be at
an [independent] level above vector-length. The distinctions are
as follows:

* Vector-length issues *virtual* instructions where the register
operands are *specifically* altered (to cover a range of registers),
whereas zero-overhead loops *specifically* do *NOT* alter the operands
in *ANY* way.

* Vector-length-driven "virtual" instructions are driven by *one*
and *only* one instruction (whether it be a LOAD, STORE, or pure
one/two/three-operand opcode) whereas zero-overhead loop units
specifically apply to *multiple* instructions.

Where vector-length-driven "virtual" instructions might get conceptually
blurred with zero-overhead loops is LOAD / STORE. In the case of LOAD /
STORE, to actually be useful, vector-length-driven LOAD / STORE should
increment the LOAD / STORE memory address to correspondingly match the
increment in the register bank. Example:

* set vector-length for r0 to 4
* issue RV32 LOAD from addr 0x1230 to r0

translates effectively to:

* RV32 LOAD from addr 0x1230 to r0
* RV32 LOAD from addr 0x1234 to r1
* RV32 LOAD from addr 0x1238 to r2
* RV32 LOAD from addr 0x123C to r3

# P-Ext ISA

## 16-bit Arithmetic

| Mnemonic | 16-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD16 rt, ra, rb | add | RV ADD (bitwidth=16) |
| RADD16 rt, ra, rb | Signed Halving add | |
| URADD16 rt, ra, rb | Unsigned Halving add | |
| KADD16 rt, ra, rb | Signed Saturating add | |
| UKADD16 rt, ra, rb | Unsigned Saturating add | |
| SUB16 rt, ra, rb | sub | RV SUB (bitwidth=16) |
| RSUB16 rt, ra, rb | Signed Halving sub | |
| URSUB16 rt, ra, rb | Unsigned Halving sub | |
| KSUB16 rt, ra, rb | Signed Saturating sub | |
| UKSUB16 rt, ra, rb | Unsigned Saturating sub | |
| CRAS16 rt, ra, rb | Cross Add & Sub | |
| RCRAS16 rt, ra, rb | Signed Halving Cross Add & Sub | |
| URCRAS16 rt, ra, rb| Unsigned Halving Cross Add & Sub | |
| KCRAS16 rt, ra, rb | Signed Saturating Cross Add & Sub | |
| UKCRAS16 rt, ra, rb| Unsigned Saturating Cross Add & Sub | |
| CRSA16 rt, ra, rb | Cross Sub & Add | |
| RCRSA16 rt, ra, rb | Signed Halving Cross Sub & Add | |
| URCRSA16 rt, ra, rb| Unsigned Halving Cross Sub & Add | |
| KCRSA16 rt, ra, rb | Signed Saturating Cross Sub & Add | |
| UKCRSA16 rt, ra, rb| Unsigned Saturating Cross Sub & Add | |

## 8-bit Arithmetic

| Mnemonic | 8-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD8 rt, ra, rb | add | RV ADD (bitwidth=8) |
| RADD8 rt, ra, rb | Signed Halving add | |
| URADD8 rt, ra, rb | Unsigned Halving add | |
| KADD8 rt, ra, rb | Signed Saturating add | |
| UKADD8 rt, ra, rb | Unsigned Saturating add | |
| SUB8 rt, ra, rb | sub | RV SUB (bitwidth=8) |
| RSUB8 rt, ra, rb | Signed Halving sub | |
| URSUB8 rt, ra, rb | Unsigned Halving sub | |

# Exceptions

> What does an ADD of two different-sized vectors do in simple-V?

* if the two source operands' vector lengths are not the same, throw
an exception.
* if the destination operand is also a vector, and the source is longer
than the destination, throw an exception.

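As a sketch, the legality check an implementation would make at issue
time (names hypothetical):

    /* Vector-length legality for a two-source op such as ADD (sketch). */
    static int vlen_legal(unsigned vl_dest, unsigned vl_src1, unsigned vl_src2)
    {
        if (vl_src1 != vl_src2)
            return 0;                  /* source length mismatch: trap   */
        if (vl_dest > 1 && vl_src1 > vl_dest)
            return 0;                  /* source longer than vector dest */
        return 1;
    }
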
> And what about instructions like JALR?
> What does jumping to a vector do?

* Throw an exception. Whether that actually results in spawning threads
as part of the trap-handling remains to be seen.

# Implementing V on top of Simple-V

* Number of Offset CSRs extends from 2
* Extra register file: vector-file
* Setup of Vector length and bitwidth CSRs now can specify vector-file
as well as integer or float file.
* TODO

# Implementing P (renamed to DSP) on top of Simple-V

* Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
(caveat: anything not specified drops through to software-emulation / traps)
* TODO

# Analysis of CSR decoding on latency

<a name="csr_decoding_analysis"></a>

It could indeed have been logically deduced (or expected) that there
would be additional decode latency in this proposal, because in
overloading the opcodes to have different meanings there is guaranteed
to be some state, somewhere, directly related to registers.

There are several cases:

* All operands vector-length=1 (scalars), all operands
packed-bitwidth="default": instructions are passed through direct as if
Simple-V did not exist.  Simple-V is, in effect, completely disabled.
* At least one operand vector-length > 1, all operands
packed-bitwidth="default": any parallel vector ALUs placed on "alert",
virtual parallelism looping may be activated.
* All operands vector-length=1 (scalars), at least one
operand packed-bitwidth != default: degenerate case of SIMD,
implementation-specific complexity here (packed decode before ALUs or
*IN* ALUs)
* At least one operand vector-length > 1, at least one operand
packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
implementation-specific SIMD complexity kicks in (packed decode before
ALUs or *IN* ALUs).

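The four cases reduce to a two-bit classification in decode; a sketch
(all names illustrative):

    #include <stdbool.h>

    typedef enum {
        PASSTHROUGH,    /* Simple-V effectively disabled         */
        VECTOR_LOOP,    /* parallel ALUs / virtual looping alert */
        PACKED_SIMD,    /* degenerate SIMD case                  */
        VECTOR_PACKED   /* looping plus packed decode            */
    } sv_mode;

    /* any_vlen: some operand has vector-length > 1;
     * any_packed: some operand has packed-bitwidth != default. */
    static sv_mode classify(bool any_vlen, bool any_packed)
    {
        if (!any_vlen && !any_packed) return PASSTHROUGH;
        if ( any_vlen && !any_packed) return VECTOR_LOOP;
        if (!any_vlen &&  any_packed) return PACKED_SIMD;
        return VECTOR_PACKED;
    }
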
Bear in mind that the proposal includes that the decision whether
to parallelise in hardware or whether to virtual-parallelise (to
dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
of both, be done on a *per-operand basis*, so that implementors can
specifically choose to create an application-optimised implementation
that they believe (or know) will sell extremely well, without having
"Extra Standards-Mandated Baggage" that would otherwise blow their area
or power budget completely out the window.

Additionally, two possible CSR schemes have been proposed, in order to
greatly reduce CSR space:

* per-register CSRs (vector-length and packed-bitwidth)
* a smaller number of CSRs with the same information but with an *INDEX*
specifying WHICH register in one of three regfiles (vector, fp, int)
the length and bitwidth applies to.

(See "CSR vector-length and CSR SIMD packed-bitwidth" section for details)

In addition, LOAD/STORE has its own associated proposed CSRs that
mirror the STRIDE (but not yet STRIDE-SEGMENT?) functionality of
V (and Hwacha).

Also bear in mind that, for reasons of simplicity for implementors,
I was coming round to the idea of permitting implementors to choose
exactly which bitwidths they would like to support in hardware and which
to allow to fall through to software-trap emulation.

So the question boils down to:

* whether either (or both) of those two CSR schemes have significant
latency that could even potentially require an extra pipeline decode stage
* whether there are implementations that can be thought of which do *not*
introduce significant latency
* whether it is possible to explicitly (through quite simply
disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
the extreme of skipping an entire pipeline stage (if one is needed)
* whether packed bitwidth and associated regfile splitting is so complex
that it should definitely, definitely be made mandatory that implementors
move regfile splitting into the ALU, and what are the implications of that
* whether even if that *is* made mandatory, is software-trapped
"unsupported bitwidths" still desirable, on the basis that SIMD is such
a complete nightmare that *even* having a software implementation is
better, making Simple-V have more in common with a software API than
anything else.

Whilst the above may seem to be severe minuses, there are some strong
pluses:

* Significant reduction of V's opcode space: over 85%.
* Smaller reduction of P's opcode space: around 10%.
* The potential to use Compressed instructions in both Vector and SIMD
due to the overloading of register meaning (implicit vectorisation,
implicit packing)
* Not only present but also future extensions automatically gain parallelism.
* Already mentioned but worth emphasising: the simplification to compiler
writers and assembly-level writers of having the same consistent ISA
regardless of whether the internal level of parallelism (number of
parallel ALUs) is only equal to one ("virtual" parallelism), or is
greater than one, should not be underestimated.

# References

* SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
* Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
* Recommendation by Jacob Bachmeyer to make zero-overhead loop an
"implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
* Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
* First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
* B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
* Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
Figure 2 P17 and Section 3 on P16.
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
* Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>
* Predication <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/XoP4BfYSLXA>