\documentclass[slidestop]{beamer}
\usepackage{beamerthemesplit}
\title{Simple-V RISC-V Extension for Vectorisation and SIMD}
\author{Luke Kenneth Casson Leighton}

\huge{Simple-V RISC-V Extension for Vectors and SIMD}\\
\Large{Flexible Vectorisation}\\
\Large{(aka not so Simple-V?)}\\
\Large{(aka How to Parallelise the RISC-V ISA)}\\
\Large{[proposed for] Chennai 9th RISC-V Workshop}\\
\frame{\frametitle{Credits and Acknowledgements}

 \item The Designers of RISC-V\vspace{15pt}
 \item The RVV Working Group and contributors\vspace{15pt}
 \item Allen Baum, Jacob Bachmeyer, Xan Phung, Chuanhua Chang,\\
       Guy Lemurieux, Jonathan Neuschafer, Roger Brussee,
       and others\vspace{15pt}
 \item ISA-Dev Group Members\vspace{10pt}
\frame{\frametitle{Quick refresher on SIMD}

 \item SIMD is very easy to implement (and very seductive)\vspace{8pt}
 \item Parallelism is in the ALU\vspace{8pt}
 \item Zero-to-negligible impact on the rest of the core\vspace{8pt}

Where SIMD Goes Wrong:\vspace{10pt}
 \item See "SIMD instructions considered harmful"
       https://sigarch.org/simd-instructions-considered-harmful
 \item Setup and corner-cases alone are extremely complex.\\
       Hardware is easy, but software is hell.
 \item O($N^{6}$) ISA opcode proliferation!\\
       opcode, elwidth, veclen, src1-src2-dest hi/lo
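The combinatorial explosion above can be made concrete with a small sketch. The specific counts below (four sample ops, three element widths, three vector lengths) are illustrative assumptions, not figures from the proposal; the point is that hi/lo selection on each of src1, src2 and dest multiplies the opcode count by $2^3$ on top of everything else:

```python
# Illustrative only: count the distinct SIMD opcodes needed when every
# combination of operation, element width, vector length, and hi/lo
# lane selection (per src1/src2/dest) gets its own encoding.
from itertools import product

ops      = ["add", "sub", "mul", "shl"]   # sample base arithmetic ops
elwidths = [8, 16, 32]                    # element widths
veclens  = [2, 4, 8]                      # elements per register
hilo     = ["hi", "lo"]                   # register-half selection

# hi/lo applies independently to src1, src2 and dest: 2**3 variants each
combos = list(product(ops, elwidths, veclens, hilo, hilo, hilo))
print(len(combos))  # 4 * 3 * 3 * 8 = 288 opcodes from just 4 operations
```

Even with these deliberately tiny parameters, four operations balloon into hundreds of encodings, which is the O($N^{6}$) proliferation the slide refers to.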
\frame{\frametitle{Quick refresher on RVV}

 \item Extremely powerful (extensible to 256 registers)\vspace{10pt}
 \item Supports polymorphism, several datatypes (inc. FP16)\vspace{10pt}
 \item Requires a separate Register File (32 w/ext to 256)\vspace{10pt}
 \item Implemented as a separate pipeline (no impact on scalar)\vspace{10pt}

However...\vspace{10pt}
 \item 98 percent opcode duplication with rest of RV (CLIP)
 \item Extending RVV requires customisation not just of h/w:\\
       gcc, binutils also need customisation (and maintenance)
\frame{\frametitle{The Simon Sinek lowdown (Why, How, What)}

Implementors need flexibility in vectorisation to optimise for
area or performance depending on the scope:
embedded DSP, Mobile GPUs, Server CPUs and more.\\
Compilers also need flexibility in vectorisation to optimise for the cost
of pipeline setup, the amount of state to context-switch,
and software portability.

By marking INT/FP regs as "Vectorised" and adding a level of indirection,
SV expresses how existing instructions should act on [contiguous]
blocks of registers, in parallel, WITHOUT needing any new extra
arithmetic opcodes.

Simple-V is an "API" that implicitly extends
existing (scalar) instructions with explicit parallelisation\\
i.e. SV is actually about parallelism NOT vectors per se.\\
It has a lot in common with VLIW (without the actual VLIW).
\frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}

 \item memcpy becomes much smaller (higher bang-per-buck)
 \item context-switch (LOAD/STORE multiple): 1-2 instructions
 \item Compressed instrs further reduce I-cache usage (etc.)
 \item Reduced I-cache load (and fewer I-reads)
 \item Amazingly, SIMD becomes tolerable (no corner-cases)
 \item Modularity/Abstraction in both the h/w and the toolchain.
 \item "Reach" of registers accessible by Compressed is enhanced
 \item Future: double the standard INT/FP register file sizes.
 \item It's not just about Vectors: it's about instruction effectiveness
 \item Anything an implementor is not interested in HW-optimising,\\
       let it fall through to exceptions (implement as a trap).
\frame{\frametitle{How does Simple-V relate to RVV? What's different?}

 \item RVV is very heavy-duty (excellent for supercomputing)\vspace{8pt}
 \item Simple-V abstracts parallelism (based on the best of RVV)\vspace{8pt}
 \item Graded levels: hardware, hybrid or traps (fit impl. need)\vspace{8pt}
 \item Even Compressed instructions become vectorised (RVV can't)\vspace{8pt}
 \item No polymorphism in SV (too complex)\vspace{8pt}

What Simple-V is not:\vspace{4pt}
 \item A full supercomputer-level Vector Proposal
 \item A replacement for RVV (SV is designed to be over-ridden\\
       by - or augmented to become - RVV)
\frame{\frametitle{How is Parallelism abstracted in Simple-V?}

 \item Register "typing" turns any op into an implicit Vector op:\\
       registers are reinterpreted through a level of indirection
 \item Primarily at the Instruction issue phase (except SIMD)\\
       Note: it's ok to pass predication through to ALU (like SIMD)
 \item Standard (and future, and custom) opcodes now parallel\vspace{10pt}

Note: EVERYTHING is parallelised:
 \item All LOAD/STORE (inc. Compressed, Int/FP versions)
 \item All ALU ops (Int, FP, SIMD, DSP, everything)
 \item All branches become predication targets (C.FNE added?)
 \item C.MV of particular interest (s/v, v/v, v/s)
 \item FCVT, FMV, FSGNJ etc. very similar to C.MV
\frame{\frametitle{What's the deal / juice / score?}

 \item Standard Register File(s) overloaded with CSR "reg is vector"\\
       (see pseudocode slides for examples)
 \item "2nd FP\&INT register bank" possibility, reserved for future\\
       (would allow standard regfiles to remain unmodified)
 \item Element width concept remains the same as RVV\\
       (CSRs give new size: overrides opcode-defined meaning)
 \item CSRs are key-value tables (overlaps allowed: v. important)

Key differences from RVV:
 \item Predication in INT reg as a BIT field (max VL=XLEN)
 \item Minimum VL must be Num Regs - 1 (all regs single LD/ST)
 \item SV may condense sparse Vecs: RVV lets ALU do predication
 \item Choice to Zero or skip non-predicated elements
\begin{frame}[fragile]
\frametitle{ADD pseudocode (or trap, or actual hardware loop)}

function op\_add(rd, rs1, rs2, predr) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  for (i = 0; i < VL; i++)
    if (ireg[predr] & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (reg\_is\_vectorised[rd])  \{ id += 1; \}
    if (reg\_is\_vectorised[rs1]) \{ irs1 += 1; \}
    if (reg\_is\_vectorised[rs2]) \{ irs2 += 1; \}

 \item Above is oversimplified: Reg. indirection left out (for clarity).
 \item SIMD slightly more complex (case above is elwidth = default)
 \item Scalar-scalar, scalar-vector and vector-vector now all in one
 \item OoO may choose to push ADDs into instr. queue (v. busy!)

% yes it really *is* ADD not VADD. that's the entire point of
% this proposal, that *standard* operations are overloaded to
% become vectorised-on-demand
\begin{frame}[fragile]
\frametitle{Predication-Branch (or trap, or actual hardware loop)}

s1 = reg\_is\_vectorised(src1);
s2 = reg\_is\_vectorised(src2);
if (!s2 && !s1) goto branch;
for (int i = 0; i < VL; ++i)
  if (cmp(s1 ? reg[src1+i] : reg[src1],
          s2 ? reg[src2+i] : reg[src2])

 \item SIMD slightly more complex (case above is elwidth = default)
 \item If s1 and s2 are both scalars, a Standard branch occurs
 \item Predication stored in integer regfile as a bitfield
 \item Scalar-vector and vector-vector supported
 \item Overload Branch immediate to be predication target rs3
\begin{frame}[fragile]
\frametitle{VLD/VLD.S/VLD.X (or trap, or actual hardware loop)}

if (unit-strided) stride = elsize;
else stride = areg[as2]; // constant-strided
for (int i = 0; i < VL; ++i)
  if ([!]preg[rd] & 1<<i)
    for (int j = 0; j < seglen+1; j++)
      if (reg\_is\_vectorised[rs2]) offs = vreg[rs2+i]
      else offs = i*(seglen+1)*stride;
      vreg[rd+j][i] = mem[sreg[base] + offs + j*stride]

 \item Again: elwidth != default slightly more complex
 \item rs2 vectorised taken to implicitly indicate VLD.X
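The strided-load loop above can be rendered as a small executable model. Names (VL, seglen, the predicate bitmask) follow the slide's pseudocode; the flat word-addressed memory, the 4-byte element size, and the concrete register/address values are assumptions made up purely for the demo:

```python
# Hypothetical executable model of the VLD pseudocode: a predicated,
# optionally-segmented strided load into a "vector" of registers.
VL, seglen, elsize = 4, 0, 4   # 4 elements, no segmentation, 4-byte words
# toy memory: the word stored at byte address a is a//4
mem = {addr: addr // 4 for addr in range(0, 1024, 4)}

def vld(vreg, rd, base_addr, pred, unit_strided=True, const_stride=8):
    stride = elsize if unit_strided else const_stride
    for i in range(VL):
        if pred & (1 << i):            # predication: skip masked elements
            for j in range(seglen + 1):
                offs = i * (seglen + 1) * stride
                vreg[(rd + j, i)] = mem[base_addr + offs + j * stride]

vreg = {}
vld(vreg, rd=0, base_addr=64, pred=0b1011)   # element 2 masked out
print(vreg)  # {(0, 0): 16, (0, 1): 17, (0, 3): 19}
```

Masked-out element 2 is simply never written, matching the "skip" (non-zeroing) behaviour; the VLD.X indexed case would replace `offs` with a value fetched from a vectorised rs2, as the slide notes.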
\frame{\frametitle{Register key-value CSR store}

 \item key is int regfile number or FP regfile number (1 bit)
 \item treated as vector if referred to in op (5 bits, key)
 \item starting register to actually be used (5 bits, value)
 \item element bitwidth: default, dflt/2, 8, 16 (2 bits)
 \item is vector: Y/N (1 bit)
 \item is packed SIMD: Y/N (1 bit)
 \item register bank: 0/reserved for future ext. (1 bit)

 \item References different (internal) mapping table for INT or FP
 \item Level of indirection has implications for pipeline latency
 \item (future) bank bit, no need to extend opcodes: set bank=1,
       just use normal 5-bit regs, indirection takes care of the rest.
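The field sizes listed above total 1+5+5+2+1+1+1 = 16 bits, so one entry fits in a halfword. A minimal sketch of packing and unpacking such an entry follows; the bit ordering chosen here is an assumption for illustration only, not something the proposal specifies:

```python
# Sketch: pack one register key-value CSR entry into 16 bits using the
# field widths from the slide. Bit ordering is an invented convention.
def pack_entry(type_, regkey, regidx, elwidth, isvec, packed, bank):
    assert type_ < 2 and regkey < 32 and regidx < 32    # 1 + 5 + 5 bits
    assert elwidth < 4 and isvec < 2 and packed < 2 and bank < 2
    return (type_ | regkey << 1 | regidx << 6 |
            elwidth << 11 | isvec << 13 | packed << 14 | bank << 15)

def unpack_entry(v):
    return (v & 1, v >> 1 & 31, v >> 6 & 31,
            v >> 11 & 3, v >> 13 & 1, v >> 14 & 1, v >> 15 & 1)

# INT reg "x3" in an opcode becomes a vector starting at x16, elwidth dflt/2
fields = (0, 3, 16, 1, 1, 0, 0)
assert unpack_entry(pack_entry(*fields)) == fields   # lossless round-trip
print("16-bit entry: 0x%04x" % pack_entry(*fields))
```

The round-trip assertion is the point: every property the indirection tables need is recoverable from a single 16-bit key-value entry.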
\frame{\frametitle{Register element width and packed SIMD}

 \item default: RV32/64/128 opcodes define elwidth = 32/64/128
 \item default/2: RV32/64/128 opcodes, elwidth = 16/32/64 with
       top half of register ignored (src), zero'd/s-ext (dest)
 \item 8 or 16: elwidth = 8 (or 16), similar to default/2

Packed SIMD = Y (default is moot, packing is 1:1)
 \item default/2: 2 elements per register @ opcode-defined bitwidth
 \item 8 or 16: standard 8 (or 16) packed SIMD

 \item Different src/dest widths (and packs) PERMITTED
 \item RV* already allows (and defines) how RV32 ops work in RV64\\
       so just logically follow that lead/example.
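The packed case can be illustrated with a few lines: with Packed SIMD = Y on a hypothetical 32-bit register, elwidth = default/2 means two 16-bit elements share the register, and elwidth = 8 means four. The register value below is made-up demo data:

```python
# Sketch of elwidth redefinition under Packed SIMD = Y: slice one
# XLEN-wide register value into its packed elements, lowest lane first.
XLEN = 32

def unpack_elements(regval, elwidth):
    n = XLEN // elwidth                 # elements packed per register
    mask = (1 << elwidth) - 1
    return [(regval >> (i * elwidth)) & mask for i in range(n)]

r = 0xDEADBEEF
print([hex(e) for e in unpack_elements(r, 16)])  # ['0xbeef', '0xdead']
print([hex(e) for e in unpack_elements(r, 8)])   # ['0xef', '0xbe', '0xad', '0xde']
```

With Packed SIMD = N the same elwidth settings instead leave one element per register with the top half ignored or zero'd/sign-extended, as the slide describes.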
\begin{frame}[fragile]
\frametitle{Register key-value CSR table decoding pseudocode}

struct vectorised fp\_vec[32], int\_vec[32]; // 64 in future

for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_vec if CSRvec[i].type == 0 else fp\_vec
   idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
   tb[idx].elwidth  = CSRvec[i].elwidth
   tb[idx].regidx   = CSRvec[i].regidx   // indirection
   tb[idx].isvector = CSRvec[i].isvector
   tb[idx].packed   = CSRvec[i].packed   // SIMD or not
   tb[idx].bank     = CSRvec[i].bank     // 0 (1=rsvd)

 \item All 32 int (and 32 FP) entries zero'd before setup
 \item Might be a bit complex to set up in hardware (keep as CAM?)
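The decode loop above runs directly as Python with only minor dressing. Field names follow the slide's pseudocode; the default table contents and the sample CSR entry are invented for the demo:

```python
# Executable rendering of the key-value CSR decode: walk up to 16 CSR
# entries and expand them into per-register INT/FP lookup tables.
def decode_csrs(csrvec):
    # "zero'd before setup": identity mapping, scalar, default elwidth
    int_vec = [dict(elwidth=0, regidx=i, isvector=0, packed=0, bank=0)
               for i in range(32)]
    fp_vec  = [dict(elwidth=0, regidx=i, isvector=0, packed=0, bank=0)
               for i in range(32)]
    for e in csrvec:                     # up to 16 CSRs
        tb = int_vec if e["type"] == 0 else fp_vec
        idx = e["regkey"]                # reg number as seen in the opcode
        for f in ("elwidth", "regidx", "isvector", "packed", "bank"):
            tb[idx][f] = e[f]            # indirection target etc.
    return int_vec, fp_vec

# sample: INT "x3" in any opcode now means a vector starting at x16
csrs = [dict(type=0, regkey=3, regidx=16, elwidth=0,
             isvector=1, packed=0, bank=0)]
int_vec, fp_vec = decode_csrs(csrs)
print(int_vec[3]["regidx"], int_vec[3]["isvector"])  # 16 1
```

Every register not named by a CSR keeps its identity mapping, which is how scalar operations continue unmodified.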
\frame{\frametitle{Predication key-value CSR store}

 \item key is int regfile number or FP regfile number (1 bit)
 \item register to be predicated if referred to (5 bits, key)
 \item INT reg with actual predication mask (5 bits, value)
 \item predication is inverted Y/N (1 bit)
 \item non-predicated elements are to be zero'd Y/N (1 bit)
 \item register bank: 0/reserved for future ext. (1 bit)

 \item Table should be expanded out for high-speed implementations
 \item Key-value overlaps permitted, but (key+type) must be unique
 \item RVV rules about deleting higher-indexed CSRs followed
\begin{frame}[fragile]
\frametitle{Predication key-value CSR table decoding pseudocode}

struct pred fp\_pred[32], int\_pred[32]; // 64 in future

for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_pred if CSRpred[i].type == 0 else fp\_pred
   idx = CSRpred[i].regkey
   tb[idx].zero    = CSRpred[i].zero    // zeroing
   tb[idx].inv     = CSRpred[i].inv     // inverted
   tb[idx].predidx = CSRpred[i].predidx // actual reg
   tb[idx].bank    = CSRpred[i].bank    // 0 for now
   tb[idx].enabled = true

 \item All 32 int and 32 FP entries zero'd before setting
 \item Might be a bit complex to set up in hardware (keep as CAM?)
\begin{frame}[fragile]
\frametitle{Get Predication value pseudocode}

def get\_pred\_val(bool is\_fp\_op, int reg):
   tb = fp\_pred if is\_fp\_op else int\_pred
   if (!tb[reg].enabled):
      return ~0x0 // all ops enabled
   predidx = tb[reg].predidx   // redirection occurs HERE
   predicate = intreg[predidx] // actual predicate HERE
   if (tb[reg].inv):
      predicate = ~predicate   // invert ALL bits

 \item References different (internal) mapping table for INT or FP
 \item Actual predicate bitmask ALWAYS from the INT regfile
 \item Hard-limit on MVL of XLEN (predication only 1 intreg)
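Transcribed into runnable Python (with the table layout from the preceding decoding slide), the lookup behaves as follows; the table entries and predicate values are sample data invented for the demo:

```python
# Direct transcription of get_pred_val: the predicate always comes from
# the INT regfile, with optional inversion; a disabled table entry means
# "all elements enabled" (an all-ones mask).
XLEN = 32
intreg = [0] * 32
int_pred = [dict(enabled=False, predidx=0, inv=False) for _ in range(32)]
fp_pred  = [dict(enabled=False, predidx=0, inv=False) for _ in range(32)]

def get_pred_val(is_fp_op, reg):
    tb = fp_pred if is_fp_op else int_pred
    if not tb[reg]["enabled"]:
        return (1 << XLEN) - 1            # all ops enabled
    predidx = tb[reg]["predidx"]          # redirection occurs HERE
    predicate = intreg[predidx]           # actual predicate HERE
    if tb[reg]["inv"]:
        predicate ^= (1 << XLEN) - 1      # invert ALL bits
    return predicate

intreg[5] = 0b1010                        # the actual mask lives in x5
int_pred[3] = dict(enabled=True, predidx=5, inv=False)  # x3 -> masked by x5
print(bin(get_pred_val(False, 3)))        # 0b1010
print(get_pred_val(False, 7) == (1 << XLEN) - 1)  # True: no entry, all-ones
```

This also makes the XLEN hard-limit on MVL visible: one integer register holds at most XLEN predicate bits.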
\frame{\frametitle{To Zero or not to place zeros in non-predicated elements?}

 \item Zeroing is an implementation optimisation favouring OoO
 \item Simple implementations may skip non-predicated operations
 \item Simple implementations explicitly have to destroy data
 \item Complex implementations may use reg-renames to save power\\
       Zeroing on predication chains makes optimisation harder
 \item Compromise: REQUIRE both (specified in predication CSRs).

 \item Complex not really impacted, simple impacted a LOT\\
       with Zeroing... however it's useful (memzero)
 \item Non-zero'd overlapping "Vectors" may issue overlapping ops\\
       (2nd op's predicated elements slot in 1st's non-predicated ops)
 \item Please don't use Vectors for "security" (use Sec-Ext)

% with overlapping "vectors" - bearing in mind that "vectors" are
% just a remap onto the standard register file, if the top bits of
% predication are zero, and there happens to be a second vector
% that uses some of the same register file that happens to be
% predicated out, the second vector op may be issued *at the same time*
% if there are available parallel ALUs to do so.
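The two behaviours the CSR bit selects between can be shown side by side. The register contents are made-up demo values; the point is purely the difference in what happens to masked-out destination elements:

```python
# Demo of the zeroing choice: with zeroing ON, non-predicated destination
# elements are explicitly written with zero; with zeroing OFF they are
# skipped and keep their old values.
def pred_add(dest, src1, src2, pred, zero):
    for i in range(len(dest)):
        if pred & (1 << i):
            dest[i] = src1[i] + src2[i]
        elif zero:
            dest[i] = 0   # data destroyed: useful e.g. for memzero
        # else: skipped entirely (simple implementations may elide the op)

a, b = [1, 2, 3, 4], [10, 20, 30, 40]
d1 = [9, 9, 9, 9]; pred_add(d1, a, b, pred=0b0101, zero=True)
d2 = [9, 9, 9, 9]; pred_add(d2, a, b, pred=0b0101, zero=False)
print(d1)  # [11, 0, 33, 0]
print(d2)  # [11, 9, 33, 9]
```

The non-zero'd result (`d2`) preserving elements 1 and 3 is exactly what permits a second, oppositely-predicated vector op to share the same destination registers and potentially issue in parallel.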
\frame{\frametitle{Implementation Options}

 \item Absolute minimum: Exceptions: if CSRs indicate "V", trap.\\
       (Requires as absolute minimum that CSRs be in Hardware)
 \item Hardware loop, single-instruction issue\\
       (Do / Don't send through predication to ALU)
 \item Hardware loop, parallel (multi-instruction) issue\\
       (Do / Don't send through predication to ALU)
 \item Hardware loop, full parallel ALU (not recommended)

 \item The 4 (or more?) options above may be deployed on a per-op basis
 \item SIMD always sends predication bits to ALU (if requested)
 \item Minimum MVL MUST be sufficient to cover regfile LD/ST
 \item Instr. FIFO may repeatedly split off N scalar ops at a time

% Instr. FIFO may need its own slide. Basically, the vectorised op
% gets pushed into the FIFO, where it is then "processed". Processing
% will remove the first set of ops from its vector numbering (taking
% predication into account) and shove them **BACK** into the FIFO,
% but MODIFYING the remaining "vectorised" op, subtracting the now
% scalar ops from it.
\frame{\frametitle{Predicated 8-parallel ADD: 1-wide ALU}

\includegraphics[height=2.5in]{padd9_alu1.png}\\
{\bf \red Predicated adds are shuffled down: 6 cycles in total}

\frame{\frametitle{Predicated 8-parallel ADD: 4-wide ALU}

\includegraphics[height=2.5in]{padd9_alu4.png}\\
{\bf \red Predicated adds are shuffled down: 4 in 1st cycle, 2 in 2nd}

\frame{\frametitle{Predicated 8-parallel ADD: 3 phase FIFO expansion}

\includegraphics[height=2.5in]{padd9_fifo.png}\\
{\bf \red First cycle takes first four 1s; second takes the rest}
\begin{frame}[fragile]
\frametitle{ADD pseudocode with redirection (and proper predication)}

function op\_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  rd  = int\_vec[rd].isvector  ? int\_vec[rd].regidx  : rd;
  rs1 = int\_vec[rs1].isvector ? int\_vec[rs1].regidx : rs1;
  rs2 = int\_vec[rs2].isvector ? int\_vec[rs2].regidx : rs2;
  predval = get\_pred\_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    if (predval \& 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int\_vec[rd].isvector)  \{ id += 1; \}
    if (int\_vec[rs1].isvector) \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector) \{ irs2 += 1; \}

 \item SIMD (elwidth != default) not covered above
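An executable sketch of the same loop follows, with the CSR indirection tables and predicate passed in directly. The regfile contents, the specific vector start points (x16, x20, x24), and the predicate value are all invented demo data:

```python
# Executable sketch of the redirected, predicated ADD: the CSR tables
# first remap the opcode's register numbers, then the predicate gates
# each element. Element indices advance per the isvector flags, so
# scalar-scalar, scalar-vector and vector-vector all share one loop.
VL = 4
ireg = list(range(32))   # toy int regfile: ireg[n] == n initially
int_vec = [dict(isvector=False, regidx=n) for n in range(32)]
int_vec[3] = dict(isvector=True, regidx=16)   # "x3" = vector at x16..x19
int_vec[5] = dict(isvector=True, regidx=20)   # "x5" = vector at x20..x23
int_vec[7] = dict(isvector=True, regidx=24)   # "x7" = vector at x24..x27

def op_add(rd, rs1, rs2, predval):
    id_ = irs1 = irs2 = 0
    brd  = int_vec[rd ]["regidx"] if int_vec[rd ]["isvector"] else rd
    brs1 = int_vec[rs1]["regidx"] if int_vec[rs1]["isvector"] else rs1
    brs2 = int_vec[rs2]["regidx"] if int_vec[rs2]["isvector"] else rs2
    for i in range(VL):
        if predval & (1 << i):
            ireg[brd + id_] = ireg[brs1 + irs1] + ireg[brs2 + irs2]
        if int_vec[rd ]["isvector"]: id_  += 1
        if int_vec[rs1]["isvector"]: irs1 += 1
        if int_vec[rs2]["isvector"]: irs2 += 1

op_add(3, 5, 7, predval=0b1101)   # element 1 masked out (no zeroing)
print(ireg[16:20])  # [44, 17, 48, 50]
```

Element 1 of the destination (x17) is untouched, i.e. this models the skip (non-zeroing) choice; the zeroing variant would write 0 there instead.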
\frame{\frametitle{How are SIMD Instructions Vectorised?}

 \item SIMD ALU(s) primarily unchanged
 \item Predication added down to each SIMD element (if requested,
       otherwise the entire block will be predicated as a whole)
 \item Predication bits sent in groups to the ALU (if requested,
       otherwise just one bit for the entire packed block)
 \item End of Vector enables (additional) predication:
       completely nullifies end-case code (ONLY in multi-bit
       predication mode)

 \item Many SIMD ALUs possible (parallel execution)
 \item Implementor free to choose (API remains the same)
 \item Unused ALU units wasted, but s/w DRASTICALLY simpler
 \item Very long SIMD ALUs could waste significant die area

% With multiple SIMD ALUs at for example 32-bit wide they can be used
% to either issue 64-bit or 128-bit or 256-bit wide SIMD operations
% or they can be used to cover several operations on totally different
% vectors / registers.

\frame{\frametitle{Predicated 9-parallel SIMD ADD (Packed=Y)}

\includegraphics[height=2.5in]{padd9_simd.png}\\
{\bf \red 4-wide 8-bit SIMD, 4 bits of predicate passed to ALU}
\frame{\frametitle{Why are overlaps allowed in Regfiles?}

 \item Same register(s) can have multiple "interpretations"
 \item Set "real" register (scalar) without needing to set/unset CSRs.
 \item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
 \item (32-bit GREV plus 4x8-bit SIMD plus 32-bit GREV:\\
       GREV @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
 \item RGB565 (video): BEXTW plus 4x8-bit SIMD plus BDEPW\\
       (BEXT/BDEP @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
 \item Same register(s) can be offset (no need for VSLIDE)\vspace{6pt}

 \item xBitManip reduces O($N^{6}$) SIMD down to O($N^{3}$)
 \item Hi-Performance: Macro-op fusion (more pipeline stages?)
\frame{\frametitle{C.MV extremely flexible!}

 \item scalar-to-vector (w/ no pred): VSPLAT
 \item scalar-to-vector (w/ dest-pred): Sparse VSPLAT
 \item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
 \item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
 \item vector-to-vector (w/ no pred): Vector Copy
 \item vector-to-vector (w/ src pred): Vector Gather
 \item vector-to-vector (w/ dest pred): Vector Scatter
 \item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter

 \item Surprisingly powerful! Zero-predication even more so
 \item Same arrangement for FCVT, FMV, FSGNJ etc.
\begin{frame}[fragile]
\frametitle{MV pseudocode with predication}

function op\_mv(rd, rs) # MV not VMV!
  rd = int\_vec[rd].isvector ? int\_vec[rd].regidx : rd;
  rs = int\_vec[rs].isvector ? int\_vec[rs].regidx : rs;
  ps = get\_pred\_val(FALSE, rs); # predication on src
  pd = get\_pred\_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (int\_vec[rs].isvec) while (!(ps \& 1<<i)) i++;
    if (int\_vec[rd].isvec) while (!(pd \& 1<<j)) j++;
    ireg[rd+j] <= ireg[rs+i];
    if (int\_vec[rs].isvec) i++;
    if (int\_vec[rd].isvec) j++;

 \item elwidth != default not covered above (might be a bit hairy)
 \item Ending early with 1-bit predication not included (VINSERT)
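The twin-predication trick is easiest to see running. Below is a minimal executable sketch of the loop above (with vector registers modelled as plain lists and all values invented for the demo); separate source and destination predicates skipping masked elements are what turn one MV opcode into VSPLAT, gather and scatter:

```python
# Twin-predicated MV: ps masks/compacts the source, pd masks/spreads the
# destination. A scalar side simply never advances its index.
VL = 4

def op_mv(dst, src, src_isvec, dst_isvec, ps, pd):
    i = j = 0
    while i < VL and j < VL:
        if src_isvec:
            while not (ps & (1 << i)): i += 1   # skip masked src elements
        if dst_isvec:
            while not (pd & (1 << j)): j += 1   # skip masked dst slots
        dst[j] = src[i]
        if src_isvec: i += 1
        if dst_isvec: j += 1
        if not (src_isvec or dst_isvec): break  # scalar-scalar: plain MV

src = [10, 20, 30, 40]
gather = [0, 0, 0, 0]
op_mv(gather, src, True, True, ps=0b1010, pd=0b0011)  # src-pred: gather
print(gather)  # [20, 40, 0, 0]: enabled src elements compacted downwards

splat = [0, 0, 0, 0]
op_mv(splat, [7], False, True, ps=0, pd=0b1111)       # scalar src: VSPLAT
print(splat)   # [7, 7, 7, 7]
```

As on the slide, the demo assumes at least one predicate bit remains set on each vectorised side; the 1-bit early-exit (VINSERT/VEXTRACT) case is deliberately not modelled.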
\begin{frame}[fragile]
\frametitle{VSELECT: stays or goes? Stays if MV.X exists...}

def op_mv_x(rd, rs): # (hypothetical) RV MV.X
   rs = regfile[rs] # level of indirection (MV.X)
   regfile[rd] = regfile[rs] # straight regcopy

Vectorised version aka "VSELECT":

def op_mv_x(rd, rs): # SV version of MV.X
   for i in range(VL):
      rs1 = regfile[rs+i] # indirection
      regfile[rd+i] = regfile[rs1] # straight regcopy

 \item However MV.X does not exist in RV, so neither can VSELECT
 \item \red SV is not about adding new functionality, only parallelism
\frame{\frametitle{Opcodes, compared to RVV}

 \item All integer and FP opcodes removed (no CLIP, FNE)
 \item VMPOP, VFIRST etc. all removed (use xBitManip)
 \item VSLIDE removed (use regfile overlaps)
 \item C.MV covers VEXTRACT, VINSERT and VSPLAT (and more)
 \item Vector (or scalar-vector) copy: use C.MV (MV is a pseudo-op)
 \item VMERGE: twin predicated C.MVs (one inverted. macro-op'd)
 \item VSETVL, VGETVL stay (the only ops that do!)

 \item VSELECT stays? no MV.X, so no (add with custom ext?)
 \item VSNE exists, but no FNE (use predication inversion?)
 \item VCLIP is not in RV* (add with custom ext? or CSR?)
\begin{frame}[fragile]
\frametitle{Example c code: DAXPY}

void daxpy(size_t n, double a,
           const double x[], double y[])
\{
   for (size_t i = 0; i < n; i++) \{
      y[i] = a*x[i] + y[i];
   \}
\}

 \item See "SIMD Considered Harmful" for SIMD/RVV analysis\\
       https://sigarch.org/simd-instructions-considered-harmful/
\begin{frame}[fragile]
\frametitle{RVV DAXPY assembly (RV32V)}

# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    vsetdcfg t0             # enable 2 64b Fl.Pt. registers
loop:
    setvl  t0, a0           # vl = t0 = min(mvl, n)
    vld    v0, a1           # load vector x
    slli   t1, t0, 3        # t1 = vl * 8 (in bytes)
    vld    v1, a2           # load vector y
    add    a1, a1, t1       # increment pointer to x by vl*8
    vfmadd v1, v0, fa0, v1  # v1 += v0 * fa0 (y = a * x + y)
    sub    a0, a0, t0       # n -= vl (t0)
    vst    v1, a2           # store vector y
    add    a2, a2, t1       # increment pointer to y by vl*8
    bnez   a0, loop         # repeat if n != 0
\begin{frame}[fragile]
\frametitle{SV DAXPY assembly (RV64D)}

# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
    CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
loop:
    setvl  t0, a0, 4        # vl = t0 = min(4, n)
    ld     a3, a1           # load 4 registers a3-6 from x
    slli   t1, t0, 3        # t1 = vl * 8 (in bytes)
    ld     a7, a2           # load 4 registers a7-10 from y
    add    a1, a1, t1       # increment pointer to x by vl*8
    fmadd  a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub    a0, a0, t0       # n -= vl (t0)
    st     a7, a2           # store 4 registers a7-10 to y
    add    a2, a2, t1       # increment pointer to y by vl*8
    bnez   a0, loop         # repeat if n != 0
\frame{\frametitle{Under consideration}

 \item Should the future extra bank be included now?
 \item How many Register and Predication CSRs should there be?\\
       (and how many in RV32E)
 \item How many in M-Mode (for doing context-switch)?
 \item Should use of registers be allowed to "wrap" (x30 x31 x1 x2)?
 \item Can CLIP be done as a CSR (mode, like elwidth)?
 \item SIMD saturation (etc.) also set as a mode?
 \item Include src1/src2 predication on Comparison Ops?\\
       (same arrangement as C.MV, with same flexibility/power)
 \item For 8/16-bit ops, is it worthwhile adding a "start offset"?\\
       (a bit like misaligned addressing... for registers)\\
       or just use predication to skip the start?
\frame{\frametitle{What's the downside(s) of SV?}

 \item EVERY register operation is inherently parallelised\\
       (scalar ops are just vectors of length 1)\vspace{4pt}
 \item Tightly coupled with the core (instruction issue):\\
       could be disabled through a MISA switch\vspace{4pt}
 \item An extra pipeline phase is almost certainly essential\\
       for fast low-latency implementations\vspace{4pt}
 \item With zeroing off, skipping non-predicated elements is hard:\\
       it is however an optimisation (and need not be done).\vspace{4pt}
 \item Setting up the Register/Predication tables (interpreting the\\
       CSR key-value stores) might be a bit complex to optimise
       (any change to a CSR key-value entry needs to redo the table)
\frame{\frametitle{Summary}

 \item Actually about parallelism, not Vectors (or SIMD) per se,\\
       and NOT about adding new ALU/logic/functionality.
 \item Only needs 2 actual instructions (plus the CSRs).\\
       RVV - and "standard" SIMD - require ISA duplication
 \item Designed for flexibility (graded levels of complexity)
 \item Huge range of implementor freedom
 \item Fits RISC-V ethos: achieve more with less
 \item Reduces SIMD ISA proliferation by 3-4 orders of magnitude\\
       (without SIMD downsides or sacrificing the speed trade-off)
 \item Covers 98\% of RVV, allows RVV to fit "on top"
 \item Byproduct of SV is a reduction in code size and power usage
       (increased efficiency, just like Compressed)
{\Huge The end\vspace{20pt}\\
      Thank you\vspace{20pt}\\
      Questions?\vspace{20pt}

 \item Discussion: ISA-DEV mailing list
 \item http://libre-riscv.org/simple\_v\_extension/