Where SIMD Goes Wrong:\vspace{10pt}
\begin{itemize}
\item See "SIMD instructions considered harmful"\\
https://sigarch.org/simd-instructions-considered-harmful
\item Setup and corner-cases alone are extremely complex.\\
Hardware is easy, but software is hell.
\item O($N^{6}$) ISA opcode proliferation!\\
opcode, elwidth, veclen, src1-src2-dest hi/lo
\begin{itemize}
\item 98\% opcode duplication with rest of RV (CLIP)
\item Extending RVV requires customisation not just of h/w:\\
gcc, binutils also need customisation (and maintenance)
\end{itemize}
}
on [contiguous] blocks of registers, in parallel.\vspace{4pt}
\item What?
Simple-V is an "API" that implicitly extends
existing (scalar) instructions with explicit parallelisation\\
(i.e. SV is actually about parallelism NOT vectors per se)
\end{itemize}
}
\item Greatly-reduced I-cache load (and fewer reads)
\item Amazingly, SIMD becomes (more) tolerable\\
(corner-cases for setup and teardown are gone)
\item Modularity/Abstraction in both the h/w and the toolchain.
\item "Reach" of registers accessible by Compressed is enhanced
\end{itemize}
Note:
\begin{itemize}
\item It's not just about Vectors: it's about instruction effectiveness
\item Anything an implementor is not interested in HW-optimising\\
can fall through to exceptions (implemented as a trap).
\end{itemize}
\begin{itemize}
\item A full supercomputer-level Vector Proposal
\item A replacement for RVV (SV is designed to be over-ridden\\
by - or augmented to become - RVV)
\end{itemize}
}
Note: EVERYTHING is parallelised:
\begin{itemize}
\item All LOAD/STORE (inc. Compressed, Int/FP versions)
\item All ALU ops (Int, FP, SIMD, DSP, everything)
\item All branches become predication targets (C.FNE added?)
\item C.MV of particular interest (s/v, v/v, v/s)
\item FCVT, FMV, FSGNJ etc. very similar to C.MV
\frame{\frametitle{Implementation Options}
\begin{itemize}
\item Absolute minimum: Exceptions: if CSRs indicate "V", trap.\\
(Requires as absolute minimum that CSRs be in H/W)
\item Hardware loop, single-instruction issue\\
(Do / Don't send through predication to ALU)
\item Hardware loop, parallel (multi-instruction) issue\\
(Do / Don't send through predication to ALU)
\item Hardware loop, full parallel ALU (not recommended)
\end{itemize}
Notes:\vspace{4pt}
\begin{itemize}
\item 4 (or more?) options above may be deployed on per-op basis
\item SIMD always sends predication bits through to ALU
\item SIMD ALU(s) primarily unchanged\vspace{6pt}
\item Predication is added to each SIMD element\vspace{6pt}
\item Predication bits sent in groups to the ALU\vspace{6pt}
\item End of Vector enables (additional) predication\\
(completely nullifies need for end-case code)
\end{itemize}
Considerations:\vspace{4pt}
\begin{itemize}
\item Standard Register File(s) overloaded with CSR "reg is vector"\\
(see pseudocode slides for examples)
\item Element width (and type?) concepts remain same as RVV\\
(CSRs give new size (and meaning?) to elements in registers)
\item CSRs are key-value tables (overlaps allowed)\vspace{10pt}
\end{itemize}
Key differences from RVV:\vspace{10pt}
s2 = reg\_is\_vectorised(src2);
if (!s2 && !s1) goto branch;
for (int i = 0; i < VL; ++i)
   if (cmp(s1 ? reg[src1+i] : reg[src1],
           s2 ? reg[src2+i] : reg[src2]))
      ireg[rs3] |= 1<<i;
\end{semiverbatim}
\begin{itemize}
\item If s1 and s2 both scalars, Standard branch occurs
\item Predication stored in integer regfile as a bitfield
\item Scalar-vector and vector-vector supported
\item Overload Branch immediate to be predication target rs3
\end{itemize}
\end{frame}
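The compare pseudocode above can be modelled as an executable sketch (not normative; the `vectorised` and `cmp` callbacks are hypothetical stand-ins for the CSR "reg is vector" lookup and the branch condition):

```python
# Sketch of the predicated-branch compare: instead of branching, a
# scalar/vector compare fills a predicate bitfield held in an
# ordinary integer register.

def sv_branch_cmp(reg, src1, src2, rs3, VL, vectorised, cmp):
    s1 = vectorised(src1)
    s2 = vectorised(src2)
    if not s1 and not s2:
        return None  # both operands scalar: standard branch occurs
    pred = 0
    for i in range(VL):
        a = reg[src1 + i] if s1 else reg[src1]
        b = reg[src2 + i] if s2 else reg[src2]
        if cmp(a, b):
            pred |= 1 << i
    reg[rs3] = pred  # predicate lands in the integer regfile
    return pred
```

For example, comparing a 4-element vector starting at x4 against a scalar in x8 leaves a 4-bit mask in x9, ready for later predicated operations.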
\vspace{4pt}
Notes:
\begin{itemize}
\item Surprisingly powerful! Zero-predication even more so
\item Same arrangement for FCVT, FMV, FSGNJ etc.
\end{itemize}
}
\end{frame}
\begin{frame}[fragile]
\frametitle{VSELECT: stays or goes? Stays if MV.X exists...}

\begin{semiverbatim}
def op_mv_x(rd, rs): # (hypothetical) RV MV.X
    rs = regfile[rs] # level of indirection (MV.X)
    regfile[rd] = regfile[rs] # straight regcopy
\end{semiverbatim}

Vectorised version aka "VSELECT":

\begin{semiverbatim}
def op_mv_x(rd, rs): # SV version of MV.X
    for i in range(VL):
        rs1 = regfile[rs+i] # indirection
        regfile[rd+i] = regfile[rs1] # straight regcopy
\end{semiverbatim}

 \begin{itemize}
   \item However MV.X does not exist in RV, so neither can VSELECT
   \item \red SV is not about adding new functionality, only parallelism
 \end{itemize}


\end{frame}
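The vectorised MV.X loop above runs as ordinary Python once `regfile` is a plain list and VL is passed explicitly (an illustrative model, not an ISA definition):

```python
# Model of the (hypothetical) vectorised MV.X aka "VSELECT": the
# source registers hold *register numbers*, giving one level of
# indirection per element.

def vselect(rd, rs, VL, regfile):
    for i in range(VL):
        rs1 = regfile[rs + i]           # indirection
        regfile[rd + i] = regfile[rs1]  # straight regcopy
```

With index registers x8-x11 holding 3,2,1,0, a VSELECT reverses the four values stored in x0-x3 into x16-x19.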
\frame{\frametitle{Opcodes, compared to RVV}
\begin{itemize}
 \item All integer and FP opcodes removed (no CLIP, FNE)
 \item VMPOP, VFIRST etc. all removed (use xBitManip)
 \item VSLIDE removed (use regfile overlaps)
 \item C.MV covers VEXTRACT, VINSERT and VSPLAT (and more)
 \item Vector (or scalar-vector) copy: use C.MV (MV is a pseudo-op)
 \item VMERGE: twin predicated C.MVs (one inverted; macro-op'd)
 \item VSETVL, VGETVL stay (the only ops that do!)
 \end{itemize}
 Issues:
 \begin{itemize}
 \item VSELECT stays? no MV.X, so no (add with custom ext?)
 \item VSNE exists, but no FNE (use predication inversion?)
 \item VCLIP is not in RV* (add with custom ext?)
 \end{itemize}
}
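The VMERGE point above can be sketched as two predicated register copies, the second with the predicate inverted (helper names are illustrative, not ISA mnemonics):

```python
# VMERGE modelled as twin predicated C.MV-style copies: one move
# uses the predicate, the other its inverse, so every destination
# element is written exactly once.

def predicated_mv(rd, rs, pred, VL, regfile):
    for i in range(VL):
        if (pred >> i) & 1:
            regfile[rd + i] = regfile[rs + i]

def vmerge(rd, rs1, rs2, pred, VL, regfile):
    predicated_mv(rd, rs1, pred, VL, regfile)
    predicated_mv(rd, rs2, ~pred & ((1 << VL) - 1), VL, regfile)
```

Macro-op fusion of the two moves would recover single-instruction VMERGE behaviour in hardware.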
\begin{frame}[fragile]
\frametitle{Example C code: DAXPY}

\begin{semiverbatim}
    void daxpy(size_t n, double a,
               const double x[], double y[])
    \{
        for (size_t i = 0; i < n; i++) \{
            y[i] = a*x[i] + y[i];
        \}
    \}
\end{semiverbatim}

 \begin{itemize}
   \item See "SIMD Considered Harmful" for SIMD/RVV analysis\\
         https://sigarch.org/simd-instructions-considered-harmful/
 \end{itemize}

\end{frame}

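A direct transcription of the kernel, useful as a reference model when checking the vectorised versions:

```python
# Reference DAXPY: y := a*x + y, elementwise, in place.  Mirrors
# the C code above; operates on the n leading elements of x and y.

def daxpy(n, a, x, y):
    for i in range(n):
        y[i] = a * x[i] + y[i]
```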
\begin{frame}[fragile]
\frametitle{RVV DAXPY assembly (RV32V)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    li t0, 2<<25
    vsetdcfg t0             # enable 2 64b Fl.Pt. registers
loop:
    setvl t0, a0            # vl = t0 = min(mvl, n)
    vld v0, a1              # load vector x
    slli t1, t0, 3          # t1 = vl * 8 (in bytes)
    vld v1, a2              # load vector y
    add a1, a1, t1          # increment pointer to x by vl*8
    vfmadd v1, v0, fa0, v1  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0          # n -= vl (t0)
    vst v1, a2              # store Y
    add a2, a2, t1          # increment pointer to y by vl*8
    bnez a0, loop           # repeat if n != 0
\end{semiverbatim}
\end{frame}

\begin{frame}[fragile]
\frametitle{SV DAXPY assembly (RV64D)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
    CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
loop:
    setvl t0, a0, 4        # vl = t0 = min(4, n)
    ld a3, a1              # load 4 registers a3-6 from x
    slli t1, t0, 3         # t1 = vl * 8 (in bytes)
    ld a7, a2              # load 4 registers a7-10 from y
    add a1, a1, t1         # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0         # n -= vl (t0)
    st a7, a2              # store 4 registers a7-10 to y
    add a2, a2, t1         # increment pointer to y by vl*8
    bnez a0, loop          # repeat if n != 0
\end{semiverbatim}
\end{frame}

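The SV listing above strip-mines in groups of at most four registers; a behavioural model of that loop structure (setvl capped at an assumed mvl of 4) might look like:

```python
# Behavioural model of the SV DAXPY loop: setvl grants
# vl = min(mvl, n) each iteration, the element loop stands in for
# the vectorised ld/fmadd/st group, and n is decremented by vl
# exactly as `sub a0, a0, t0` does in the assembly.

def sv_daxpy(n, a, x, y, mvl=4):
    i = 0
    while n > 0:
        vl = min(mvl, n)        # setvl t0, a0, 4
        for j in range(vl):     # hardware element loop
            y[i + j] = a * x[i + j] + y[i + j]
        i += vl                 # pointers advance by vl*8 bytes
        n -= vl                 # n -= vl
```

Note the final pass handles the n mod 4 tail with the same code: no end-case cleanup loop is needed.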
\frame{\frametitle{Under consideration}
\begin{itemize}
\item Can VSELECT be removed? (it's really complex)
\item Can CLIP be done as a CSR (mode, like elwidth)?
\item SIMD saturation (etc.) also set as a mode?
 \item Include src1/src2 predication on Comparison Ops?\\
       (same arrangement as C.MV, with same flexibility/power)
\item 8/16-bit ops: is it worthwhile adding a "start offset"? \\
(a bit like misaligned addressing... for registers)\\
or just use predication to skip start?
\begin{itemize}
\item EVERY register operation is inherently parallelised\\
(scalar ops are just vectors of length 1)\vspace{4pt}
 \item Tightly coupled with the core (instruction issue)\\
       could be disabled through MISA switch\vspace{4pt}
 \item An extra pipeline phase almost certainly essential\\
       for fast low-latency implementations\vspace{4pt}
\item With zeroing off, skipping non-predicated elements is hard:\\
it is however an optimisation (and could be skipped).\vspace{4pt}
\item Setting up the Register/Predication tables (interpreting the\\
}
\frame{\frametitle{Summary}
\begin{itemize}
 \item Actually about parallelism, not Vectors (or SIMD) per se\\
       and NOT about adding new ALU/logic/functionality.
 \item Only needs 2 actual instructions (plus the CSRs).\\
       RVV - and "standard" SIMD - require ISA duplication
\item Designed for flexibility (graded levels of complexity)
\item Huge range of implementor freedom
\item Fits RISC-V ethos: achieve more with less
\item Reduces SIMD ISA proliferation by 3-4 orders of magnitude \\
(without SIMD downsides or sacrificing speed trade-off)
\item Covers 98\% of RVV, allows RVV to fit "on top"
 \item Byproduct of SV is a reduction in code size, power usage
       etc. (increase efficiency, just like Compressed)
\end{itemize}
}
\frame{
\begin{center}
{\Huge The end\vspace{20pt}\\
       Thank you\vspace{20pt}\\
       Questions?\vspace{20pt}
}
\end{center}

 \begin{itemize}
   \item Discussion: ISA-DEV mailing list
   \item http://libre-riscv.org/simple\_v\_extension/
 \end{itemize}
}