\frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}
\begin{itemize}
  \item memcpy becomes much smaller (higher bang-per-buck)
  \item context-switch (LOAD/STORE multiple): 1-2 instructions
  \item Compressed instrs further reduce I-cache usage (etc.)
  \item Greatly-reduced I-cache load (and fewer reads)
  \item Amazingly, SIMD becomes (more) tolerable\\
        (corner-cases for setup and teardown are gone)
 \end{itemize}
  Note:
\begin{itemize}
\item It's not just about Vectors: it's about instruction effectiveness
+ \item Anything that makes SIMD tolerable has to be a good thing
  \item Anything the implementor is not interested in HW-optimising,\\
        let it fall through to exceptions (implement as a trap).
\end{itemize}
  Note: it's ok to pass predication through to the ALU (as SIMD does)
 \begin{itemize}
  \item Standard (and future, and custom) opcodes now parallel\vspace{10pt}
 \end{itemize}
  Note: EVERYTHING is parallelised:
\begin{itemize}
\item All LOAD/STORE (inc. Compressed, Int/FP versions)
\item All ALU ops (soft / hybrid / full HW, on per-op basis)
  \item All branches become predication targets (C.FNE added?)
\item C.MV of particular interest (s/v, v/v, v/s)
+ \item FCVT, FMV, FSGNJ etc. very similar to C.MV
\end{itemize}
}
% but MODIFYING the remaining "vectorised" op, subtracting the now
% scalar ops from it.
\frame{\frametitle{Predicated 8-parallel ADD: 1-wide ALU}
 \begin{center}
  \includegraphics[height=2.5in]{padd9_alu1.png}\\
  {\bf \red Predicated adds are shuffled down: 6 cycles in total}
 \end{center}
}
+
+
\frame{\frametitle{Predicated 8-parallel ADD: 4-wide ALU}
 \begin{center}
  \includegraphics[height=2.5in]{padd9_alu4.png}\\
  {\bf \red Predicated adds are shuffled down: 4 in 1st cycle, 2 in 2nd}
 \end{center}
}
\frame{\frametitle{How are SIMD Instructions Vectorised?}
\begin{itemize}
  \item SIMD ALU(s) primarily unchanged\vspace{6pt}
  \item Predication is added to each SIMD element\vspace{6pt}
  \item Predication bits are sent in groups to the ALU\vspace{6pt}
  \item End of Vector enables (additional) predication\vspace{10pt}
\end{itemize}
  Considerations:\vspace{4pt}
\begin{itemize}
  \item Many SIMD ALUs possible (parallel execution)
  \item Implementor free to choose (API remains the same)
  \item Unused ALU units are wasted, but s/w is DRASTICALLY simpler
  \item Very long SIMD ALUs could waste significant die area
\end{itemize}
}
% With multiple SIMD ALUs at for example 32-bit wide they can be used
\frame{\frametitle{What's the deal / juice / score?}
\begin{itemize}
  \item Standard Register File(s) overloaded with CSR "reg is vector"\\
        (see pseudocode slides for examples)
  \item Element width (and type?) concepts remain the same as RVV\\
        (CSRs are used to "interpret" elements in registers)
  \item CSRs are key-value tables (overlaps allowed)\vspace{10pt}
\end{itemize}
 \begin{semiverbatim}
for (int i = 0; i < VL; ++i)
  if (preg_enabled[rd] && ([!]preg[rd] & 1<<i))
    for (int j = 0; j < seglen+1; j++)
      if (reg_is_vectorised[rs2]) offs = vreg[rs2+i]
      else offs = i*(seglen+1)*stride;
      vreg[rd+j][i] = mem[sreg[base] + offs + j*stride]
 \end{semiverbatim}
}
\frame{\frametitle{Why are overlaps allowed in Regfiles?}
\begin{itemize}
  \item Same register(s) can have multiple "interpretations"
  \item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
  \item (32-bit GREV plus 4x8-bit SIMD plus 32-bit GREV:\\
        GREV @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
  \item RGB 565 (video): BEXTW plus 4x8-bit SIMD plus BDEPW\\
        (BEXT/BDEP @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
\item Same register(s) can be offset (no need for VSLIDE)\vspace{6pt}
\end{itemize}
}
\frame{\frametitle{C.MV extremely flexible!}
\begin{itemize}
  \item scalar-to-vector (w/ no pred): VSPLAT
  \item scalar-to-vector (w/ dest-pred): Sparse VSPLAT
  \item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
  \item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
  \item vector-to-vector (w/ no pred): Vector Copy
  \item vector-to-vector (w/ src pred): Vector Gather
  \item vector-to-vector (w/ dest pred): Vector Scatter
  \item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter
 \end{itemize}
 \vspace{4pt}
 Notes:
\begin{itemize}
  \item Surprisingly powerful!
  \item Same arrangement for FCVT, FMV, FSGNJ etc.
\end{itemize}
}
\frame{\frametitle{Under consideration}
 \begin{itemize}
  \item Can VSELECT be removed? (it's really complex)
  \item Can CLIP be done as a CSR (mode, like elwidth)?
  \item SIMD saturation (etc.) also set as a mode?
  \item 8/16-bit ops: is it worthwhile adding a "start offset"?\\
        (a bit like misaligned addressing... for registers)\\
        or just use predication to skip the start?
 \end{itemize}
}
\frame{\frametitle{What's the downside(s) of SV?}
\begin{itemize}
\item EVERY register operation is inherently parallelised\\
        (scalar ops are just vectors of length 1)\vspace{8pt}
\item An extra pipeline phase is pretty much essential\\
        for fast low-latency implementations\vspace{8pt}
\item Assuming an instruction FIFO, N ops could be taken off\\
of a parallel op per cycle (avoids filling entire FIFO;\\
        also is less work per cycle: lower complexity / latency)\vspace{8pt}
\item With zeroing off, skipping non-predicated elements is hard:\\
it is however an optimisation (and could be skipped).
 \end{itemize}
}
\frame{\frametitle{Summary}
\begin{itemize}
  \item Actually about parallelism, not Vectors (or SIMD) per se
  \item Designed for flexibility (graded levels of complexity)
  \item Huge range of implementor freedom
  \item Fits RISC-V ethos: achieve more with less
  \item Reduces SIMD ISA proliferation by 3-4 orders of magnitude\\
        (without SIMD downsides or sacrificing speed trade-off)
  \item Covers 98\% of RVV, allows RVV to fit "on top"
  \item Not designed for supercomputing (that's RVV), designed for\\
        in between: DSPs, RV32E, Embedded 3D GPUs etc.
  \item Not specifically designed for Vectorisation: designed to\\
        reduce code size (increase efficiency, just like Compressed)
\end{itemize}
}