https://sigarch.org/simd-instructions-considered-harmful
\item Setup and corner-cases alone are extremely complex.\\
Hardware is easy, but software is hell.
 \item O($N^{6}$) ISA opcode proliferation (1000s of instructions)\\
opcode, elwidth, veclen, src1-src2-dest hi/lo
\end{itemize}
}
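The multiplicative blow-up claimed above can be made concrete with a toy count. A hedged Python sketch follows; the figures (50 base ops, 4 element widths, 4 vector lengths, 4 hi/lo placements) are illustrative assumptions, not taken from any real ISA manual:

```python
# Illustrative sketch only: how fixed-width SIMD encodings multiply,
# versus a vector ISA that holds elwidth/veclen in state.

def simd_opcode_count(n_ops, n_elwidths, n_veclens, n_hilo):
    # every (opcode, elwidth, veclen, src1-src2-dest hi/lo) combination
    # needs its own encoding: the multiplicative proliferation above
    return n_ops * n_elwidths * n_veclens * n_hilo

def vector_opcode_count(n_ops):
    # elwidth and veclen live in CSRs/state: one encoding per operation
    return n_ops

# assumed example: 50 base ops x 4 widths x 4 lengths x 4 hi/lo placements
simd = simd_opcode_count(50, 4, 4, 4)   # 3200 distinct encodings
vec  = vector_opcode_count(50)          # 50 distinct encodings
```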
\begin{itemize}
\item Effectively a variant of SIMD / SIMT (arbitrary length)\vspace{4pt}
 \item Fascinatingly, despite being a SIMD-variant, RVV only has
       O(N) opcode proliferation! (extremely well designed)
\item Extremely powerful (extensible to 256 registers)\vspace{4pt}
\item Supports polymorphism, several datatypes (inc. FP16)\vspace{4pt}
\item Requires a separate Register File (32 w/ext to 256)\vspace{4pt}
 \item Implemented as a separate pipeline (no impact on scalar)
\end{itemize}
However...
\begin{itemize}
 \item 98 percent opcode duplication with rest of RV
\item Extending RVV requires customisation not just of h/w:\\
gcc, binutils also need customisation (and maintenance)
\end{itemize}
}
\frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}
\begin{itemize}
 \item memcpy has a much higher bang-per-buck ratio
\item context-switch (LOAD/STORE multiple): 1-2 instructions
\item Compressed instrs further reduce I-cache usage (etc.)
\item Reduced I-cache load (and fewer I-reads)
\end{itemize}
}
\frame{\frametitle{How does Simple-V relate to RVV? What's different?}
\begin{itemize}
 \item RVV very heavy-duty (excellent for supercomputing)\vspace{4pt}
 \item Simple-V abstracts parallelism (based on best of RVV)\vspace{4pt}
 \item Graded levels: hardware, hybrid or traps (fit impl. need)\vspace{4pt}
 \item Even Compressed become vectorised (RVV can't)\vspace{4pt}
 \item No polymorphism in SV (too complex)\vspace{4pt}
\end{itemize}
What Simple-V is not:\vspace{4pt}
\begin{itemize}
 \item A full supercomputer-level Vector Proposal\\
       (it's not actually a Vector Proposal at all!)
\item A replacement for RVV (SV is designed to be over-ridden\\
by - or augmented to become - RVV)
\end{itemize}
\begin{itemize}
\item Standard and future and custom opcodes now parallel\\
(crucially: with NO extra instructions needing to be added)
\end{itemize}
Note: EVERY scalar op is now parallelisable:
\begin{itemize}
\item All LOAD/STORE (inc. Compressed, Int/FP versions)
\item All ALU ops (Int, FP, SIMD, DSP, everything)
 \item All branches become predication targets (note: no FNE)
\item C.MV of particular interest (s/v, v/v, v/s)
\item FCVT, FMV, FSGNJ etc. very similar to C.MV
\end{itemize}
\frametitle{Register key-value CSR table decoding pseudocode}
\begin{semiverbatim}
struct vectorised fp\_vec[32], int\_vec[32];
for (i = 0; i < 16; i++) // 16 CSRs?
tb = int\_vec if CSRvec[i].type == 0 else fp\_vec
idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
tb[idx].elwidth = CSRvec[i].elwidth
tb[idx].regidx = CSRvec[i].regidx // indirection
    tb[idx].regidx += CSRvec[i].bank << 5 // 0 (1=rsvd)
tb[idx].isvector = CSRvec[i].isvector
tb[idx].packed = CSRvec[i].packed // SIMD or not
    tb[idx].enabled = true
\end{semiverbatim}
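The decode loop above can also be expressed executably. A minimal Python sketch follows, assuming dict-based tables in place of the hardware CAM; field names (regkey, regidx, elwidth, bank, ...) follow the pseudocode, everything else is an assumption:

```python
# Sketch of the register key-value CSR decode: bank bits are folded into
# regidx at decode time, so later lookups need no separate bank field.

def decode_reg_csrs(csr_vec):
    # all 32 int and 32 FP entries start disabled (pure scalar behaviour)
    int_vec = [dict(enabled=False) for _ in range(32)]
    fp_vec  = [dict(enabled=False) for _ in range(32)]
    for csr in csr_vec:                        # up to 16 CSR entries
        tb = int_vec if csr['type'] == 0 else fp_vec
        e = tb[csr['regkey']]                  # reg as used in the opcode
        e['elwidth']  = csr['elwidth']
        e['regidx']   = csr['regidx'] + (csr['bank'] << 5)  # indirection
        e['isvector'] = csr['isvector']
        e['packed']   = csr['packed']          # SIMD or not
        e['enabled']  = True
    return int_vec, fp_vec
```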
\frametitle{Predication key-value CSR table decoding pseudocode}
\begin{semiverbatim}
struct pred fp\_pred[32], int\_pred[32];
for (i = 0; i < 16; i++) // 16 CSRs?
tb = int\_pred if CSRpred[i].type == 0 else fp\_pred
idx = CSRpred[i].regkey
tb[idx].zero = CSRpred[i].zero // zeroing
tb[idx].inv = CSRpred[i].inv // inverted
tb[idx].predidx = CSRpred[i].predidx // actual reg
    tb[idx].predidx += CSRpred[i].bank << 5 // 0 (1=rsvd)
tb[idx].enabled = true
\end{semiverbatim}
\begin{itemize}
\item All 32 int and 32 FP entries zero'd before setting\\
      (predication disabled)
\item Might be a bit complex to set up in hardware (keep as CAM?)
\end{itemize}
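As with the register table, the predication decode can be modelled in a few lines. A hedged Python sketch, mirroring the pseudocode above (dicts stand in for the table; folding bank into predidx at decode time is an assumption carried over from the register-table decode):

```python
# Sketch of the predication key-value CSR decode loop shown above.

def decode_pred_csrs(csr_pred):
    # all 32 int and 32 FP entries start disabled (predication off)
    int_pred = [dict(enabled=False) for _ in range(32)]
    fp_pred  = [dict(enabled=False) for _ in range(32)]
    for csr in csr_pred:                       # up to 16 CSR entries
        tb = int_pred if csr['type'] == 0 else fp_pred
        e = tb[csr['regkey']]
        e['zero']    = csr['zero']             # zeroing vs merging
        e['inv']     = csr['inv']              # inverted predicate
        e['predidx'] = csr['predidx'] + (csr['bank'] << 5)  # actual reg
        e['enabled'] = True
    return int_pred, fp_pred
```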
\begin{semiverbatim}
def get\_pred\_val(bool is\_fp\_op, int reg):
    tb = fp\_pred if is\_fp\_op else int\_pred
    if (!tb[reg].enabled): return ~0x0 // all ops enabled
    predidx = tb[reg].predidx // redirection HERE (bank folded at decode)
predicate = intreg[predidx] // actual predicate HERE
if (tb[reg].inv):
predicate = ~predicate // invert ALL bits
\end{semiverbatim}
\begin{semiverbatim}
function op\_add(rd, rs1, rs2) # add not VADD!
int i, id=0, irs1=0, irs2=0;
  predval = get\_pred\_val(FALSE, rd);
  rd = int\_vec[rd ].isvector ? int\_vec[rd ].regidx : rd;
  rs1 = int\_vec[rs1].isvector ? int\_vec[rs1].regidx : rs1;
  rs2 = int\_vec[rs2].isvector ? int\_vec[rs2].regidx : rs2;
  for (i = 0; i < VL; i++)
    if (predval \& 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int\_vec[rd ].isvector) \{ id += 1; \}
    if (int\_vec[rs1].isvector) \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector) \{ irs2 += 1; \}
\end{semiverbatim}
}
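The two fragments above (get\_pred\_val plus the op\_add element loop) combine into one runnable sketch. This is a simplified model under stated assumptions: integer regfile only, no zeroing, no element widths, and table layouts as in the earlier decode pseudocode:

```python
# Runnable model of predicated scalar-add-becomes-vector-add.

def get_pred_val(int_pred, intreg, reg):
    e = int_pred[reg]
    if not e['enabled']:
        return ~0                   # no predication: all elements enabled
    pred = intreg[e['predidx']]     # redirection: read actual predicate reg
    if e['inv']:
        pred = ~pred                # invert ALL bits
    return pred

def op_add(intreg, int_vec, int_pred, rd, rs1, rs2, vl):
    # predicate is keyed by the register number *as used in the opcode*,
    # so it must be read before the vector-table redirection of rd
    predval = get_pred_val(int_pred, intreg, rd)
    vd, v1, v2 = int_vec[rd], int_vec[rs1], int_vec[rs2]
    rd  = vd['regidx'] if vd['isvector'] else rd
    rs1 = v1['regidx'] if v1['isvector'] else rs1
    rs2 = v2['regidx'] if v2['isvector'] else rs2
    id_ = irs1 = irs2 = 0
    for i in range(vl):
        if predval & (1 << i):      # predication uses intregs
            intreg[rd + id_] = intreg[rs1 + irs1] + intreg[rs2 + irs2]
        if vd['isvector']: id_  += 1   # scalar operands do not advance
        if v1['isvector']: irs1 += 1
        if v2['isvector']: irs2 += 1
```

Note the s/v mixing falls out for free: a scalar rs2 simply never advances its element index, so it is re-read every iteration.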
\frame{\frametitle{Why are overlaps allowed in Regfiles?}
\begin{itemize}
 \item Same target register(s) can have multiple "interpretations"
\item CSRs are costly to write to (do it once)
\item Set "real" register (scalar) without needing to set/unset CSRs.
\item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
\end{itemize}
Note:
\begin{itemize}
 \item xBitManip reduces O($N^{6}$) SIMD down to O($N^{3}$) on its own.
\item Hi-Performance: Macro-op fusion (more pipeline stages?)
\end{itemize}
}
\item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
\item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
\item vector-to-vector (w/ no pred): Vector Copy
 \item vector-to-vector (w/ src pred): Vector Gather (inc. VSLIDE)
 \item vector-to-vector (w/ dest pred): Vector Scatter (inc. VSLIDE)
\item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter
\end{itemize}
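One way to read the table above: a single twin-predicated move covers every row. A speculative Python sketch follows; the skip-and-pair strategy is an assumption drawn from the gather/scatter descriptions above, not a normative algorithm:

```python
# Speculative sketch of "twin predication": one MV-style loop whose
# source and destination predicates together yield gather / scatter.

def twin_pred_mv(reg, rd, rs, vl, src_pred, dst_pred):
    i = j = 0                 # independent src / dest element indices
    while i < vl and j < vl:
        # skip source elements whose src-predicate bit is clear (gather)
        while i < vl and not (src_pred >> i) & 1:
            i += 1
        # skip dest slots whose dest-predicate bit is clear (scatter)
        while j < vl and not (dst_pred >> j) & 1:
            j += 1
        if i < vl and j < vl:
            reg[rd + j] = reg[rs + i]
            i += 1
            j += 1
```

With a 1-bit dest predicate this degenerates to VINSERT, and with a 1-bit src predicate to VEXTRACT, matching the first two rows of the table.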
\vspace{4pt}
CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
loop:
    setvl  t0, a0, 4   # vl = t0 = min(min(63, 4), a0)
ld a3, a1 # load 4 registers a3-6 from x
slli t1, t0, 3 # t1 = vl * 8 (in bytes)
ld a7, a2 # load 4 registers a7-10 from y
\end{frame}
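The setvl semantics assumed in the loop above (vl capped by both the immediate and the remaining count, with 63 as an assumed hardware MVL) can be modelled in a few lines of Python:

```python
# Model of the setvl line above: vl = min(min(MVL, imm), n).
MVL = 63   # assumption: this implementation's maximum vector length

def setvl(n, imm):
    return min(min(MVL, imm), n)

# strided-loop trace: n counts down by vl until the tail is consumed
n, steps = 10, []
while n > 0:
    vl = setvl(n, 4)
    steps.append(vl)
    n -= vl
# steps is now [4, 4, 2]: two full batches of 4, then a tail of 2
```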
\frame{\frametitle{Under consideration (some answers documented)}
\begin{itemize}
\item Should future extra bank be included now?
\item For 8/16-bit ops, is it worthwhile adding a "start offset"? \\
(a bit like misaligned addressing... for registers)\\
or just use predication to skip start?
 \item see http://libre-riscv.org/simple\_v\_extension/\#issues
\end{itemize}
}