# REMAP
* matrix multiply
* add svindex
* svindex in simulator
* offset svshape option
* parallel reduction
* DCT/FFT "strides"
* see [[sv/remap/appendix]] for examples and usage
* see [[sv/propagation]] for a future way to apply REMAP
* [[remap/discussion]]
REMAP is an advanced form of Vector "Structure Packing" that provides
hardware-level support for commonly-used *nested* loop patterns that would
otherwise require full inline loop unrolling. For more general reordering
an Indexed REMAP mode is available (a RISC-paradigm
abstracted analog to `xxperm`).
REMAP allows the usual sequential vector loop `0..VL-1` to be "reshaped"
(re-mapped) from a linear form to a 2D or 3D transposed form, or "offset"
to permit arbitrary access to elements (when elwidth overrides are
used), independently on each Vector src or dest register. Aside from
Indexed REMAP this is entirely Hardware-accelerated reordering and
consequently not costly in terms of register access. It will however
place a burden on Multi-Issue systems but no more than if the equivalent
Scalar instructions were explicitly loop-unrolled without SVP64, and
some advanced implementations may even find the Deterministic nature of
the Scheduling to be easier on resources.
The initial primary motivation of REMAP was Matrix Multiplication
and the in-place reordering of sequential data: in-place DCT and FFT were
easily justified given their exceptionally high usage in Computer Science.
Four SPRs are provided which may be applied to any GPR, FPR or CR Field so
that for example a single FMAC may be used in a single hardware-controlled
100% Deterministic loop to perform 5x3 times 3x4 Matrix multiplication,
generating 60 FMACs *without needing explicit assembler unrolling*.
Additional uses include regular "Structure Packing" such as RGB pixel
data extraction and reforming (although less costly vec2/3/4 reshaping
is achievable with `PACK/UNPACK`).
Even though designed as an independent RISC-paradigm abstraction system,
it was realised that Matrix REMAP could be applied to min/max instructions to
achieve Floyd-Warshall Graph computations, or to AND/OR Ternary
bitmanipulation to compute Warshall Transitive Closure, or
to perform Cryptographic Matrix operations with Galois Field
variants of Multiply-Accumulate, with many more uses expected to be
discovered. All of this *without
adding actual explicit Vector opcodes for any of the same*.
Thus it should be very clear:
REMAP, like all of SV, is abstracted out, meaning that unlike traditional
Vector ISAs which would typically only have a limited set of instructions
that can be structure-packed (LD/ST and Move operations
being the most common), REMAP may be applied to
literally any instruction: CRs, Arithmetic, Logical, LD/ST, even
Vectorised Branch-Conditional.
When SUBVL is greater than 1 a given group of Subvector
elements is kept together: effectively the group becomes the
element, and with REMAP applying to elements
(not sub-elements) each group is REMAPed together.
Swizzle *can* however be applied to the same
instruction as REMAP, providing re-sequencing of
Subvector elements which REMAP cannot. Also as explained in [[sv/mv.swizzle]], [[sv/mv.vec]] and the [[svp64/appendix]], Pack and Unpack Mode bits
can extend down into Sub-vector elements to influence vec2/vec3/vec4
sequential reordering, but even here, REMAP reordering is not *individually*
extended down to the actual sub-vector elements themselves.
This keeps the relevant Predicate Mask bit applicable to the Subvector
group, just as it does when REMAP is not active.
In its general form, REMAP is quite expensive to set up, and on some
implementations may introduce latency, so should realistically be used
only where it is worthwhile. However, given that up to 127 operations
can be Deterministically issued from a single instruction, it should be
clear that REMAP ought not to be dismissed
for *possible* latency alone. Commonly-used patterns such as Matrix
Multiply, DCT and FFT have helper instruction options which make REMAP
easier to use.
*Future specification note: future versions of the REMAP Management instructions
will extend to EXT1xx Prefixed variants. This will overcome some of the limitations
present in the 32-bit variants of the REMAP Management instructions that at
present require direct writing to SVSHAPE0-3 SPRs. Additional
REMAP Modes may also be introduced at that time.*
There are four types of REMAP:
* **Matrix**, also known as 2D and 3D reshaping, can perform in-place
Matrix transpose and rotate. The Shapes are set up for an "Outer Product"
Matrix Multiply.
* **FFT/DCT**, with full triple-loop in-place support: limited to
Power-2 RADIX
* **Indexing**, for any general-purpose reordering, also includes
limited 2D reshaping as well as Element "offsetting".
* **Parallel Reduction**, for scheduling a sequence of operations
in a Deterministic fashion, in a way that may be parallelised,
to reduce a Vector down to a single value.
Best implemented on top of a Multi-Issue Out-of-Order Micro-architecture,
REMAP Schedules are 100% Deterministic **including Indexing** and are
designed to be incorporated in between the Decode and Issue phases,
directly into Register Hazard Management.
As long as the SVSHAPE SPRs
are not written to directly, Hardware may treat REMAP as 100%
Deterministic: all REMAP Management instructions take static
operands (no dynamic register operands)
with the exception of Indexed Mode, and even then
Architectural State is permitted to assume that the Indices
are cacheable from the point at which the `svindex` instruction
is executed.
Parallel Reduction is unusual in that it requires a full vector array
of results (not a scalar) and uses the rest of the result Vector for
the purposes of storing intermediary calculations. As these intermediary
results are Deterministically computed they may be useful.
Additionally, because the intermediate results are always written out
it is possible to service Precise Interrupts without affecting latency
(a common limitation of Vector ISAs implementing explicit
Parallel Reduction instructions, because their Architectural State cannot
hold the partial results).
## Basic principle
The following illustrates why REMAP was added.
* normal vector element read/write of operands would be sequential
(0 1 2 3 ....)
* this is not appropriate for (e.g.) Matrix multiply which requires
accessing elements in alternative sequences (0 3 6 1 4 7 ...)
* normal Vector ISAs use either Indexed-MV or Indexed-LD/ST to "cope"
with this. Both are expensive (copy large vectors, spill through memory)
and very few Packed SIMD ISAs cope with non-Power-2 sizes
(Duplicate-data inline loop-unrolling is the costly solution)
* REMAP **redefines** the order of access according to set
(Deterministic) "Schedules".
* Matrix Schedules are not at all restricted to power-of-two boundaries,
making specialised 3x4 transpose
instructions of other Vector ISAs unnecessary.
* DCT and FFT REMAP are RADIX-2 limited but this is the case in existing Packed/Predicated
SIMD ISAs anyway (and Bluestein Convolution is typically deployed to
solve that).
Only the most commonly-used algorithms in computer science have REMAP
support, due to the high cost in both the ISA and in hardware. For
arbitrary remapping the `Indexed` REMAP may be used.
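As an illustration of the principle, the transposed access order mentioned above can be generated in a few lines of Python (a sketch only; the helper name is illustrative, not from the reference code):

```python
# Minimal sketch: generate the element-access order for a 2D transpose
# REMAP of a matrix stored row-major.  A linear Vector loop would access
# 0 1 2 3 ...; the REMAPed loop walks the columns instead.
def transpose_order(rows, cols):
    """Yield element indices column-by-column instead of row-by-row."""
    for c in range(cols):
        for r in range(rows):
            yield r * cols + c

# For a 3x3 matrix the REMAPed order is 0 3 6 1 4 7 2 5 8
print(list(transpose_order(3, 3)))
```

The same hardware loop `0..VL-1` is unchanged; only the element index actually presented to the register file is transformed.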
## Example Usage
* `svshape` to set the type of reordering to be applied to an
otherwise usual `0..VL-1` hardware for-loop
* `svremap` to set which registers a given reordering is to apply to
(RA, RT etc)
* `sv.{instruction}` where any Vectorised register marked by `svremap`
will have its ordering REMAPPED according to the schedule set
by `svshape`.
The following illustrative example multiplies a 5x3 matrix by a 3x4
matrix to create
a 5x4 result:
```
svshape 5,4,3,0,0 # Outer Product 5x3 by 3x4
svremap 15,1,2,3,0,0,0,0 # link Schedule to registers
sv.fmadds *0,*32,*64,*0 # 60 FMACs get executed here
```
* svshape sets up the four SVSHAPE SPRs for a Matrix Schedule
* svremap activates four out of five registers RA RB RC RT RS (15)
* svremap requests:
- RA to use SVSHAPE1
- RB to use SVSHAPE2
- RC to use SVSHAPE3
- RT to use SVSHAPE0
- RS Remapping to not be activated
* sv.fmadds has vectors RT=0, RA=32, RB=64, RC=0
* With REMAP being active each register's element index is
*independently* transformed using the specified SHAPEs.
Thus the Vector Loop is arranged such that the use of
the multiply-and-accumulate instruction executes precisely the required
Schedule to perform an in-place in-registers Outer Product
Matrix Multiply with no
need to perform additional Transpose or register copy instructions.
The example above may be executed as a unit test and demo,
[here](https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/decoder/isa/test_caller_svp64_matrix.py;h=c15479db9a36055166b6b023c7495f9ca3637333;hb=a17a252e474d5d5bf34026c25a19682e3f2015c3#l94)
*Hardware Architectural note: with the Scheduling applying as a Phase between
Decode and Issue in a Deterministic fashion the Register Hazards may be
easily computed and a standard Out-of-Order Micro-Architecture exploited to good
effect. Even an In-Order system may observe that for large Outer Product
Schedules there will be no stalls, but if the Matrices are particularly
small size an In-Order system would have to stall, just as it would if
the operations were loop-unrolled without Simple-V. Thus: regardless
of the Micro-Architecture the Hardware Engineer should first consider
how best to process the exact same equivalent loop-unrolled instruction
stream.*
## REMAP types
This section summarises the motivation for each REMAP Schedule
and briefly goes over their characteristics and limitations.
Further details on the Deterministic Precise-Interruptible algorithms
used in these Schedules is found in the [[sv/remap/appendix]].
### Matrix (1D/2D/3D shaping)
Matrix Multiplication is a huge part of High-Performance Compute
and of 3D graphics.
In many PackedSIMD as well as Scalable Vector ISAs, non-power-of-two
Matrix sizes are a serious challenge. PackedSIMD ISAs, in order to
cope with for example 3x4 Matrices, recommend rolling data-repetition and loop-unrolling.
Aside from the cost of the load on the L1 I-Cache, the trick only
works if one of the dimensions X or Y is a power of two. Prime Numbers
(5x7, 3x5) become deeply problematic to unroll.
Even traditional Scalable Vector ISAs have issues with Matrices, often
having to perform data Transpose by pushing out through Memory and back
(costly),
or computing Transposition Indices (costly) then copying to another
Vector (costly).
Matrix REMAP was thus designed to solve these issues by providing Hardware
Assisted
"Schedules" that can view what would otherwise be limited to a strictly
linear Vector as instead being 2D (even 3D) *in-place* reordered.
With both Transposition and non-power-two being supported the issues
faced by other ISAs are mitigated.
Limitations of Matrix REMAP are that the Vector Length (VL) is currently
restricted to 127: up to 127 FMAs (or other operation)
may be performed in total.
Also, given that at present it is in-registers only, some care has to be
taken over regfile resource utilisation. However it is perfectly possible
to utilise Matrix REMAP to perform the three inner-most "kernel" loops of
the usual 6-level "Tiled" large Matrix Multiply, without the usual
difficulties associated with SIMD.
Also the `svshape` instruction only provides access to part of the
Matrix REMAP capability. Rotation and mirroring need to be done by
programming the SVSHAPE SPRs directly, which can take a lot more
instructions. Future versions of SVP64 will include EXT1xx prefixed
variants (`psvshape`) which provide more comprehensive capacity and
mitigate the need to write direct to the SVSHAPE SPRs.
### FFT/DCT Triple Loop
DCT and FFT are among the most heavily-used algorithms in
Computer Science: Radar, Audio, Video, R.F. Baseband and dozens more. At least
two DSPs, TMS320 and Hexagon, have VLIW instructions specially tailored
to FFT.
An in-depth analysis showed that it is possible to do in-place in-register
DCT and FFT as long as twin-result "butterfly" instructions are provided.
These can be found in the [[openpower/isa/svfparith]] page if performing
IEEE754 FP transforms. *(For fixed-point transforms, equivalent 3-in 2-out
integer operations would be required)*. These "butterfly" instructions
avoid the need for a temporary register because the two array positions
being overwritten will be "in-flight" in any In-Order or Out-of-Order
micro-architecture.
DCT and FFT Schedules are currently limited to RADIX2 sizes and do not
accept predicate masks. Given that it is common to perform recursive
convolutions combining smaller Power-2 DCT/FFT to create larger DCT/FFTs
in practice the RADIX2 limit is not a problem. A Bluestein convolution
to compute arbitrary length is demonstrated by
[Project Nayuki](https://www.nayuki.io/res/free-small-fft-in-multiple-languages/fft.py)
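The twin-result "butterfly" principle can be sketched as an iterative Cooley-Tukey triple loop in Python (illustrative only: this assumes the input is already in bit-reversed order, and models the principle rather than the exact semantics of the twin butterfly instructions):

```python
import cmath

# Iterative radix-2 Cooley-Tukey FFT showing the twin-result butterfly:
# each step overwrites BOTH a[j] and a[j+half] in place, so no temporary
# Vector register is needed - the two old values are "in-flight".
def fft_inplace_bitrev_input(a):
    n = len(a)                          # must be a power of two (RADIX2)
    size = 2
    while size <= n:                    # outer loop over butterfly sizes
        half = size // 2
        for start in range(0, n, size): # middle loop over sub-blocks
            for j in range(start, start + half):   # innermost "j" loop
                w = cmath.exp(-2j * cmath.pi * (j - start) / size)
                t = w * a[j + half]
                a[j], a[j + half] = a[j] + t, a[j] - t  # twin result
        size *= 2
    return a
```

The three nested loops here are exactly what the FFT REMAP Schedules walk, with the twin in-place writes being the reason twin-result instructions are required.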
### Indexed
The purpose of Indexing is to provide a generalised version of
Vector ISA "Permute" instructions, such as VSX `vperm`. The
Indexing is abstracted out and may be applied to much more
than an element move/copy, and is not limited for example
to the number of bytes that can fit into a VSX register.
Indexing may be applied to LD/ST (even on Indexed LD/ST
instructions such as `sv.lbzx`), arithmetic operations,
extsw: there is no artificial limit.
The only major caveat is that the registers to be used as
Indices must not be modified by any instruction after Indexed Mode
is established, and neither must MAXVL be altered. Additionally,
no register used as an Index may exceed MAXVL-1.
Failure to observe
these conditions results in `UNDEFINED` behaviour.
These conditions allow a Read-After-Write (RAW) Hazard to be created on
the entire range of Indices to be subsequently used, but a corresponding
Write-After-Read Hazard by any instruction that modifies the Indices
**does not have to be created**. Given the large number of registers
involved in Indexing this is a huge resource saving and reduction
in micro-architectural complexity. MAXVL is likewise
included in the RAW Hazards because it is involved in calculating
how many registers are to be considered Indices.
With these Hazard Mitigations in place, high-performance implementations
may read-cache the Indices at the point where a given `svindex` instruction
is called (or SVSHAPE SPRs - and MAXVL - directly altered) by issuing
background GPR register file reads whilst other instructions are being
issued and executed.
The original motivation for Indexed REMAP was to mitigate the need to add
an expensive `mv.x` to the Scalar ISA, which was likely to be rejected as
a stand-alone instruction
(`GPR(RT) <- GPR(GPR(RA))`). Usually a Vector ISA would add a non-conflicting
variant (as in VSX `vperm`) but it is common to need to permute by source,
with the risk of conflicts that then have to be resolved, for example in
AVX-512 with `vpconflictd`.
Indexed REMAP on the other hand **does not prevent conflicts** (overlapping
destinations), which on a superficial analysis may be perceived to be a
problem, until it is recalled that, firstly, Simple-V is designed specifically
to require Program Order to be respected, and that Matrix, DCT and FFT
all *already* critically depend on overlapping Reads/Writes: Matrix
uses overlapping registers as accumulators. Thus the Register Hazard
Management needed by Indexed REMAP *has* to be in place anyway.
The cost compared to Matrix and other REMAPs (and Pack/Unpack) is
clearly that of the additional reading of the GPRs to be used as Indices,
plus the setup cost associated with creating those same Indices.
If any Deterministic REMAP can cover the required task, clearly it
is advisable to use it instead.
*Programmer's note: some algorithms may require skipping of Indices exceeding
VL-1, not MAXVL-1. This may be achieved programmatically by performing
an `sv.cmp *BF,*RA,RB` where RA is the same GPRs used in the Indexed REMAP,
and RB contains the value of VL returned from `setvl`. The resultant
CR Fields may then be used as Predicate Masks to exclude those operations
with an Index exceeding VL-1.*
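The conflict behaviour described above can be modelled in a few lines of Python (names illustrative, not actual SVP64 state): overlapping destination Indices are resolved purely by Program Order, with the later element write winning.

```python
# Model of Indexed REMAP writes with overlapping (conflicting) destination
# indices.  Simple-V mandates Program Order, so no conflict-resolution
# instruction is needed: later element writes simply overwrite earlier ones.
def indexed_remap_write(dest, src, indices, offset=0):
    for i, s in enumerate(src):         # the usual 0..VL-1 element loop
        dest[indices[i] + offset] = s   # write to the remapped element

regs = [0] * 8
indexed_remap_write(regs, [10, 20, 30, 40], [3, 0, 3, 1])
# index 3 appears twice: in Program Order the later write (30) wins
```

This is the same in-order write-overlap guarantee that Matrix, DCT and FFT REMAP already rely on for their accumulators.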
### Parallel Reduction
Vector Reduce Mode issues a deterministic tree-reduction schedule to the underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction. Parallel Reduction is not limited
to Power-of-two but is limited as usual by the total number of
element operations (127) as well as available register file size.
In Horizontal-First Mode, Vector-result reduction **requires**
the destination to be a Vector, which will be used to store
intermediary results, in order to achieve a correct final
result.
Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.
When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result *and its CR Field*
are stored "as usual" exactly as all other SVP64 Rc=1 operations.
Note that the Schedule only makes sense on top of certain instructions:
X-Form with a Register Profile of `RT,RA,RB` is fine because two sources
and the destination are all the same type. Like Scalar
Reduction, nothing is prohibited:
the results of execution on an unsuitable instruction may simply
not make sense. With care, even 3-input instructions (madd, fmadd, ternlogi)
may be used, and whilst it is down to the Programmer to walk through the
process the Programmer can be confident that the Parallel-Reduction is
guaranteed 100% Deterministic.
Critical to note regarding use of Parallel-Reduction REMAP is that,
exactly as with all REMAP Modes, the `svshape` instruction *requests*
a certain Vector Length (number of elements to reduce) and then
sets VL and MAXVL at the number of **operations** needed to be
carried out. Thus, equally as importantly, like Matrix REMAP
the total number of operations
is restricted to 127. Any Parallel-Reduction requiring more operations
will need to be done manually in batches (hierarchical
recursive Reduction).
Also important to note is that the Deterministic Schedule is arranged
so that some implementations *may* parallelise it (as long as doing so
respects Program Order and Register Hazards). Performance (speed)
of any given
implementation is neither strictly defined nor guaranteed. As with
the Vulkan(tm) Specification, strict compliance is paramount whilst
performance is at the discretion of Implementors.
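A minimal Python sketch of such a Deterministic tree-reduction Schedule (illustrative only: the definitive algorithm is in the appendix, and the helper names here are hypothetical) shows that every operation triple is statically computable, and that VL ends up counting *operations* rather than elements:

```python
# Illustrative Deterministic tree-reduction Schedule: each step pairs
# elements at a doubling stride, writing intermediaries back into the
# Vector so the final result lands in element 0.
def reduction_schedule(vl):
    """Yield (dest, left, right) element-index triples for VL elements."""
    step = 1
    while step < vl:
        for i in range(0, vl, step * 2):
            if i + step < vl:
                yield (i, i, i + step)
        step *= 2

def parallel_reduce(op, vec):
    vec = list(vec)
    for d, l, r in reduction_schedule(len(vec)):
        vec[d] = op(vec[l], vec[r])     # intermediaries stay in the Vector
    return vec[0]
```

Note that a reduction of 7 elements issues 6 operations, all of whose operand indices are known before any of them executes: triples within one `step` level are independent and *may* be issued in parallel.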
**Parallel-Reduction with Predication**
To avoid breaking the strict RISC-paradigm, keeping the Issue-Schedule
completely separate from the actual element-level (scalar) operations,
Move operations are **not** included in the Schedule. This means that
the Schedule leaves the final (scalar) result in the first-non-masked
element of the Vector used. With the predicate mask being dynamic
(but deterministic) at a superficial glance it seems this result
could be anywhere.
If that result is needed to be moved to a (single) scalar register
then a follow-up `sv.mv/sm=predicate rt, *ra` instruction will be
needed to get it, where the predicate is the exact same predicate used
in the prior Parallel-Reduction instruction.
* If there was only a single
bit in the predicate then the result will not have moved or been altered
from the source vector prior to the Reduction
* If there was more than one bit the result will be in the
first element with a predicate bit set.
In either case the result is in the element with the first bit set in
the predicate mask. Thus, no move/copy *within the Reduction itself* was needed.
Programmer's Note: For *some* hardware implementations
the vector-to-scalar copy may be a slow operation, as may the Predicated
Parallel Reduction itself.
It may be better to perform a pre-copy
of the values, compressing them (VREDUCE-style) into a contiguous block,
which will guarantee that the result goes into the very first element
of the destination vector, in which case clearly no follow-up
predicated vector-to-scalar MV operation is needed. A VREDUCE effect
is achieved by setting just a source predicate mask on Twin-Predicated
operations.
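The VREDUCE-style compression described in the note can be modelled as follows (an illustrative sketch of Twin-Predication with a source-only mask; the function name is hypothetical, not actual SVP64 syntax):

```python
# Model of a Twin-Predicated move with only a *source* predicate mask
# (the effect of sv.mv/sm=mask): active source elements are compressed
# into a contiguous block at the start of the destination.
def twin_pred_compress(src, src_mask):
    dest = [0] * len(src)
    d = 0                           # destination index steps on every copy
    for s, bit in enumerate(src_mask):
        if bit:                     # source index skips masked-out elements
            dest[d] = src[s]
            d += 1
    return dest

# elements 1 and 3 are selected and land in dest[0] and dest[1]
twin_pred_compress([9, 5, 7, 3], [0, 1, 0, 1])
```

After such a pre-copy the subsequent (unpredicated) Parallel Reduction is guaranteed to leave its result in the very first element.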
**Usage conditions**
The simplest usage is to perform an overwrite, specifying all three
register operands the same.
```
svshape parallelreduce, 6
sv.add *8, *8, *8
```
The Reduction Schedule will issue the Parallel Tree Reduction spanning
registers 8 through 13, by adjusting the offsets to RT, RA and RB as
necessary (see "Parallel Reduction algorithm" in a later section).
A non-overwrite is possible as well but just as with the overwrite
version, only those destination elements necessary for storing
intermediary computations will be written to: the remaining elements
will **not** be overwritten and will **not** be zero'd.
```
svshape parallelreduce, 6
sv.add *0, *8, *8
```
However it is critical to note that if the source and destination are
not the same then the trick of using a follow-up vector-scalar MV will
not work.
### Sub-Vector Horizontal Reduction
To achieve Sub-Vector Horizontal Reduction, Pack/Unpack should be enabled,
which will turn the Schedule around such that issuing of the Scalar
Defined Words is done with SUBVL looping as the inner loop not the
outer loop. Rc=1 with Sub-Vectors (SUBVL=2,3,4) is `UNDEFINED` behaviour.
*Programmer's Note: Overwrite Parallel Reduction with Sub-Vectors
will clearly result in data corruption. It may be best to perform
a Pack/Unpack Transposing copy of the data first.*
## Determining Register Hazards
For high-performance (Multi-Issue, Out-of-Order) systems it is critical
to be able to statically determine the extent of Vectors in order to
allocate pre-emptive Hazard protection. The next task is to eliminate
masked-out elements using predicate bits, freeing up the associated
Hazards.
For non-REMAP situations `VL` is sufficient to ascertain early
Hazard coverage, and with SVSTATE being a high priority cached
quantity at the same level as MSR and PC this is not a problem.
The problems come when REMAP is enabled. Indexed REMAP must instead
use `MAXVL` as the earliest (simplest)
batch-level Hazard Reservation indicator (after taking element-width
overriding on the Index source into consideration),
but Matrix, FFT and Parallel Reduction must all use completely different
schemes. The reason is that VL is used to step through the total
number of *operations*, not the number of registers.
The "Saving Grace" is that all of the REMAP Schedules are 100% Deterministic.
Advance-notice Parallel computation and subsequent cacheing
of all of these complex Deterministic REMAP Schedules is
*strongly recommended*, thus allowing clear and precise multi-issue
batched Hazard coverage to be deployed, *even for Indexed Mode*.
This is only possible for Indexed due to the strict guidelines
given to Programmers.
In short, there exist solutions to the problem of Hazard Management,
with varying degrees of refinement possible at correspondingly
increasing levels of complexity in hardware.
A reminder: when Rc=1 each result register (element) has an associated
co-result CR Field (one per result element). Thus, when determining
the Write-Hazards for result registers, the Write-Hazards for the
associated co-result CR Fields must not be forgotten, *including* when
Predication is used.
## REMAP area of SVSTATE SPR
The following bits of the SVSTATE SPR are used for REMAP:
```
|32:33|34:35|36:37|38:39|40:41| 42:46 | 62 |
| -- | -- | -- | -- | -- | ----- | ------ |
|mi0 |mi1 |mi2 |mo0 |mo1 | SVme | RMpst |
```
mi0-2 and mo0-1 each select SVSHAPE0-3 to apply to a given register.
mi0-2 apply to RA, RB, RC respectively, as input registers, and
likewise mo0-1 apply to output registers (RT/FRT, RS/FRS) respectively.
SVme is 5 bits (one for each of mi0-2/mo0-1) and indicates whether the
SVSHAPE is actively applied or not.
* bit 0 of SVme indicates if mi0 is applied to RA / FRA / BA / BFA
* bit 1 of SVme indicates if mi1 is applied to RB / FRB / BB
* bit 2 of SVme indicates if mi2 is applied to RC / FRC / BC
* bit 3 of SVme indicates if mo0 is applied to RT / FRT / BT / BF
* bit 4 of SVme indicates if mo1 is applied to Effective Address / FRS / RS
(LD/ST-with-update has an implicit 2nd write register, RA)
The "persistence" bit if set will result in all Active REMAPs being applied
indefinitely.
-----------
\newpage{}
# svremap instruction
SVRM-Form:
|0 |6 |11 |13 |15 |17 |19 |21 | 22:25 |26:31 |
| -- | -- | -- | -- | -- | -- | -- | -- | ---- | ----- |
| PO | SVme |mi0 | mi1 | mi2 | mo0 | mo1 | pst | rsvd | XO |
* svremap SVme,mi0,mi1,mi2,mo0,mo1,pst
Pseudo-code:
```
# registers RA RB RC RT EA/FRS SVSHAPE0-3 indices
SVSTATE[32:33] <- mi0
SVSTATE[34:35] <- mi1
SVSTATE[36:37] <- mi2
SVSTATE[38:39] <- mo0
SVSTATE[40:41] <- mo1
# enable bit for RA RB RC RT EA/FRS
SVSTATE[42:46] <- SVme
# persistence bit (applies to more than one instruction)
SVSTATE[62] <- pst
```
Special Registers Altered:
```
SVSTATE
```
`svremap` determines the relationship between registers and SVSHAPE SPRs.
The bitmask `SVme` determines which registers have a REMAP applied, and mi0-mo1
determine which shape is applied to an activated register. The `pst` bit, if
cleared, indicates that the REMAP operation shall only apply to the immediately-following
instruction. If set then REMAP remains permanently enabled until such time as it is
explicitly disabled, either by `setvl` setting a new MAXVL, or with another
`svremap` instruction. `svindex` and `svshape2` are also capable of setting or
clearing persistence, as well as partially covering a subset of the capability of
`svremap` to set register-to-SVSHAPE relationships.
Programmer's Note: applying non-persistent `svremap` to an instruction that has
no REMAP enabled or is a Scalar operation will obviously have no effect but
the bits 32 to 46 will at least have been set in SVSTATE. This may prove useful
when using `svindex` or `svshape2`.
Hardware Architectural Note: when persistence is not set it is critically important
to treat the `svremap` and the following SVP64 instruction as an indivisible fused operation.
*No state* is stored in the SVSTATE SPR in order to allow continuation should an
Interrupt occur between the two instructions. Thus, Interrupts must be prohibited
from occurring, or another workaround deployed. When persistence is set this issue
is moot.
It is critical to note that if persistence is clear then `svremap` is the *only* way
to activate REMAP on any given (following) instruction. If persistence is set however then
**all** SVP64 instructions go through REMAP as long as `SVme` is non-zero.
-------------
\newpage{}
# SHAPE Remapping SPRs
There are four "shape" SPRs, SVSHAPE0-3, 32-bits in each,
which have the same format.
When an SVSHAPE is set entirely to zeros, remapping is
disabled: the register's elements are a linear (1D) vector.
|31..30|29..28 |27..24| 23..21 | 20..18 | 17..12 |11..6 |5..0 | Mode |
|---- |------ |------| ------ | ------- | ------- |----- |----- | ----- |
|mode |skip |offset| invxyz | permute | zdimsz |ydimsz|xdimsz|Matrix |
|0b00 |elwidth|offset|sk1/invxy|0b110/0b111|SVGPR|ydimsz|xdimsz|Indexed|
|0b01 |submode|offset| invxyz | submode2| zdimsz |mode |xdimsz|DCT/FFT|
|0b10 |submode|offset| invxyz | rsvd | rsvd |rsvd |xdimsz|Preduce|
|0b11 | | | | | | | |rsvd |
The `mode` field sets different behaviours (straight matrix multiply, FFT, DCT).
* **mode=0b00** sets straight Matrix Mode
* **mode=0b00** with permute=0b110 or 0b111 sets Indexed Mode
* **mode=0b01** sets "FFT/DCT" mode and activates submodes
* **mode=0b10** sets "Parallel Reduction" Schedules.
*Architectural Resource Allocation note: the four SVSHAPE SPRs are best
allocated sequentially and contiguously in order that `sv.mtspr` may
be used*
## Parallel Reduction Mode
Creates the Schedules for Parallel Tree Reduction.
* **submode=0b00** selects the left operand index
* **submode=0b01** selects the right operand index
* When bit 0 of `invxyz` is set, the order of the indices
in the inner for-loop are reversed. This has the side-effect
of placing the final reduced result in the last-predicated element.
It also has the indirect side-effect of swapping the source
registers: Left-operand index numbers will always exceed
Right-operand indices.
When clear, the reduced result will be in the first-predicated
element, and Left-operand indices will always be *less* than
Right-operand ones.
* When bit 1 of `invxyz` is set, the order of the outer loop
step is inverted: stepping begins at the nearest power-of-two
to half of the vector length and reduces by half each time.
When clear the step will begin at 2 and double on each
inner loop.
## FFT/DCT mode
submode2=0 is for FFT. For FFT submode the following schedules may be
selected:
* **submode=0b00** selects the ``j`` offset of the innermost for-loop
of Cooley-Tukey
* **submode=0b10** selects the ``j+halfsize`` offset of the innermost for-loop
of Cooley-Tukey
* **submode=0b11** selects the ``k`` of exptable (which coefficient)
When submode2 is 1 or 2, for DCT inner butterfly submode the following
schedules may be selected. When submode2 is 1, additional bit-reversing
is also performed.
* **submode=0b00** selects the ``j`` offset of the innermost for-loop,
in-place
* **submode=0b01** selects the ``j+halfsize`` offset of the innermost for-loop,
in reverse-order, in-place
* **submode=0b10** selects the ``ci`` count of the innermost for-loop,
useful for calculating the cosine coefficient
* **submode=0b11** selects the ``size`` offset of the outermost for-loop,
useful for the cosine coefficient ``cos(ci + 0.5) * pi / size``
When submode2 is 3 or 4, for DCT outer butterfly submode the following
schedules may be selected. When submode2 is 3, additional bit-reversing
is also performed.
* **submode=0b00** selects the ``j`` offset of the innermost for-loop,
* **submode=0b01** selects the ``j+1`` offset of the innermost for-loop,
`zdimsz` is used as an in-place "Stride", particularly useful for
column-based in-place DCT/FFT.
## Matrix Mode
In Matrix Mode, skip allows dimensions to be skipped from being included
in the resultant output index. This allows sequences to be repeated:
```0 0 0 1 1 1 2 2 2 ...``` or in the case of skip=0b11 this results in
modulo ```0 1 2 0 1 2 ...```
* **skip=0b00** indicates no dimensions to be skipped
* **skip=0b01** sets "skip 1st dimension"
* **skip=0b10** sets "skip 2nd dimension"
* **skip=0b11** sets "skip 3rd dimension"
invxyz will invert the start index of each of x, y or z. If invxyz[0] is
zero then x-dimensional counting begins from 0 and increments, otherwise
it begins from xdimsz-1 and iterates down to zero. Likewise for y and z.
offset will have the effect of offsetting the result by ```offset``` elements:
```
for i in 0..VL-1:
GPR(RT + remap(i) + SVSHAPE.offset) = ....
```
This appears redundant, because the register RT could simply be changed
by a compiler, until element-width overrides are introduced. Also
bear in mind that, unlike a static compiler, SVSHAPE.offset may
be set dynamically at runtime.
xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. Any dimension
not intended to be used must have its value set to 0 (dimensionality
of 1). A value of xdimsz=2 would indicate that in the first dimension
there are 3 elements in the array. For example, to create a 2D array
X,Y of dimensionality X=3 and Y=2, set xdimsz=2, ydimsz=1 and zdimsz=0.
The format of the array is therefore as follows:
```
array[xdimsz+1][ydimsz+1][zdimsz+1]
```
However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 indicating "Indexed" Mode). The table
below shows how the permutation dimensionality order works:
| permute | order | array format |
| ------- | ----- | ------------------------ |
| 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
| 110 | 0,1 | Indexed (xdim+1)(ydim+1) |
| 111 | 1,0 | Indexed (ydim+1)(xdim+1) |
In other words, the "permute" option changes the order in which the
nested for-loops over the array are performed. See the executable
Python reference code for further details.
*Note: permute=0b110 and permute=0b111 enable Indexed REMAP Mode,
described below*
With all these options it is possible to support in-place transpose,
in-place rotate, Matrix Multiply and Convolutions, without being
limited to Power-of-Two dimension sizes.
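The interaction of permute, invxyz and offset can be sketched in Python for the 2D case (an illustrative model only; the executable reference code is authoritative and covers the full 3D form):

```python
# 2D model of Matrix Mode index generation. Axis 0 is x (memory stride 1),
# axis 1 is y (stride xdim). `order` lists the loop nest innermost-first:
# (0, 1) walks x fastest (sequential), (1, 0) walks y fastest (transpose).
def remap_2d(xdim, ydim, order=(0, 1), invert=(0, 0), offset=0):
    dims, strides = (xdim, ydim), (1, xdim)
    inner, outer = order

    def axis(d):
        # per-dimension inversion, as with invxyz
        seq = range(dims[d])
        return reversed(seq) if invert[d] else seq

    out = []
    for o in axis(outer):
        for i in axis(inner):
            coord = {outer: o, inner: i}     # axis number -> coordinate
            out.append(coord[0] * strides[0] + coord[1] * strides[1] + offset)
    return out

print(remap_2d(3, 2))                  # [0, 1, 2, 3, 4, 5]  identity
print(remap_2d(3, 2, order=(1, 0)))    # [0, 3, 1, 4, 2, 5]  transpose
print(remap_2d(3, 2, invert=(1, 0)))   # [2, 1, 0, 5, 4, 3]  x mirrored
```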
## Indexed Mode
Indexed Mode activates reading of the element indices from the GPR
and includes optional limited 2D reordering.
In its simplest form (without elwidth overrides or other modes):
```
def index_remap(i):
    return GPR((SVSHAPE.SVGPR<<1)+i) + SVSHAPE.offset

for i in 0..VL-1:
    element_result = ....
    GPR(RT + index_remap(i)) = element_result
```
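The same simple form as runnable Python, modelling the register file as a plain list (`gpr`, `rt` and `svgpr` are illustrative names, and the `SVGPR<<1` register-numbering detail is omitted for clarity):

```python
# Each element result is stored at RT + index + offset, where the index
# for element i is read from the GPR at svgpr + i.
def indexed_store(gpr, rt, svgpr, offset, results):
    for i, result in enumerate(results):
        gpr[rt + gpr[svgpr + i] + offset] = result

gpr = [0] * 32
gpr[16:20] = [3, 2, 1, 0]                  # indices held in GPRs 16-19
indexed_store(gpr, 8, 16, 0, [10, 20, 30, 40])
print(gpr[8:12])                           # [40, 30, 20, 10]: order reversed
```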
With element-width overrides included, and using the pseudocode
from the SVP64 [[sv/svp64/appendix#elwidth]] elwidth section
this becomes:
```
def index_remap(i):
    svreg = SVSHAPE.SVGPR << 1
    srcwid = elwid_to_bitwidth(SVSHAPE.elwid)
    offs = SVSHAPE.offset
    return get_polymorphed_reg(svreg, srcwid, i) + offs

for i in 0..VL-1:
    element_result = ....
    rt_idx = index_remap(i)
    set_polymorphed_reg(RT, destwid, rt_idx, element_result)
```
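A hedged sketch of what `get_polymorphed_reg` does in this context (a simplification: the real SVP64 definition handles the full elwidth encoding and byte ordering):

```python
# View the 64-bit register file as a flat sequence of `width`-bit elements
# starting at register `reg`, and fetch element i.
def get_polymorphed_reg(regfile, reg, width, i):
    elts_per_reg = 64 // width
    r = reg + i // elts_per_reg            # which register element i lands in
    shift = (i % elts_per_reg) * width     # position within that register
    return (regfile[r] >> shift) & ((1 << width) - 1)

regfile = [0] * 32
regfile[4] = 0x0003_0002_0001_0000    # four 16-bit elements packed in r4
print([get_polymorphed_reg(regfile, 4, 16, i) for i in range(4)])  # [0, 1, 2, 3]
```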
Matrix-style reordering still applies to the indices, except limited
to up to 2 Dimensions (X,Y). Ordering is therefore limited to (X,Y) or
(Y,X) for in-place Transposition.
Only one dimension may optionally be skipped. Inversion of either
X or Y or both is possible (2D mirroring). Pseudocode for Indexed Mode (including elwidth
overrides) may be written in terms of Matrix Mode, specifically
purposed to ensure that the 3rd dimension (Z) has no effect:
```
def index_remap(ISHAPE, i):
    MSHAPE.skip   = 0b0 || ISHAPE.sk1
    MSHAPE.invxyz = 0b0 || ISHAPE.invxy
    MSHAPE.xdimsz = ISHAPE.xdimsz
    MSHAPE.ydimsz = ISHAPE.ydimsz
    MSHAPE.zdimsz = 0 # disabled
    if ISHAPE.permute = 0b110 # 0,1
        MSHAPE.permute = 0b000 # 0,1,2
    if ISHAPE.permute = 0b111 # 1,0
        MSHAPE.permute = 0b010 # 1,0,2
    el_idx = remap_matrix(MSHAPE, i)
    svreg = ISHAPE.SVGPR << 1
    srcwid = elwid_to_bitwidth(ISHAPE.elwid)
    offs = ISHAPE.offset
    return get_polymorphed_reg(svreg, srcwid, el_idx) + offs
```
The most important observation above is that the Matrix-style
remapping occurs first and the Index lookup second. Thus it
becomes possible to perform in-place Transpose of Indices which
may have been costly to set up or costly to duplicate
(wasting register file space).
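That observation in miniature, as a standalone Python sketch (hypothetical helper, illustrative names): the indices are *read* through a transposed schedule rather than being rewritten.

```python
# `indices` is a ydim x xdim row-major table of element indices; reading it
# column-by-column transposes the index schedule without touching the GPRs.
def read_indices_transposed(indices, xdim, ydim, offset=0):
    out = []
    for x in range(xdim):         # x outer, y inner: the (Y,X) ordering
        for y in range(ydim):
            out.append(indices[y * xdim + x] + offset)
    return out

idx = [7, 6, 5, 4, 3, 2, 1, 0]    # 2 rows of 4 indices
print(read_indices_transposed(idx, 4, 2))   # [7, 3, 6, 2, 5, 1, 4, 0]
```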
-------------
\newpage{}
# svshape instruction
SVM-Form
* svshape SVxd,SVyd,SVzd,SVRM,vf
| 0:5|6:10 |11:15 |16:20 | 21:24 | 25 | 26:31 | name |
| -- | -- | --- | ----- | ------ | -- | ------| -------- |
|PO | SVxd | SVyd | SVzd | SVRM | vf | XO | svshape |
```
# for convenience, VL to be calculated and stored in SVSTATE
vlen <- [0] * 7
mscale[0:5] <- 0b000001 # for scaling MAXVL
itercount[0:6] <- [0] * 7
SVSTATE[0:31] <- [0] * 32
# only overwrite REMAP if "persistence" is zero
if (SVSTATE[62] = 0b0) then
    SVSTATE[32:33] <- 0b00
    SVSTATE[34:35] <- 0b00
    SVSTATE[36:37] <- 0b00
    SVSTATE[38:39] <- 0b00
    SVSTATE[40:41] <- 0b00
    SVSTATE[42:46] <- 0b00000
    SVSTATE[62] <- 0b0
    SVSTATE[63] <- 0b0
# clear out all SVSHAPEs
SVSHAPE0[0:31] <- [0] * 32
SVSHAPE1[0:31] <- [0] * 32
SVSHAPE2[0:31] <- [0] * 32
SVSHAPE3[0:31] <- [0] * 32
# set schedule up for multiply
if (SVrm = 0b0000) then
    # VL in Matrix Multiply is xd*yd*zd
    xd <- (0b00 || SVxd) + 1
    yd <- (0b00 || SVyd) + 1
    zd <- (0b00 || SVzd) + 1
    n <- xd * yd * zd
    vlen[0:6] <- n[14:20]
    # set up template in SVSHAPE0, then copy to 1-3
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[6:11] <- (0b0 || SVyd)  # ydim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim
    SVSHAPE0[28:29] <- 0b11          # skip z
    # copy
    SVSHAPE1[0:31] <- SVSHAPE0[0:31]
    SVSHAPE2[0:31] <- SVSHAPE0[0:31]
    SVSHAPE3[0:31] <- SVSHAPE0[0:31]
    # set up FRA
    SVSHAPE1[18:20] <- 0b001         # permute x,z,y
    SVSHAPE1[28:29] <- 0b01          # skip z
    # FRC
    SVSHAPE2[18:20] <- 0b001         # permute x,z,y
    SVSHAPE2[28:29] <- 0b11          # skip y
# set schedule up for FFT butterfly
if (SVrm = 0b0001) then
    # calculate O(N log2 N)
    n <- [0] * 3
    do while n < 5
        if SVxd[4-n] = 0 then
            leave
        n <- n + 1
    n <- ((0b0 || SVxd) + 1) * n
    vlen[0:6] <- n[1:7]
    # set up template in SVSHAPE0, then copy to 1-3
    # for FRA and FRT
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim - "striding" (2D FFT)
    mscale <- (0b0 || SVzd) + 1
    SVSHAPE0[30:31] <- 0b01          # Butterfly mode
    # copy
    SVSHAPE1[0:31] <- SVSHAPE0[0:31]
    SVSHAPE2[0:31] <- SVSHAPE0[0:31]
    # set up FRB and FRS
    SVSHAPE1[28:29] <- 0b01          # j+halfstep schedule
    # FRC (coefficients)
    SVSHAPE2[28:29] <- 0b10          # k schedule
# set schedule up for (i)DCT Inner butterfly
# SVrm Mode 4 (Mode 12 for iDCT) is for on-the-fly (Vertical-First Mode)
if ((SVrm = 0b0100) |
    (SVrm = 0b1100)) then
    # calculate O(N log2 N)
    n <- [0] * 3
    do while n < 5
        if SVxd[4-n] = 0 then
            leave
        n <- n + 1
    n <- ((0b0 || SVxd) + 1) * n
    vlen[0:6] <- n[1:7]
    # set up template in SVSHAPE0, then copy to 1-3
    # set up FRB and FRS
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim - "striding" (2D DCT)
    mscale <- (0b0 || SVzd) + 1
    if (SVrm = 0b1100) then
        SVSHAPE0[30:31] <- 0b11      # iDCT mode
        SVSHAPE0[18:20] <- 0b011     # iDCT Inner Butterfly sub-mode
    else
        SVSHAPE0[30:31] <- 0b01      # DCT mode
        SVSHAPE0[18:20] <- 0b001     # DCT Inner Butterfly sub-mode
        SVSHAPE0[21:23] <- 0b001     # "inverse" on outer loop
    SVSHAPE0[6:11] <- 0b000011       # (i)DCT Inner Butterfly mode 4
    # copy
    SVSHAPE1[0:31] <- SVSHAPE0[0:31]
    SVSHAPE2[0:31] <- SVSHAPE0[0:31]
    if (SVrm != 0b0100) & (SVrm != 0b1100) then
        SVSHAPE3[0:31] <- SVSHAPE0[0:31]
    # for FRA and FRT
    SVSHAPE0[28:29] <- 0b01          # j+halfstep schedule
    # for cos coefficient
    SVSHAPE2[28:29] <- 0b10          # ci (k for mode 4) schedule
    SVSHAPE2[12:17] <- 0b000000      # reset costable "striding" to 1
    if (SVrm != 0b0100) & (SVrm != 0b1100) then
        SVSHAPE3[28:29] <- 0b11      # size schedule
# set schedule up for (i)DCT Outer butterfly
if (SVrm = 0b0011) | (SVrm = 0b1011) then
    # calculate O(N log2 N) number of outer butterfly overlapping adds
    vlen[0:6] <- [0] * 7
    n <- 0b000
    size <- 0b0000001
    itercount[0:6] <- (0b00 || SVxd) + 0b0000001
    itercount[0:6] <- (0b0 || itercount[0:5])
    do while n < 5
        if SVxd[4-n] = 0 then
            leave
        n <- n + 1
        count <- (itercount - 0b0000001) * size
        vlen[0:6] <- vlen + count[7:13]
        size[0:6] <- (size[1:6] || 0b0)
        itercount[0:6] <- (0b0 || itercount[0:5])
    # set up template in SVSHAPE0, then copy to 1-3
    # set up FRB and FRS
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim - "striding" (2D DCT)
    mscale <- (0b0 || SVzd) + 1
    if (SVrm = 0b1011) then
        SVSHAPE0[30:31] <- 0b11      # iDCT mode
        SVSHAPE0[18:20] <- 0b011     # iDCT Outer Butterfly sub-mode
        SVSHAPE0[21:23] <- 0b101     # "inverse" on outer and inner loop
    else
        SVSHAPE0[30:31] <- 0b01      # DCT mode
        SVSHAPE0[18:20] <- 0b100     # DCT Outer Butterfly sub-mode
    SVSHAPE0[6:11] <- 0b000010       # DCT Butterfly mode
    # copy
    SVSHAPE1[0:31] <- SVSHAPE0[0:31] # j+halfstep schedule
    SVSHAPE2[0:31] <- SVSHAPE0[0:31] # costable coefficients
    # for FRA and FRT
    SVSHAPE1[28:29] <- 0b01          # j+halfstep schedule
    # reset costable "striding" to 1
    SVSHAPE2[12:17] <- 0b000000
# set schedule up for DCT COS table generation
if (SVrm = 0b0101) | (SVrm = 0b1101) then
    # calculate O(N log2 N)
    vlen[0:6] <- [0] * 7
    itercount[0:6] <- (0b00 || SVxd) + 0b0000001
    itercount[0:6] <- (0b0 || itercount[0:5])
    n <- [0] * 3
    do while n < 5
        if SVxd[4-n] = 0 then
            leave
        n <- n + 1
        vlen[0:6] <- vlen + itercount
        itercount[0:6] <- (0b0 || itercount[0:5])
    # set up template in SVSHAPE0, then copy to 1-3
    # set up FRB and FRS
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim - "striding" (2D DCT)
    mscale <- (0b0 || SVzd) + 1
    SVSHAPE0[30:31] <- 0b01          # DCT/FFT mode
    SVSHAPE0[6:11] <- 0b000100       # DCT Inner Butterfly COS-gen mode
    if (SVrm = 0b0101) then
        SVSHAPE0[21:23] <- 0b001     # "inverse" on outer loop for DCT
    # copy
    SVSHAPE1[0:31] <- SVSHAPE0[0:31]
    SVSHAPE2[0:31] <- SVSHAPE0[0:31]
    # for cos coefficient
    SVSHAPE1[28:29] <- 0b10          # ci schedule
    SVSHAPE2[28:29] <- 0b11          # size schedule
# set schedule up for iDCT / DCT inverse of half-swapped ordering
if (SVrm = 0b0110) | (SVrm = 0b1110) | (SVrm = 0b1111) then
    vlen[0:6] <- (0b00 || SVxd) + 0b0000001
    # set up template in SVSHAPE0
    SVSHAPE0[0:5] <- (0b0 || SVxd)   # xdim
    SVSHAPE0[12:17] <- (0b0 || SVzd) # zdim - "striding" (2D DCT)
    mscale <- (0b0 || SVzd) + 1
    if (SVrm = 0b1110) then
        SVSHAPE0[18:20] <- 0b001     # DCT opposite half-swap
    if (SVrm = 0b1111) then
        SVSHAPE0[30:31] <- 0b01      # FFT mode
    else
        SVSHAPE0[30:31] <- 0b11      # DCT mode
    SVSHAPE0[6:11] <- 0b000101       # DCT "half-swap" mode
# set schedule up for parallel reduction
if (SVrm = 0b0111) then
    # calculate the total number of operations (brute-force)
    vlen[0:6] <- [0] * 7
    itercount[0:6] <- (0b00 || SVxd) + 0b0000001
    step[0:6] <- 0b0000001
    i[0:6] <- 0b0000000
    do while step
```
-------------
\newpage{}
# svindex instruction
SVI-Form
| 0:5|6:10 |11:15 |16:20 | 21:25 | 26:31 | Form |
| -- | -- | --- | ---- | ----------- | ------| -------- |
| PO | SVG | rmm | SVd | ew/yx/mm/sk | XO | SVI-Form |
* svindex SVG,rmm,SVd,ew,SVyx,mm,sk
Pseudo-code:
```
# based on nearest MAXVL compute other dimension
MVL <- SVSTATE[0:6]
d <- [0] * 6
dim <- SVd+1
do while d*dim < MVL
    d <- d + 1
```
* **Modulo 1D mapping**: `SVd=N,sk=0,yx=0`, remaps the element indices
modulo `N`. There is no requirement to set VL equal to a multiple of N.
* **Modulo 2D transposed**: `SVd=M,sk=0,yx=1`, sets
`xdim=M,ydim=CEIL(MAXVL/M)`.
Beyond these mappings it becomes necessary to write directly to
the SVSTATE SPRs manually.
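The `CEIL(MAXVL/M)` "other dimension" computation used by these mappings can be sketched as (illustrative; `other_dim` is a hypothetical helper name):

```python
# Smallest d such that d * (SVd+1) >= MAXVL, i.e. a ceiling division,
# matching ydim=CEIL(MAXVL/M) in the modulo 2D transposed mapping.
def other_dim(svd, maxvl):
    dim = svd + 1
    d = 0
    while d * dim < maxvl:   # mirrors the do-while loop in the pseudo-code
        d += 1
    return d

print(other_dim(3, 10))   # dim=4 -> CEIL(10/4) = 3
```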
-------------
\newpage{}
# svshape2 (offset-priority)
SVM2-Form
| 0:5|6:9 |10|11:15 |16:20 | 21:24 | 25 | 26:31 | Form |
| -- |----|--| --- | ----- | ------ | -- | ------| -------- |
| PO |offs|yx| rmm | SVd | 100/mm | sk | XO | SVM2-Form |
* svshape2 offs,yx,rmm,SVd,sk,mm
Pseudo-code:
```
# based on nearest MAXVL compute other dimension
MVL <- SVSTATE[0:6]
d <- [0] * 6
dim <- SVd+1
do while d*dim