# Unofficial GCN/RDNA ISA reference errata

## `v_sad_u32`

The Vega ISA reference writes its behaviour as:

```
D.u = abs(S0.i - S1.i) + S2.u.
```

This is incorrect. The actual behaviour is what is written in the GCN3 reference
guide:

```
ABS_DIFF (A,B) = (A>B) ? (A-B) : (B-A)
D.u = ABS_DIFF (S0.u,S1.u) + S2.u
```

The instruction doesn't subtract S0 and S1 and take the absolute value (the
_signed_ distance); it uses the _unsigned_ distance between the operands. So
`v_sad_u32(-5, 0, 0)` would return `4294967291` (`-5` interpreted as unsigned),
not `5`.
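
As an illustration, a sketch with arbitrary register numbers:

```
v_sad_u32 v0, v1, v2, v3
; with v1 = -5 (0xFFFFFFFB), v2 = 0, v3 = 0:
; v0 = ABS_DIFF(0xFFFFFFFB, 0x00000000) + 0 = 0xFFFFFFFB (4294967291), not 5
```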

## `s_bfe_*`

The RDNA, Vega and GCN3 ISA references all state that these instructions don't
write SCC. They do.
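
A hedged sketch (arbitrary registers and label) of code that relies on this,
assuming the usual SALU rule that SCC indicates a non-zero result:

```
s_bfe_u32 s0, s1, s2        ; extracts a bit field and, despite the docs, also writes SCC
s_cbranch_scc0 .L_skip      ; assuming SCC follows the usual SALU rule (set if the result is non-zero)
```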

## `v_bcnt_u32_b32`

The Vega ISA reference writes its behaviour as:

```
D.u = 0;
for i in 0 ... 31 do
D.u += (S0.u[i] == 1 ? 1 : 0);
endfor.
```

This is incorrect. The actual behaviour (and number of operands) is what
is written in the GCN3 reference guide:

```
D.u = CountOneBits(S0.u) + S1.u.
```
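
For example, the extra operand makes it usable as an accumulator (a sketch,
registers chosen arbitrarily):

```
v_bcnt_u32_b32 v2, v0, 0     ; v2 = popcount(v0)
v_bcnt_u32_b32 v2, v1, v2    ; v2 = popcount(v1) + v2, accumulating across dwords
```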

## SMEM stores

The Vega ISA reference doesn't say this (or doesn't make it clear), but
the offset for SMEM stores must be in m0 if IMM == 0.

The RDNA ISA doesn't mention SMEM stores at all, but they seem to be supported
by the chip and are present in LLVM. AMD devs, however, highly recommend avoiding
these instructions.
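
A hedged sketch of what the m0 requirement implies (LLVM-style syntax, arbitrary
registers; the exact operand forms accepted by an assembler may differ):

```
s_mov_b32 m0, s12                      ; IMM == 0: the store offset has to live in m0
s_buffer_store_dword s4, s[8:11], m0
```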

## SMEM atomics

RDNA ISA: same as the SMEM stores, the ISA pretends they don't exist, but they
are there in LLVM.

## VMEM stores

All reference guides say (under "Vector Memory Instruction Data Dependencies"):

> When a VM instruction is issued, the address is immediately read out of VGPRs
> and sent to the texture cache. Any texture or buffer resources and samplers
> are also sent immediately. However, write-data is not immediately sent to the
> texture cache.

Reading that, one might think that waitcnts need to be added when writing to
the registers used for a VMEM store's data. Experimentation has shown that this
does not seem to be the case on GFX8 and GFX9 (GFX6 and GFX7 are untested). It
also seems unlikely, since NOPs are apparently needed in a subset of these
situations.
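
In other words, a sequence like the following appears to be safe on GFX8/GFX9
(a sketch with arbitrary registers):

```
buffer_store_dword v1, v0, s[4:7], 0 offen
v_mov_b32 v1, v2        ; overwriting the store's data VGPR without a waitcnt appears to be fine
```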

## MIMG opcodes on GFX8/GCN3

The `image_atomic_{swap,cmpswap,add,sub}` opcodes in the GCN3 ISA reference
guide are incorrect. The Vega ISA reference guide has the correct ones.

## VINTRP encoding

VEGA ISA doc says the encoding should be `110010` but `110101` works.

## VOP1 instructions encoded as VOP3

RDNA ISA doc says that `0x140` should be added to the opcode, but that doesn't
work. What works is adding `0x180`, which LLVM also does.

## FLAT, Scratch, Global instructions

The NV bit was removed in RDNA, but some parts of the doc still mention it.

RDNA ISA doc 13.8.1 says that SADDR should be set to 0x7f when ADDR is used, but
9.3.1 says it should be set to NULL. We assume 9.3.1 is correct and set it to
SGPR_NULL.

## Legacy instructions

Some instructions have a `_LEGACY` variant which implements "DX9 rules", in which
the zero "wins" in multiplications, i.e. `0.0*x` is always `0.0`. The VEGA ISA
mentions `V_MAC_LEGACY_F32` but this instruction is not really there on VEGA.
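
For example (a sketch with arbitrary registers):

```
v_mul_legacy_f32 v0, v1, v2    ; DX9 rules: if v1 is 0.0, v0 is 0.0 even when v2 is Inf or NaN
```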

## RDNA L0, L1 cache and DLC, GLC bits

The old L1 cache was renamed to L0, and a new L1 cache was added to RDNA. The
L1 cache is 1 cache per shader array. Some instruction encodings have DLC and
GLC bits that interact with the cache.

* DLC ("device level coherent") bit: controls the L1 cache
* GLC ("globally coherent") bit: controls the L0 cache

The recommendation from AMD devs is to always set these two bits at the same time,
as it doesn't make much sense to set them independently, aside from some
circumstances (e.g. we needn't set DLC when only one shader array is used).

Stores and atomics always bypass the L1 cache, so they don't support the DLC bit,
and it shouldn't be set in these cases. Setting the DLC for these cases can result
in graphical glitches or hangs.
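
A sketch of what this means for load/store encodings (arbitrary registers,
GFX10 assembler syntax):

```
global_load_dword v0, v[2:3], off glc dlc    ; coherent load: bypass both L0 (glc) and L1 (dlc)
global_store_dword v[2:3], v1, off glc       ; stores already bypass L1, so no dlc here
```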

## RDNA `s_dcache_wb`

The `s_dcache_wb` instruction is not mentioned in the RDNA ISA doc, but it is
needed in order to achieve correct behavior in some SSBO CTS tests.
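
A hedged sketch of the kind of sequence where it matters (arbitrary registers;
only an illustration, not the exact CTS case):

```
s_buffer_store_dword s4, s[8:11], 0x0
s_dcache_wb                             ; write back the scalar cache so the store becomes visible
s_waitcnt lgkmcnt(0)
```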

## RDNA subvector mode

The documentation of `s_subvector_loop_begin` and `s_subvector_mode_end` is not clear
on what sort of addressing should be used, but it says that it
"is equivalent to an `S_CBRANCH` with extra math", so the subvector loop handling
in ACO is done according to the `s_cbranch` doc.

# Hardware Bugs

## SMEM corrupts VCCZ on SI/CI

[See this LLVM source.](https://github.com/llvm/llvm-project/blob/acb089e12ae48b82c0b05c42326196a030df9b82/llvm/lib/Target/AMDGPU/SIInsertWaits.cpp#L580-L616)

After issuing an SMEM instruction, we need to wait for it to finish and then
write to vcc (for example, `s_mov_b64 vcc, vcc`) to correct vccz.

Currently, we don't do this.
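
A sketch of the workaround described above (arbitrary registers and label):

```
s_load_dwordx4 s[8:11], s[0:1], 0x0
s_waitcnt lgkmcnt(0)            ; wait for the SMEM instruction to finish
s_mov_b64 vcc, vcc              ; rewrite vcc so that vccz is correct again
s_cbranch_vccz .L_else          ; vccz can now be trusted
```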

## GCN / GFX6 hazards

### VINTRP followed by a read with `v_readfirstlane` or `v_readlane`

It's required to insert 1 wait state if the dst VGPR of any `v_interp_*` is
read by a following `v_readfirstlane` or `v_readlane` to fix GPU hangs on GFX6.
Note that `v_writelane_*` is apparently not affected. This hazard isn't
documented anywhere but AMD confirmed it.
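
A sketch of the mitigation (arbitrary registers; `s_nop 0` provides the single
wait state):

```
v_interp_p2_f32 v0, v1, attr0.x
s_nop 0                          ; 1 wait state between the interp write and the lane read
v_readfirstlane_b32 s0, v0
```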

## RDNA / GFX10 hazards

### SMEM store followed by a load with the same address

We found that an `s_buffer_load` will produce incorrect results if it is preceded
by an `s_buffer_store` with the same address. Inserting an `s_nop` between them
does not mitigate the issue, so an `s_waitcnt lgkmcnt(0)` must be inserted.
This is not mentioned by LLVM among the other GFX10 bugs, but LLVM doesn't use
SMEM stores, so it's not surprising that they didn't notice it.
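
A sketch of the required mitigation (arbitrary registers):

```
s_buffer_store_dword s4, s[8:11], 0x0
s_waitcnt lgkmcnt(0)                    ; an s_nop here is not sufficient
s_buffer_load_dword s5, s[8:11], 0x0
```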

### VMEMtoScalarWriteHazard

Triggered by:
VMEM/FLAT/GLOBAL/SCRATCH/DS instruction reads an SGPR (or EXEC, or M0).
Then, a SALU/SMEM instruction writes the same SGPR.

Mitigated by:
A VALU instruction or an `s_waitcnt vmcnt(0)` between the two instructions.
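
A sketch of the pattern and mitigation (arbitrary registers):

```
buffer_load_dword v0, v1, s[4:7], s8 offen   ; VMEM reads s[4:7] and s8
s_waitcnt vmcnt(0)                           ; or any VALU instruction
s_mov_b32 s8, 0                              ; SALU writes an SGPR the VMEM instruction read
```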

### SMEMtoVectorWriteHazard

Triggered by:
An SMEM instruction reads an SGPR. Then, a VALU instruction writes that same SGPR.

Mitigated by:
Any non-SOPP SALU instruction (except `s_setvskip`, `s_version`, and any non-lgkmcnt `s_waitcnt`).
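
A sketch of the pattern and mitigation (arbitrary registers; wave64, so the
compare writes an SGPR pair):

```
s_buffer_load_dword s0, s[4:7], 0x0    ; SMEM reads s[4:7]
s_mov_b32 s4, s4                       ; any non-SOPP SALU instruction breaks the hazard
v_cmp_eq_u32 s[4:5], v0, v1            ; VALU writes s[4:5]
```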

### Offset3fBug

Any branch that is located at offset 0x3f will be buggy. Just insert some NOPs to make sure no branch
is located at this offset.

### InstFwdPrefetchBug

According to LLVM, the `s_inst_prefetch` instruction can cause a hang.
There are no further details.

### LdsMisalignedBug

When there is a misaligned multi-dword FLAT load/store instruction in WGP mode,
it needs to be split into multiple single-dword FLAT instructions.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### FlatSegmentOffsetBug

The 12-bit immediate OFFSET field of FLAT instructions must always be 0.
GLOBAL and SCRATCH are unaffected.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### VcmpxPermlaneHazard

Triggered by:
Any permlane instruction that follows any VOPC instruction.
AMD devs confirmed that despite the name, this doesn't only affect v_cmpx.

Mitigated by: any VALU instruction except `v_nop`.
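
A sketch of the pattern and mitigation (arbitrary registers):

```
v_cmp_eq_u32 vcc, v0, v1          ; any VOPC instruction, not only v_cmpx
v_mov_b32 v2, v2                  ; any VALU other than v_nop breaks the hazard
v_permlane16_b32 v3, v3, s0, s1
```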

### VcmpxExecWARHazard

Triggered by:
Any non-VALU instruction reads the EXEC mask. Then, any VALU instruction writes the EXEC mask.

Mitigated by:
A VALU instruction that writes an SGPR (or has a valid SDST operand), or `s_waitcnt_depctr 0xfffe`.
Note: `s_waitcnt_depctr` is an internal instruction, so there is no further information
about what it does or what its operand means.
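
A sketch of the pattern and one of the mitigations (arbitrary registers; GFX10
syntax where `v_cmpx` writes EXEC implicitly):

```
s_and_b64 s[0:1], s[0:1], exec    ; a non-VALU instruction reads EXEC
s_waitcnt_depctr 0xfffe           ; or a VALU instruction that writes an SGPR
v_cmpx_eq_u32 v0, v1              ; a VALU instruction writes EXEC
```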

### LdsBranchVmemWARHazard

Triggered by:
VMEM/GLOBAL/SCRATCH instruction, then a branch, then a DS instruction,
or vice versa: DS instruction, then a branch, then a VMEM/GLOBAL/SCRATCH instruction.

Mitigated by:
Only `s_waitcnt_vscnt null, 0`. Needed even if the first instruction is a load.
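
A sketch of the pattern and mitigation (arbitrary registers and label):

```
ds_read_b32 v0, v1
s_cbranch_scc1 .L_next
.L_next:
s_waitcnt_vscnt null, 0           ; required even though the following VMEM instruction is a load
global_load_dword v2, v[4:5], off
```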