# Unofficial GCN/RDNA ISA reference errata

## v_sad_u32

The Vega ISA reference writes its behaviour as:
```
D.u = abs(S0.i - S1.i) + S2.u.
```
This is incorrect. The actual behaviour is what is written in the GCN3 reference
guide:
```
ABS_DIFF (A,B) = (A>B) ? (A-B) : (B-A)
D.u = ABS_DIFF (S0.u,S1.u) + S2.u
```
The instruction doesn't subtract S0 and S1 and take the absolute value (the
_signed_ distance); it uses the _unsigned_ distance between the operands. So
`v_sad_u32(-5, 0, 0)` would return `4294967291` (`-5` interpreted as unsigned),
not `5`.
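
To make the unsigned-distance behaviour concrete, here is a minimal C sketch of
it (an illustration written for this document, not code from any ISA guide;
`sad_u32` is a made-up helper name):
```
#include <stdint.h>
#include <stdio.h>

/* Unsigned distance between S0 and S1, plus the S2 accumulator. */
static uint32_t sad_u32(uint32_t s0, uint32_t s1, uint32_t s2)
{
   uint32_t abs_diff = (s0 > s1) ? (s0 - s1) : (s1 - s0);
   return abs_diff + s2;
}

int main(void)
{
   /* -5 becomes the unsigned value 4294967291, so its distance to 0 is huge. */
   printf("%u\n", (unsigned)sad_u32((uint32_t)-5, 0, 0)); /* prints 4294967291 */
   return 0;
}
```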

## s_bfe_*

Both the Vega and GCN3 ISA references write that these instructions don't write
SCC. They do.
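
The exact SCC semantics are not spelled out above; the hedged C sketch below
assumes SCC is set when the result is non-zero (which is how LLVM models these
instructions) and guesses at the behaviour for field widths of 32 or more:
```
#include <stdbool.h>
#include <stdint.h>

/* Sketch of s_bfe_u32: field offset in S1[4:0], width in S1[22:16].
 * The point here is that SCC is written as well. */
static uint32_t s_bfe_u32(uint32_t s0, uint32_t s1, bool *scc)
{
   uint32_t offset = s1 & 0x1f;
   uint32_t width = (s1 >> 16) & 0x7f;
   uint32_t d = 0;

   if (width >= 32)
      d = s0 >> offset;
   else if (width > 0)
      d = (s0 >> offset) & ((1u << width) - 1u);

   *scc = d != 0; /* contrary to the Vega/GCN3 docs, SCC is updated */
   return d;
}
```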

## v_bcnt_u32_b32

The Vega ISA reference writes its behaviour as:
```
D.u = 0;
for i in 0 ... 31 do
D.u += (S0.u[i] == 1 ? 1 : 0);
endfor.
```
This is incorrect. The actual behaviour (and number of operands) is what
is written in the GCN3 reference guide:
```
D.u = CountOneBits(S0.u) + S1.u.
```
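
In other words, it is a population count with an extra accumulator operand. A
minimal C model (written here for illustration; `bcnt_u32_b32` is a made-up
name):
```
#include <stdint.h>

/* Count the set bits of S0 and add the second operand S1. */
static uint32_t bcnt_u32_b32(uint32_t s0, uint32_t s1)
{
   uint32_t count = 0;
   for (unsigned i = 0; i < 32; i++)
      count += (s0 >> i) & 1;
   return count + s1;
}
```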

## SMEM stores

The Vega ISA reference doesn't say this (or doesn't make it clear), but
the offset for SMEM stores must be in m0 if IMM == 0.

The RDNA ISA doesn't mention SMEM stores at all, but they seem to be supported
by the chip and are present in LLVM. However, AMD devs highly recommend avoiding
these instructions.

## SMEM atomics

RDNA ISA: same as the SMEM stores, the ISA pretends they don't exist, but they
are there in LLVM.

## VMEM stores

All reference guides say (under "Vector Memory Instruction Data Dependencies"):
> When a VM instruction is issued, the address is immediately read out of VGPRs
> and sent to the texture cache. Any texture or buffer resources and samplers
> are also sent immediately. However, write-data is not immediately sent to the
> texture cache.

Reading that, one might think that waitcnts need to be added when writing to
the registers used for a VMEM store's data. Experimentation has shown that this
does not seem to be the case on GFX8 and GFX9 (GFX6 and GFX7 are untested). It
also seems unlikely, since NOPs are apparently needed in a subset of these
situations.

## MIMG opcodes on GFX8/GCN3

The `image_atomic_{swap,cmpswap,add,sub}` opcodes in the GCN3 ISA reference
guide are incorrect. The Vega ISA reference guide has the correct ones.

## VINTRP encoding

The Vega ISA doc says the encoding should be `110010`, but `110101` works.

## VOP1 instructions encoded as VOP3

RDNA ISA doc says that `0x140` should be added to the opcode, but that doesn't
work. What works is adding `0x180`, which LLVM also does.
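
As a sketch (a hypothetical helper, just restating the rule above in C):
```
#include <stdint.h>

/* VOP3-encoded opcode of a VOP1 instruction on RDNA: the documented
 * 0x140 offset does not work, but 0x180 does (LLVM uses it too). */
static uint16_t vop1_as_vop3_opcode(uint16_t vop1_opcode)
{
   return vop1_opcode + 0x180;
}
```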

## FLAT, Scratch, Global instructions

The NV bit was removed in RDNA, but some parts of the doc still mention it.

RDNA ISA doc 13.8.1 says that SADDR should be set to 0x7f when ADDR is used, but
9.3.1 says it should be set to NULL. We assume 9.3.1 is correct and set it to
SGPR_NULL.

## Legacy instructions

Some instructions have a `_LEGACY` variant which implements "DX9 rules", in which
zero "wins" in multiplications, i.e. `0.0*x` is always `0.0`. The Vega ISA
mentions `V_MAC_LEGACY_F32`, but this instruction is not actually present on Vega.
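
A hedged C sketch of that multiplication rule (illustrative only; the name
`mul_legacy_f32` is made up and NaN/denormal details are not covered here):
```
/* DX9-style multiply: zero "wins", even if the other operand is NaN or Inf. */
static float mul_legacy_f32(float a, float b)
{
   if (a == 0.0f || b == 0.0f)
      return 0.0f;
   return a * b;
}
```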

## RDNA L0, L1 cache and DLC, GLC bits

The old L1 cache was renamed to L0, and a new L1 cache was added to RDNA. There
is one L1 cache per shader array. Some instruction encodings have DLC and
GLC bits that interact with the cache.

* DLC ("device level coherent") bit: controls the L1 cache
* GLC ("globally coherent") bit: controls the L0 cache

The recommendation from AMD devs is to always set these two bits at the same time,
as it doesn't make much sense to set them independently, aside from some
circumstances (e.g. we needn't set DLC when only one shader array is used).

Stores and atomics always bypass the L1 cache, so they don't support the DLC bit,
and it shouldn't be set in these cases. Setting the DLC for these cases can result
in graphical glitches.
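
A small sketch of how these rules combine when emitting a coherent access
(illustrative only; the struct and function names are made up):
```
#include <stdbool.h>

struct cache_bits {
   bool glc; /* controls the L0 cache */
   bool dlc; /* controls the L1 cache */
};

/* Pick GLC/DLC for an access that needs device-scope coherence. */
static struct cache_bits coherent_cache_bits(bool is_store_or_atomic)
{
   struct cache_bits bits;
   bits.glc = true;
   /* Stores and atomics bypass L1 and don't support DLC; setting it
    * anyway can cause graphical glitches. */
   bits.dlc = !is_store_or_atomic;
   return bits;
}
```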

## RDNA S_DCACHE_WB

The S_DCACHE_WB instruction is not mentioned in the RDNA ISA doc, but it is
needed in order to achieve correct behavior in some SSBO CTS tests.

## RDNA subvector mode

The documentation of S_SUBVECTOR_LOOP_BEGIN and S_SUBVECTOR_LOOP_END is not clear
on what sort of addressing should be used, but it says that it
"is equivalent to an S_CBRANCH with extra math", so the subvector loop handling
in ACO is done according to the S_CBRANCH doc.

# Hardware Bugs

## SMEM corrupts VCCZ on SI/CI

https://github.com/llvm/llvm-project/blob/acb089e12ae48b82c0b05c42326196a030df9b82/llvm/lib/Target/AMDGPU/SIInsertWaits.cpp#L580-L616
After issuing an SMEM instruction, we need to wait for it to finish and then
write to vcc (for example, `s_mov_b64 vcc, vcc`) to correct vccz.

Currently, we don't do this.

## GCN / GFX6 hazards

### VINTRP followed by a read with v_readfirstlane or v_readlane

One wait state must be inserted if the dst VGPR of any v_interp_* is read by a
following v_readfirstlane or v_readlane, to avoid GPU hangs on GFX6.
Note that v_writelane_* is apparently not affected. This hazard isn't
documented anywhere, but AMD confirmed it.

## RDNA / GFX10 hazards

### SMEM store followed by a load with the same address

We found that an `s_buffer_load` will produce incorrect results if it is preceded
by an `s_buffer_store` with the same address. Inserting an `s_nop` between them
does not mitigate the issue, so an `s_waitcnt lgkmcnt(0)` must be inserted.
This is not mentioned by LLVM among the other GFX10 bugs, but LLVM doesn't use
SMEM stores, so it's not surprising that they didn't notice it.

### VMEMtoScalarWriteHazard

Triggered by:
VMEM/FLAT/GLOBAL/SCRATCH/DS instruction reads an SGPR (or EXEC, or M0).
Then, a SALU/SMEM instruction writes the same SGPR.

Mitigated by:
A VALU instruction or an `s_waitcnt vmcnt(0)` between the two instructions.

### SMEMtoVectorWriteHazard

Triggered by:
An SMEM instruction reads an SGPR. Then, a VALU instruction writes that same SGPR.

Mitigated by:
Any non-SOPP SALU instruction (except `s_setvskip`, `s_version`, and any non-lgkmcnt `s_waitcnt`).

### Offset3fBug

Any branch that is located at offset 0x3f will be buggy. Just insert some NOPs
to make sure no branch is located at this offset.

### InstFwdPrefetchBug

According to LLVM, the `s_inst_prefetch` instruction can cause a hang.
There are no further details.

### LdsMisalignedBug

When there is a misaligned multi-dword FLAT load/store instruction in WGP mode,
it needs to be split into multiple single-dword FLAT instructions.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### FlatSegmentOffsetBug

The 12-bit immediate OFFSET field of FLAT instructions must always be 0.
GLOBAL and SCRATCH are unaffected.

ACO doesn't use FLAT load/store on GFX10, so it is unaffected.

### VcmpxPermlaneHazard

Triggered by:
Any permlane instruction that follows any VOPC instruction.
Confirmed by AMD devs that despite the name, this doesn't only affect v_cmpx.

Mitigated by: any VALU instruction except `v_nop`.

### VcmpxExecWARHazard

Triggered by:
Any non-VALU instruction reads the EXEC mask. Then, any VALU instruction writes the EXEC mask.

Mitigated by:
A VALU instruction that writes an SGPR (or has a valid SDST operand), or `s_waitcnt_depctr 0xfffe`.
Note: `s_waitcnt_depctr` is an internal instruction, so there is no further information
about what it does or what its operand means.

### LdsBranchVmemWARHazard

Triggered by:
VMEM/GLOBAL/SCRATCH instruction, then a branch, then a DS instruction,
or vice versa: DS instruction, then a branch, then a VMEM/GLOBAL/SCRATCH instruction.

Mitigated by:
Only `s_waitcnt_vscnt null, 0`. Needed even if the first instruction is a load.