From 7b4c24eb679e248894751f30e2ea842dcf3f21f3 Mon Sep 17 00:00:00 2001
From: Rhys Perry
Date: Tue, 4 Aug 2020 19:20:21 +0100
Subject: [PATCH] aco: don't move memory accesses to before control barriers
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

Fixes random failures of
dEQP-VK.image.qualifiers.volatile.cube_array.r32i and similar tests on
Vega.

fossil-db (Navi):
Totals from 6 (0.00% of 135946) affected shaders:
VMEM: 1218 -> 1110 (-8.87%); split: +2.46%, -11.33%
SMEM: 174 -> 189 (+8.62%)
Copies: 84 -> 87 (+3.57%)

Signed-off-by: Rhys Perry
Reviewed-by: Daniel Schürmann
Fixes: cd392a10d05 ('radv/aco,aco: use scoped barriers')
Part-of:
---
 src/amd/compiler/aco_scheduler.cpp | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/src/amd/compiler/aco_scheduler.cpp b/src/amd/compiler/aco_scheduler.cpp
index 52b64f02116..9d6810672b5 100644
--- a/src/amd/compiler/aco_scheduler.cpp
+++ b/src/amd/compiler/aco_scheduler.cpp
@@ -495,6 +495,12 @@ HazardResult perform_hazard_query(hazard_query *query, Instruction *instr, bool
    if (first->bar_classes && second->bar_classes)
       return hazard_fail_barrier;
 
+   /* Don't move memory accesses to before control barriers. I don't think
+    * this is necessary for the Vulkan memory model, but it might be for GLSL450. */
+   unsigned control_classes = storage_buffer | storage_atomic_counter | storage_image | storage_shared;
+   if (first->has_control_barrier && ((second->access_atomic | second->access_relaxed) & control_classes))
+      return hazard_fail_barrier;
+
    /* don't move memory loads/stores past potentially aliasing loads/stores */
    unsigned aliasing_storage = instr->format == Format::SMEM ?
       query->aliasing_storage_smem :
-- 
2.30.2
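
The hazard check added by this patch can be illustrated in isolation. The sketch below is a hypothetical, heavily simplified model (the struct, enum values, and function here are invented for illustration, not ACO's actual definitions): it shows how a control barrier recorded on the `first` side blocks hoisting any buffer/shared/image/atomic-counter access recorded on the `second` side, in addition to the pre-existing barrier-vs-barrier rule.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical storage-class bits, loosely mirroring aco's storage_* classes.
enum storage_class : uint16_t {
   storage_buffer         = 1 << 0,
   storage_atomic_counter = 1 << 1,
   storage_image          = 1 << 2,
   storage_shared         = 1 << 3,
};

// Simplified stand-in for the per-side memory-event summary in hazard_query.
struct memory_event_set {
   bool has_control_barrier = false;
   uint16_t access_atomic  = 0; // classes touched by atomic accesses
   uint16_t access_relaxed = 0; // classes touched by relaxed accesses
   uint16_t bar_classes    = 0; // classes ordered by memory barriers
};

enum HazardResult { hazard_success, hazard_fail_barrier };

// The two ordering rules visible in the hunk: barriers never reorder with
// each other, and a memory access never moves to before a control barrier.
HazardResult check_barrier_hazard(const memory_event_set *first,
                                  const memory_event_set *second)
{
   if (first->bar_classes && second->bar_classes)
      return hazard_fail_barrier;

   unsigned control_classes =
      storage_buffer | storage_atomic_counter | storage_image | storage_shared;
   if (first->has_control_barrier &&
       ((second->access_atomic | second->access_relaxed) & control_classes))
      return hazard_fail_barrier;

   return hazard_success;
}
```

For example, with `first->has_control_barrier = true` and `second->access_relaxed = storage_image` (the dEQP volatile-image case), the new clause fires and the scheduler refuses the move, whereas before the patch only the barrier-vs-barrier test was applied.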