intel/nir: Stop adding redundant barriers
author Jason Ekstrand <jason@jlekstrand.net>
Tue, 7 Jan 2020 20:58:45 +0000 (14:58 -0600)
committer Marge Bot <eric+marge@anholt.net>
Mon, 13 Jan 2020 17:23:47 +0000 (17:23 +0000)
Now that both GLSL and SPIR-V are adding shared and tcs_patch barriers
(as appropriate) prior to the nir_intrinsic_barrier, we don't need to do
it ourselves in the back-end.  This reverts commit
26e950a5de01564e3b5f2148ae994454ae5205fe.
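
For illustration, the frontends now emit roughly the following shape
(a sketch only; emit_cs_barrier is a hypothetical helper, not actual
GLSL or SPIR-V frontend code):

    /* Emit the memory barrier first so shared-memory accesses are
     * ordered, then the execution barrier itself.  This is the same
     * pair the back-end used to construct, now built by the frontend.
     */
    static void
    emit_cs_barrier(nir_builder *b)
    {
       nir_intrinsic_instr *mem_bar =
          nir_intrinsic_instr_create(b->shader,
                                     nir_intrinsic_memory_barrier_shared);
       nir_builder_instr_insert(b, &mem_bar->instr);

       nir_intrinsic_instr *exec_bar =
          nir_intrinsic_instr_create(b->shader, nir_intrinsic_barrier);
       nir_builder_instr_insert(b, &exec_bar->instr);
    }

Since the nir_intrinsic_barrier the back-end sees is always preceded by
the appropriate memory barrier, the lowering below is redundant.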

Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/merge_requests/3307>

src/intel/compiler/brw_nir_lower_cs_intrinsics.c

index 3f48a3c5dda6c1c8110643b34cb6cc18248a20bf..434ad005281172e3268359c12e9de411fc33d172 100644
@@ -55,20 +55,6 @@ lower_cs_intrinsics_convert_block(struct lower_intrinsics_state *state,
 
       nir_ssa_def *sysval;
       switch (intrinsic->intrinsic) {
-      case nir_intrinsic_barrier: {
-         /* Our HW barrier instruction doesn't do a memory barrier for us but
-          * the GLSL barrier() intrinsic does for shared memory.  Insert a
-          * shared memory barrier before every barrier().
-          */
-         b->cursor = nir_before_instr(&intrinsic->instr);
-
-         nir_intrinsic_instr *shared_barrier =
-            nir_intrinsic_instr_create(b->shader,
-                                       nir_intrinsic_memory_barrier_shared);
-         nir_builder_instr_insert(b, &shared_barrier->instr);
-         continue;
-      }
-
       case nir_intrinsic_load_local_invocation_index:
       case nir_intrinsic_load_local_invocation_id: {
          /* First time we are using those, so let's calculate them. */