Since SSBOs can be written by a different GPU thread, copy propagating a
read can cause the value to magically change between uses. SSBO reads are
also very expensive, so performing the read twice is slower as well.
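
For illustration, here is a minimal, self-contained C++ sketch of the kind
of guard this patch adds. The names (VarMode, Deref, is_unsafe_copy_source)
are hypothetical stand-ins rather than Mesa's real IR classes; the point is
only that copy propagation must refuse to treat an SSBO or shared variable
as a copy source, since another invocation may write it between the original
read and the propagated one.

   #include <cassert>

   // Hypothetical stand-ins for the relevant ir_var_* storage modes.
   enum class VarMode {
      Auto,            // ordinary temporary
      ShaderStorage,   // SSBO-backed variable
      ShaderShared,    // compute-shader shared memory
   };

   // Hypothetical simplified dereference: a variable with a storage mode.
   struct Deref {
      VarMode mode;
   };

   // Copy propagation may only reuse a value when no other GPU thread can
   // change it between the original read and the propagated read.  SSBO and
   // shared variables give no such guarantee, so they must not be used as a
   // copy-propagation source (and re-reading an SSBO is expensive anyway).
   static bool is_unsafe_copy_source(const Deref &d)
   {
      return d.mode == VarMode::ShaderStorage ||
             d.mode == VarMode::ShaderShared;
   }

   int main()
   {
      assert(!is_unsafe_copy_source(Deref{VarMode::Auto}));
      assert(is_unsafe_copy_source(Deref{VarMode::ShaderStorage}));
      assert(is_unsafe_copy_source(Deref{VarMode::ShaderShared}));
      return 0;
   }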
The same shader was helped by this patch as by the previous patch.
Haswell, Broadwell, and Skylake had similar results. (Skylake shown)
total instructions in shared programs: 14399119 -> 14399113 (<.01%)
instructions in affected programs: 683 -> 677 (-0.88%)
helped: 1
HURT: 0

total cycles in shared programs: 532973113 -> 532971865 (<.01%)
cycles in affected programs: 524666 -> 523418 (-0.24%)
helped: 1
HURT: 0
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=106774
   if (!lhs || !(lhs->type->is_scalar() || lhs->type->is_vector()))
      return;
+   if (lhs->var->data.mode == ir_var_shader_storage ||
+       lhs->var->data.mode == ir_var_shader_shared)
+      return;
+
   ir_dereference_variable *rhs = ir->rhs->as_dereference_variable();
   if (!rhs) {
      ir_swizzle *swiz = ir->rhs->as_swizzle();
      if (!swiz)
         return;
      rhs = swiz->val->as_dereference_variable();
      if (!rhs)
         return;
      orig_swizzle[3] = swiz->mask.w;
   }
+   if (rhs->var->data.mode == ir_var_shader_storage ||
+       rhs->var->data.mode == ir_var_shader_shared)
+      return;
+
   /* Move the swizzle channels out to the positions they match in the
    * destination. We don't want to have to rewrite the swizzle[]
    * array every time we clear a bit of the write_mask.
    */