From: Jason Ekstrand
Date: Fri, 12 Jul 2019 23:47:15 +0000 (-0500)
Subject: intel: Run the optimization loop before and after lowering int64
X-Git-Url: https://git.libre-soc.org/?a=commitdiff_plain;h=974fabe810cad996cdf0c1acbcd1cba6e04f7357;p=mesa.git

intel: Run the optimization loop before and after lowering int64

For bindless SSBO access, we have to do 64-bit address calculations. On
ICL and above, we don't have 64-bit integer support, so we have to lower
the address calculations to 32-bit arithmetic. If we don't run the
optimization loop before lowering, we won't fold any of the address
chain calculations before lowering 64-bit arithmetic, and they aren't
really foldable afterwards.

This cuts the size of the generated code in the compute shader in
dEQP-VK.ssbo.phys.layout.random.16bit.scalar.13 by around 30%.

Reviewed-by: Kenneth Graunke
Reviewed-by: Caio Marcelo de Oliveira Filho
---

diff --git a/src/intel/compiler/brw_nir.c b/src/intel/compiler/brw_nir.c
index ef387e51601..a0805758160 100644
--- a/src/intel/compiler/brw_nir.c
+++ b/src/intel/compiler/brw_nir.c
@@ -821,7 +821,6 @@ brw_postprocess_nir(nir_shader *nir, const struct brw_compiler *compiler,
    UNUSED bool progress; /* Written by OPT */
 
    OPT(brw_nir_lower_mem_access_bit_sizes);
-   OPT(nir_lower_int64, nir->options->lower_int64_options);
 
    do {
       progress = false;
@@ -830,6 +829,9 @@ brw_postprocess_nir(nir_shader *nir, const struct brw_compiler *compiler,
 
    brw_nir_optimize(nir, compiler, is_scalar, false);
 
+   if (OPT(nir_lower_int64, nir->options->lower_int64_options))
+      brw_nir_optimize(nir, compiler, is_scalar, false);
+
    if (devinfo->gen >= 6) {
       /* Try and fuse multiply-adds */
       OPT(brw_nir_opt_peephole_ffma);