nir/search: Use the correct bit size for integer comparisons
author    Jason Ekstrand <jason.ekstrand@intel.com>
          Thu, 19 Jan 2017 19:28:31 +0000 (11:28 -0800)
committer Jason Ekstrand <jason.ekstrand@intel.com>
          Sat, 21 Jan 2017 18:34:21 +0000 (10:34 -0800)
commit    bb96b034616d9d099752efb005b5c05e8644059c
tree      0936de16fa395d441300fb6b801c6de4b0d7afe7
parent    817f9e3b17c784cbe40639a4b370edc762bd2513

The previous code always compared integers as 64-bit.  Due to variations
in sign-extension in the code generated by nir_opt_algebraic.py, this
meant that nir_search didn't always do what you want.  Instead, 32-bit
values should be matched as 32-bit and 64-bit values should be matched
as 64-bit.  While we're here, we unify the unsigned and signed paths.
Now that we're using the right bit size, they should be the same since
the only difference we had before was sign extension.

This gets the UE4 bitfield_extract optimization working again.  It had
stopped working due to the constant 0xff00ff00 getting sign-extended
when it shouldn't have.

Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
Cc: "17.0 13.0" <mesa-stable@lists.freedesktop.org>
src/compiler/nir/nir_search.c