radv: only enable shaderInt16 on GFX9+ and LLVM7+
author    Samuel Pitoiset <samuel.pitoiset@gmail.com>
          Thu, 20 Sep 2018 20:17:03 +0000 (22:17 +0200)
committer Samuel Pitoiset <samuel.pitoiset@gmail.com>
          Fri, 21 Sep 2018 08:56:17 +0000 (10:56 +0200)
The throughput of 16-bit integer operations is similar to 32-bit
integers on GFX8, and AMDVLK does not expose 16-bit integers on
pre-Vega either. On GFX9+, only LLVM 7+ has support.

This fixes a bunch of CTS crashes on GFX9/LLVM 6.

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
src/amd/vulkan/radv_device.c

index 31d9bb59637bfe9410dd5edac1223d4f0e0a9a17..f7752eac83b4adf3e7c9cdc25fba98a04eddcc5c 100644 (file)
@@ -763,7 +763,7 @@ void radv_GetPhysicalDeviceFeatures(
                .shaderCullDistance                       = true,
                .shaderFloat64                            = true,
                .shaderInt64                              = true,
-               .shaderInt16                              = true,
+               .shaderInt16                              = pdevice->rad_info.chip_class >= GFX9 && HAVE_LLVM >= 0x700,
                .sparseBinding                            = true,
                .variableMultisampleRate                  = true,
                .inheritedQueries                         = true,