The Vulkan spec says:
"shaderInt16 specifies whether 16-bit integers (signed and unsigned)
are supported in shader code. If this feature is not enabled, 16-bit
integer types must not be used in shader code."
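For context, here is a minimal application-side sketch (not part of this change; the variable names are hypothetical) of how the feature is queried and enabled through the core Vulkan 1.0 API before 16-bit integer types are used in shaders:

    VkPhysicalDeviceFeatures supported;
    vkGetPhysicalDeviceFeatures(physical_device, &supported);

    VkPhysicalDeviceFeatures enabled = {0};
    if (supported.shaderInt16)
            enabled.shaderInt16 = VK_TRUE; /* only then may shaders use 16-bit ints */

    VkDeviceCreateInfo device_info = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
            /* queue create infos etc. omitted */
            .pEnabledFeatures = &enabled,
    };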
It should be safe to enable this feature because 16-bit integers are expected
to be fully supported with LLVM, and with ACO on GFX8+. On GFX8 and earlier
generations, the throughput of 16-bit integer operations is the same as for
32-bit, but that shouldn't change anything.
To support GFX6-GFX7 with ACO, the 16-bit conversions have to be implemented
without SDWA.
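As a rough C-level illustration of what those conversions compute (hypothetical sketch only; the actual ACO lowering would emit GFX6-GFX7 ALU instructions rather than SDWA sub-dword selects):

    #include <stdint.h>

    /* Sign-extend the low 16 bits of a 32-bit value:
     * shift up, then arithmetic shift back down. */
    static inline uint32_t sext_i16(uint32_t v)
    {
            return (uint32_t)((int32_t)(v << 16) >> 16);
    }

    /* Zero-extend the low 16 bits of a 32-bit value. */
    static inline uint32_t zext_u16(uint32_t v)
    {
            return v & 0xffffu;
    }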
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4874>
                .shaderCullDistance                       = true,
                .shaderFloat64                            = true,
                .shaderInt64                              = true,
-               .shaderInt16                              = pdevice->rad_info.chip_class >= GFX9,
+               .shaderInt16                              = !pdevice->use_aco || pdevice->rad_info.chip_class >= GFX8,
                .sparseBinding                            = true,
                .variableMultisampleRate                  = true,
                .inheritedQueries                         = true,