XXH32 accesses its input through u32 *, and with strict aliasing the
compiler gets to assume those reads are independent of the u16 writes we
did in fd6_texture_key setup; depending on various tweaks to the code,
this would result in bad hashes being computed after inlining. The
failure was:
../src/util/hash_table.c:326:_mesa_hash_table_search_pre_hashed: Assertion
`ht->key_hash_function == ((void *)0) || hash == ht->key_hash_function(key)'
failed.
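
As a rough illustration (hypothetical struct and field names, not the
actual fd6_texture_key layout), the hazard looks roughly like this:

    #include <stdint.h>

    struct key { uint16_t a, b; };        /* members written as u16 */

    static uint32_t
    hash_key(const struct key *k)
    {
       /* Reading the same bytes back through a u32 * violates strict
        * aliasing, so the compiler may assume this load is independent
        * of the u16 stores and hoist it above them once it is inlined.
        */
       return *(const uint32_t *)k;
    }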
By setting these two flags, we always take the unaligned,
memcpy-the-32-bit-data path. I believe this should be the same perf on
x86 (which will happily do an unaligned 32-bit load in the end), while
it will be slower on ARM (where you have to do a special unaligned load
operation, IIRC). This should still be far faster than our old hash.
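
For reference, with XXH_FORCE_MEMORY_ACCESS set to 0 xxhash goes through
a memcpy-based read helper; a minimal sketch of that path (not the exact
upstream code):

    #include <stdint.h>
    #include <string.h>

    static uint32_t
    read32(const void *ptr)
    {
       /* The byte copy tells the compiler this read may alias anything,
        * so it cannot be reordered past the u16 stores, and on x86 it
        * still compiles down to a single (possibly unaligned) 32-bit
        * load.
        */
       uint32_t val;
       memcpy(&val, ptr, sizeof(val));
       return val;
    }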
Fixes: edd62619a1c4 ("freedreno: replace fnv1a hash function with xxhash")
Acked-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5271>
XXH32 6.8 GB/s 6.0 GB/s
*/
+/* Mesa leaves strict aliasing on in the compiler, and this code likes to
+ * dereference the passed-in data as u32 *, which means that the compiler is
+ * free to move the u32 read before the write of the struct members being
+ * hashed, and in practice it did in freedreno. Forcing these two things
+ * prevents it.
+ */
+#define XXH_FORCE_ALIGN_CHECK 0
+#define XXH_FORCE_MEMORY_ACCESS 0
+
#if defined (__cplusplus)
extern "C" {
#endif