x86: Fix -O0 intrinsic *gather*/*scatter* macros [PR94832]
As reported in the PR, most -O0 intrinsic macros wrap their argument
uses in ()s, or use them in contexts where passing a complex expression
as the argument poses no problem (e.g. when the argument use sits
between commas, between ( and a comma, or between a comma and ) etc.).
The gather/scatter macros in particular don't do this, so if one passes
e.g. x + y as an argument to a macro whose corresponding inline
function casts that argument, the macro's (int) ARG expands to
(int) x + y rather than (int) (x + y).
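For illustration only (not part of the patch), here is a minimal
standalone example of the precedence problem; SCALE_BAD and SCALE_GOOD
are hypothetical stand-ins for the headers' handling of a cast operand,
not the actual header code:

  #include <stdio.h>

  #define SCALE_BAD(S)  ((int) S)    /* parameter use not parenthesized */
  #define SCALE_GOOD(S) ((int) (S))  /* parameter use wrapped in parens */

  int
  main (void)
  {
    double a = 1.5, b = 0.75;
    /* SCALE_BAD (a + b) expands to ((int) a + b): the cast binds only
       to a, yielding 1 + 0.75 = 1.75 instead of (int) 2.25 = 2.  */
    printf ("%f\n", (double) SCALE_BAD (a + b)); /* prints 1.750000 */
    printf ("%d\n", SCALE_GOOD (a + b));         /* prints 2 */
    return 0;
  }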
The following patch fixes those issues in the *gather*/*scatter*
macros; additionally, the AVX2 macros were passing an incorrect mask
of e.g.
(__v2df)_mm_set1_pd((double)(long long int) -1)
which is IMHO equivalent to
(__v2df){-1.0, -1.0}
when what is really wanted is a __v2df vector with all bits set.
I've used what the inline functions use for those cases.
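Again for illustration only (not part of the patch): the difference
between the two mask expansions can be seen by dumping the lane bit
patterns; comparing zero with itself via _mm_cmpeq_pd is the same idiom
the inline functions use to get an all-ones vector:

  #include <immintrin.h>
  #include <stdio.h>
  #include <string.h>

  int
  main (void)
  {
    /* The old macro expansion: each lane holds -1.0, i.e.
       0xbff0000000000000, which is not all bits set.  */
    __m128d old_mask = _mm_set1_pd ((double) (long long int) -1);
    /* Comparing zero with itself for equality yields lanes with all
       bits set, 0xffffffffffffffff.  */
    __m128d new_mask = _mm_cmpeq_pd (_mm_setzero_pd (),
				     _mm_setzero_pd ());

    unsigned long long o[2], n[2];
    memcpy (o, &old_mask, sizeof o);
    memcpy (n, &new_mask, sizeof n);
    printf ("old %016llx, new %016llx\n", o[0], n[0]);
    return 0;
  }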
2020-04-29  Jakub Jelinek  <jakub@redhat.com>
PR target/94832
* config/i386/avx2intrin.h (_mm_mask_i32gather_pd,
_mm256_mask_i32gather_pd, _mm_mask_i64gather_pd,
_mm256_mask_i64gather_pd, _mm_mask_i32gather_ps,
_mm256_mask_i32gather_ps, _mm_mask_i64gather_ps,
_mm256_mask_i64gather_ps, _mm_i32gather_epi64,
_mm_mask_i32gather_epi64, _mm256_i32gather_epi64,
_mm256_mask_i32gather_epi64, _mm_i64gather_epi64,
_mm_mask_i64gather_epi64, _mm256_i64gather_epi64,
_mm256_mask_i64gather_epi64, _mm_i32gather_epi32,
_mm_mask_i32gather_epi32, _mm256_i32gather_epi32,
_mm256_mask_i32gather_epi32, _mm_i64gather_epi32,
_mm_mask_i64gather_epi32, _mm256_i64gather_epi32,
_mm256_mask_i64gather_epi32): Surround macro parameter uses with
parens.
(_mm_i32gather_pd, _mm256_i32gather_pd, _mm_i64gather_pd,
_mm256_i64gather_pd, _mm_i32gather_ps, _mm256_i32gather_ps,
_mm_i64gather_ps, _mm256_i64gather_ps): Likewise.  Don't use
as mask a vector containing -1.0 or -1.0f elts, but instead a vector
with all bits set, using _mm*_cmpeq_p? with zero operands.
* config/i386/avx512fintrin.h (_mm512_i32gather_ps,
_mm512_mask_i32gather_ps, _mm512_i32gather_pd,
_mm512_mask_i32gather_pd, _mm512_i64gather_ps,
_mm512_mask_i64gather_ps, _mm512_i64gather_pd,
_mm512_mask_i64gather_pd, _mm512_i32gather_epi32,
_mm512_mask_i32gather_epi32, _mm512_i32gather_epi64,
_mm512_mask_i32gather_epi64, _mm512_i64gather_epi32,
_mm512_mask_i64gather_epi32, _mm512_i64gather_epi64,
_mm512_mask_i64gather_epi64, _mm512_i32scatter_ps,
_mm512_mask_i32scatter_ps, _mm512_i32scatter_pd,
_mm512_mask_i32scatter_pd, _mm512_i64scatter_ps,
_mm512_mask_i64scatter_ps, _mm512_i64scatter_pd,
_mm512_mask_i64scatter_pd, _mm512_i32scatter_epi32,
_mm512_mask_i32scatter_epi32, _mm512_i32scatter_epi64,
_mm512_mask_i32scatter_epi64, _mm512_i64scatter_epi32,
_mm512_mask_i64scatter_epi32, _mm512_i64scatter_epi64,
_mm512_mask_i64scatter_epi64): Surround macro parameter uses with
parens.
* config/i386/avx512pfintrin.h (_mm512_prefetch_i32gather_pd,
_mm512_prefetch_i32gather_ps, _mm512_mask_prefetch_i32gather_pd,
_mm512_mask_prefetch_i32gather_ps, _mm512_prefetch_i64gather_pd,
_mm512_prefetch_i64gather_ps, _mm512_mask_prefetch_i64gather_pd,
_mm512_mask_prefetch_i64gather_ps, _mm512_prefetch_i32scatter_pd,
_mm512_prefetch_i32scatter_ps, _mm512_mask_prefetch_i32scatter_pd,
_mm512_mask_prefetch_i32scatter_ps, _mm512_prefetch_i64scatter_pd,
_mm512_prefetch_i64scatter_ps, _mm512_mask_prefetch_i64scatter_pd,
_mm512_mask_prefetch_i64scatter_ps): Likewise.
* config/i386/avx512vlintrin.h (_mm256_mmask_i32gather_ps,
_mm_mmask_i32gather_ps, _mm256_mmask_i32gather_pd,
_mm_mmask_i32gather_pd, _mm256_mmask_i64gather_ps,
_mm_mmask_i64gather_ps, _mm256_mmask_i64gather_pd,
_mm_mmask_i64gather_pd, _mm256_mmask_i32gather_epi32,
_mm_mmask_i32gather_epi32, _mm256_mmask_i32gather_epi64,
_mm_mmask_i32gather_epi64, _mm256_mmask_i64gather_epi32,
_mm_mmask_i64gather_epi32, _mm256_mmask_i64gather_epi64,
_mm_mmask_i64gather_epi64, _mm256_i32scatter_ps,
_mm256_mask_i32scatter_ps, _mm_i32scatter_ps, _mm_mask_i32scatter_ps,
_mm256_i32scatter_pd, _mm256_mask_i32scatter_pd, _mm_i32scatter_pd,
_mm_mask_i32scatter_pd, _mm256_i64scatter_ps,
_mm256_mask_i64scatter_ps, _mm_i64scatter_ps, _mm_mask_i64scatter_ps,
_mm256_i64scatter_pd, _mm256_mask_i64scatter_pd, _mm_i64scatter_pd,
_mm_mask_i64scatter_pd, _mm256_i32scatter_epi32,
_mm256_mask_i32scatter_epi32, _mm_i32scatter_epi32,
_mm_mask_i32scatter_epi32, _mm256_i32scatter_epi64,
_mm256_mask_i32scatter_epi64, _mm_i32scatter_epi64,
_mm_mask_i32scatter_epi64, _mm256_i64scatter_epi32,
_mm256_mask_i64scatter_epi32, _mm_i64scatter_epi32,
_mm_mask_i64scatter_epi32, _mm256_i64scatter_epi64,
_mm256_mask_i64scatter_epi64, _mm_i64scatter_epi64,
_mm_mask_i64scatter_epi64): Likewise.