From: Wei Xiao
Date: Thu, 17 Jan 2019 09:54:56 +0000 (+0000)
Subject: re PR target/88794 (fixupimm intrinsics are unusable)
X-Git-Url: https://git.libre-soc.org/?a=commitdiff_plain;h=040d2bbad88f2c690b39bb5ca4c580d8be8c3b23;p=gcc.git

re PR target/88794 (fixupimm intrinsics are unusable)

gcc/ChangeLog
2019-01-17  Wei Xiao

	PR target/88794
	Revert:

	2018-11-06  Wei Xiao

	* config/i386/avx512fintrin.h: Update VFIXUPIMM* intrinsics.
	(_mm512_fixupimm_round_pd): Update parameters and builtin.
	(_mm512_maskz_fixupimm_round_pd): Ditto.
	(_mm512_fixupimm_round_ps): Ditto.
	(_mm512_maskz_fixupimm_round_ps): Ditto.
	(_mm_fixupimm_round_sd): Ditto.
	(_mm_maskz_fixupimm_round_sd): Ditto.
	(_mm_fixupimm_round_ss): Ditto.
	(_mm_maskz_fixupimm_round_ss): Ditto.
	(_mm512_fixupimm_pd): Ditto.
	(_mm512_maskz_fixupimm_pd): Ditto.
	(_mm512_fixupimm_ps): Ditto.
	(_mm512_maskz_fixupimm_ps): Ditto.
	(_mm_fixupimm_sd): Ditto.
	(_mm_maskz_fixupimm_sd): Ditto.
	(_mm_fixupimm_ss): Ditto.
	(_mm_maskz_fixupimm_ss): Ditto.
	(_mm512_mask_fixupimm_round_pd): Update builtin.
	(_mm512_mask_fixupimm_round_ps): Ditto.
	(_mm_mask_fixupimm_round_sd): Ditto.
	(_mm_mask_fixupimm_round_ss): Ditto.
	(_mm512_mask_fixupimm_pd): Ditto.
	(_mm512_mask_fixupimm_ps): Ditto.
	(_mm_mask_fixupimm_sd): Ditto.
	(_mm_mask_fixupimm_ss): Ditto.
	* config/i386/avx512vlintrin.h:
	(_mm256_fixupimm_pd): Update parameters and builtin.
	(_mm256_maskz_fixupimm_pd): Ditto.
	(_mm256_fixupimm_ps): Ditto.
	(_mm256_maskz_fixupimm_ps): Ditto.
	(_mm_fixupimm_pd): Ditto.
	(_mm_maskz_fixupimm_pd): Ditto.
	(_mm_fixupimm_ps): Ditto.
	(_mm_maskz_fixupimm_ps): Ditto.
	(_mm256_mask_fixupimm_pd): Update builtin.
	(_mm256_mask_fixupimm_ps): Ditto.
	(_mm_mask_fixupimm_pd): Ditto.
	(_mm_mask_fixupimm_ps): Ditto.
	* config/i386/i386-builtin-types.def: Add new types and remove useless ones.
	* config/i386/i386-builtin.def: Update builtin definitions.
	* config/i386/i386.c: Handle new builtin types and remove useless ones.
	* config/i386/sse.md: Update VFIXUPIMM* patterns.
	(_fixupimm_maskz): Update.
	(_fixupimm): Update.
	(_fixupimm_mask): Update.
	(avx512f_sfixupimm_maskz): Update.
	(avx512f_sfixupimm): Update.
	(avx512f_sfixupimm_mask): Update.
	* config/i386/subst.md:
	(round_saeonly_sd_mask_operand4): Add new subst_attr.
	(round_saeonly_sd_mask_op4): Ditto.
	(round_saeonly_expand_operand5): Ditto.
	(round_saeonly_expand): Update.

gcc/testsuite/ChangeLog
2019-01-17  Wei Xiao

	PR target/88794
	Revert:
	2018-11-06  Wei Xiao

	* gcc.target/i386/avx-1.c: Update tests for VFIXUPIMM* intrinsics.
	* gcc.target/i386/avx512f-vfixupimmpd-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmpd-2.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmps-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmsd-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmsd-2.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmss-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmss-2.c: Ditto.
	* gcc.target/i386/avx512vl-vfixupimmpd-1.c: Ditto.
	* gcc.target/i386/avx512vl-vfixupimmps-1.c: Ditto.
	* gcc.target/i386/sse-13.c: Ditto.
	* gcc.target/i386/sse-14.c: Ditto.
	* gcc.target/i386/sse-22.c: Ditto.
	* gcc.target/i386/sse-23.c: Ditto.
	* gcc.target/i386/testimm-10.c: Ditto.
	* gcc.target/i386/testround-1.c: Ditto.

From-SVN: r268013
---

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index e379d2edcbe..f65833a1742 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,64 @@
+2019-01-17  Wei Xiao
+
+	PR target/88794
+	Revert:
+
+	2018-11-06  Wei Xiao
+
+	* config/i386/avx512fintrin.h: Update VFIXUPIMM* intrinsics.
+	(_mm512_fixupimm_round_pd): Update parameters and builtin.
+ (_mm512_maskz_fixupimm_round_pd): Ditto. + (_mm512_fixupimm_round_ps): Ditto. + (_mm512_maskz_fixupimm_round_ps): Ditto. + (_mm_fixupimm_round_sd): Ditto. + (_mm_maskz_fixupimm_round_sd): Ditto. + (_mm_fixupimm_round_ss): Ditto. + (_mm_maskz_fixupimm_round_ss): Ditto. + (_mm512_fixupimm_pd): Ditto. + (_mm512_maskz_fixupimm_pd): Ditto. + (_mm512_fixupimm_ps): Ditto. + (_mm512_maskz_fixupimm_ps): Ditto. + (_mm_fixupimm_sd): Ditto. + (_mm_maskz_fixupimm_sd): Ditto. + (_mm_fixupimm_ss): Ditto. + (_mm_maskz_fixupimm_ss): Ditto. + (_mm512_mask_fixupimm_round_pd): Update builtin. + (_mm512_mask_fixupimm_round_ps): Ditto. + (_mm_mask_fixupimm_round_sd): Ditto. + (_mm_mask_fixupimm_round_ss): Ditto. + (_mm512_mask_fixupimm_pd): Ditto. + (_mm512_mask_fixupimm_ps): Ditto. + (_mm_mask_fixupimm_sd): Ditto. + (_mm_mask_fixupimm_ss): Ditto. + * config/i386/avx512vlintrin.h: + (_mm256_fixupimm_pd): Update parameters and builtin. + (_mm256_maskz_fixupimm_pd): Ditto. + (_mm256_fixupimm_ps): Ditto. + (_mm256_maskz_fixupimm_ps): Ditto. + (_mm_fixupimm_pd): Ditto. + (_mm_maskz_fixupimm_pd): Ditto. + (_mm_fixupimm_ps): Ditto. + (_mm_maskz_fixupimm_ps): Ditto. + (_mm256_mask_fixupimm_pd): Update builtin. + (_mm256_mask_fixupimm_ps): Ditto. + (_mm_mask_fixupimm_pd): Ditto. + (_mm_mask_fixupimm_ps): Ditto. + * config/i386/i386-builtin-types.def: Add new types and remove useless ones. + * config/i386/i386-builtin.def: Update builtin definitions. + * config/i386/i386.c: Handle new builtin types and remove useless ones. + * config/i386/sse.md: Update VFIXUPIMM* patterns. + (_fixupimm_maskz): Update. + (_fixupimm): Update. + (_fixupimm_mask): Update. + (avx512f_sfixupimm_maskz): Update. + (avx512f_sfixupimm): Update. + (avx512f_sfixupimm_mask): Update. + * config/i386/subst.md: + (round_saeonly_sd_mask_operand4): Add new subst_attr. + (round_saeonly_sd_mask_op4): Ditto. + (round_saeonly_expand_operand5): Ditto. + (round_saeonly_expand): Update. 
+ 2019-01-17 Wei Xiao PR target/88794 diff --git a/gcc/config/i386/avx512fintrin.h b/gcc/config/i386/avx512fintrin.h index 88bee3cd599..68320c28da5 100644 --- a/gcc/config/i386/avx512fintrin.h +++ b/gcc/config/i386/avx512fintrin.h @@ -6977,132 +6977,140 @@ _mm512_maskz_shuffle_pd (__mmask8 __U, __m512d __M, __m512d __V, extern __inline __m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_fixupimm_round_pd (__m512d __A, __m512i __B, +_mm512_fixupimm_round_pd (__m512d __A, __m512d __B, __m512i __C, const int __imm, const int __R) { - return (__m512d) __builtin_ia32_fixupimmpd512 ((__v8df) __A, - (__v8di) __B, + return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A, + (__v8df) __B, + (__v8di) __C, __imm, - __R); + (__mmask8) -1, __R); } extern __inline __m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_mask_fixupimm_round_pd (__m512d __W, __mmask8 __U, __m512d __A, - __m512i __B, const int __imm, const int __R) +_mm512_mask_fixupimm_round_pd (__m512d __A, __mmask8 __U, __m512d __B, + __m512i __C, const int __imm, const int __R) { return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A, - (__v8di) __B, + (__v8df) __B, + (__v8di) __C, __imm, - (__v8df) __W, (__mmask8) __U, __R); } extern __inline __m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_maskz_fixupimm_round_pd (__mmask8 __U, __m512d __A, - __m512i __B, const int __imm, const int __R) +_mm512_maskz_fixupimm_round_pd (__mmask8 __U, __m512d __A, __m512d __B, + __m512i __C, const int __imm, const int __R) { return (__m512d) __builtin_ia32_fixupimmpd512_maskz ((__v8df) __A, - (__v8di) __B, + (__v8df) __B, + (__v8di) __C, __imm, (__mmask8) __U, __R); } extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_fixupimm_round_ps (__m512 __A, __m512i __B, +_mm512_fixupimm_round_ps (__m512 __A, __m512 __B, __m512i __C, const int __imm, const int __R) { - return (__m512) __builtin_ia32_fixupimmps512 ((__v16sf) __A, - (__v16si) __B, + return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A, + (__v16sf) __B, + (__v16si) __C, __imm, - __R); + (__mmask16) -1, __R); } extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_mask_fixupimm_round_ps (__m512 __W, __mmask16 __U, __m512 __A, - __m512i __B, const int __imm, const int __R) +_mm512_mask_fixupimm_round_ps (__m512 __A, __mmask16 __U, __m512 __B, + __m512i __C, const int __imm, const int __R) { return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A, - (__v16si) __B, + (__v16sf) __B, + (__v16si) __C, __imm, - (__v16sf) __W, (__mmask16) __U, __R); } extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_maskz_fixupimm_round_ps (__mmask16 __U, __m512 __A, - __m512i __B, const int __imm, const int __R) +_mm512_maskz_fixupimm_round_ps (__mmask16 __U, __m512 __A, __m512 __B, + __m512i __C, const int __imm, const int __R) { return (__m512) __builtin_ia32_fixupimmps512_maskz ((__v16sf) __A, - (__v16si) __B, + (__v16sf) __B, + (__v16si) __C, __imm, (__mmask16) __U, __R); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_round_sd (__m128d __A, __m128i __B, +_mm_fixupimm_round_sd (__m128d __A, __m128d __B, __m128i __C, const int __imm, const int __R) { - return (__m128d) __builtin_ia32_fixupimmsd ((__v2df) __A, - (__v2di) __B, __imm, - __R); + return (__m128d) __builtin_ia32_fixupimmsd_mask 
((__v2df) __A, + (__v2df) __B, + (__v2di) __C, __imm, + (__mmask8) -1, __R); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_round_sd (__m128d __W, __mmask8 __U, __m128d __A, - __m128i __B, const int __imm, const int __R) +_mm_mask_fixupimm_round_sd (__m128d __A, __mmask8 __U, __m128d __B, + __m128i __C, const int __imm, const int __R) { return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A, - (__v2di) __B, __imm, - (__v2df) __W, + (__v2df) __B, + (__v2di) __C, __imm, (__mmask8) __U, __R); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_round_sd (__mmask8 __U, __m128d __A, - __m128i __B, const int __imm, const int __R) +_mm_maskz_fixupimm_round_sd (__mmask8 __U, __m128d __A, __m128d __B, + __m128i __C, const int __imm, const int __R) { return (__m128d) __builtin_ia32_fixupimmsd_maskz ((__v2df) __A, - (__v2di) __B, + (__v2df) __B, + (__v2di) __C, __imm, (__mmask8) __U, __R); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_round_ss (__m128 __A, __m128i __B, +_mm_fixupimm_round_ss (__m128 __A, __m128 __B, __m128i __C, const int __imm, const int __R) { - return (__m128) __builtin_ia32_fixupimmss ((__v4sf) __A, - (__v4si) __B, __imm, - __R); + return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A, + (__v4sf) __B, + (__v4si) __C, __imm, + (__mmask8) -1, __R); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_round_ss (__m128 __W, __mmask8 __U, __m128 __A, - __m128i __B, const int __imm, const int __R) +_mm_mask_fixupimm_round_ss (__m128 __A, __mmask8 __U, __m128 __B, + __m128i __C, const int __imm, const int __R) { return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A, - (__v4si) __B, __imm, - (__v4sf) __W, + (__v4sf) __B, + (__v4si) __C, __imm, (__mmask8) __U, __R); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A, - __m128i __B, const int __imm, const int __R) +_mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A, __m128 __B, + __m128i __C, const int __imm, const int __R) { return (__m128) __builtin_ia32_fixupimmss_maskz ((__v4sf) __A, - (__v4si) __B, __imm, + (__v4sf) __B, + (__v4si) __C, __imm, (__mmask8) __U, __R); } @@ -7143,63 +7151,64 @@ _mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A, (__v16sf)(__m512)_mm512_setzero_ps(),\ (__mmask16)(U))) -#define _mm512_fixupimm_round_pd(X, Y, C, R) \ - ((__m512d)__builtin_ia32_fixupimmpd512 ((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), (R))) +#define _mm512_fixupimm_round_pd(X, Y, Z, C, R) \ + ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ + (__mmask8)(-1), (R))) -#define _mm512_mask_fixupimm_round_pd(W, U, X, Y, C, R) \ +#define _mm512_mask_fixupimm_round_pd(X, U, Y, Z, C, R) \ ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), (__v8df)(__m512d)(W), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ (__mmask8)(U), (R))) -#define _mm512_maskz_fixupimm_round_pd(U, X, Y, C, R) \ +#define _mm512_maskz_fixupimm_round_pd(U, X, Y, Z, C, R) \ ((__m512d)__builtin_ia32_fixupimmpd512_maskz ((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ (__mmask8)(U), (R))) -#define 
_mm512_fixupimm_round_ps(X, Y, C, R) \ - ((__m512)__builtin_ia32_fixupimmps512 ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), \ - (R))) +#define _mm512_fixupimm_round_ps(X, Y, Z, C, R) \ + ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \ + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ + (__mmask16)(-1), (R))) -#define _mm512_mask_fixupimm_round_ps(W, U, X, Y, C, R) \ +#define _mm512_mask_fixupimm_round_ps(X, U, Y, Z, C, R) \ ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), \ - (__v16sf)(__m512)(W), (__mmask16)(U), (R))) + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ + (__mmask16)(U), (R))) -#define _mm512_maskz_fixupimm_round_ps(U, X, Y, C, R) \ +#define _mm512_maskz_fixupimm_round_ps(U, X, Y, Z, C, R) \ ((__m512)__builtin_ia32_fixupimmps512_maskz ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), \ + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ (__mmask16)(U), (R))) -#define _mm_fixupimm_round_sd(X, Y, C, R) \ - ((__m128d)__builtin_ia32_fixupimmsd ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ - (R))) +#define _mm_fixupimm_round_sd(X, Y, Z, C, R) \ + ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \ + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ + (__mmask8)(-1), (R))) -#define _mm_mask_fixupimm_round_sd(W, U, X, Y, C, R) \ +#define _mm_mask_fixupimm_round_sd(X, U, Y, Z, C, R) \ ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ - (__v2df)(__m128d)(W), (__mmask8)(U), (R))) + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ + (__mmask8)(U), (R))) -#define _mm_maskz_fixupimm_round_sd(U, X, Y, C, R) \ +#define _mm_maskz_fixupimm_round_sd(U, X, Y, Z, C, R) \ ((__m128d)__builtin_ia32_fixupimmsd_maskz ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ (__mmask8)(U), (R))) -#define _mm_fixupimm_round_ss(X, Y, C, R) \ - ((__m128)__builtin_ia32_fixupimmss ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ - (R))) +#define _mm_fixupimm_round_ss(X, Y, Z, C, R) \ + ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \ + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ + (__mmask8)(-1), (R))) -#define _mm_mask_fixupimm_round_ss(W, U, X, Y, C, R) \ +#define _mm_mask_fixupimm_round_ss(X, U, Y, Z, C, R) \ ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ - (__v4sf)(__m128)(W), (__mmask8)(U), (R))) + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ + (__mmask8)(U), (R))) -#define _mm_maskz_fixupimm_round_ss(U, X, Y, C, R) \ +#define _mm_maskz_fixupimm_round_ss(U, X, Y, Z, C, R) \ ((__m128)__builtin_ia32_fixupimmss_maskz ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ (__mmask8)(U), (R))) #endif @@ -13206,34 +13215,37 @@ _mm512_maskz_cvtepu32_ps (__mmask16 __U, __m512i __A) #ifdef __OPTIMIZE__ extern __inline __m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_fixupimm_pd (__m512d __A, __m512i __B, const int __imm) +_mm512_fixupimm_pd (__m512d __A, __m512d __B, __m512i __C, const int __imm) { - return (__m512d) __builtin_ia32_fixupimmpd512 ((__v8df) __A, - (__v8di) __B, + return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A, + (__v8df) __B, + (__v8di) __C, __imm, + (__mmask8) -1, _MM_FROUND_CUR_DIRECTION); } extern __inline 
__m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_mask_fixupimm_pd (__m512d __W, __mmask8 __U, __m512d __A, - __m512i __B, const int __imm) +_mm512_mask_fixupimm_pd (__m512d __A, __mmask8 __U, __m512d __B, + __m512i __C, const int __imm) { return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A, - (__v8di) __B, + (__v8df) __B, + (__v8di) __C, __imm, - (__v8df) __W, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); } extern __inline __m512d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A, - __m512i __B, const int __imm) +_mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A, __m512d __B, + __m512i __C, const int __imm) { return (__m512d) __builtin_ia32_fixupimmpd512_maskz ((__v8df) __A, - (__v8di) __B, + (__v8df) __B, + (__v8di) __C, __imm, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); @@ -13241,34 +13253,37 @@ _mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A, extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_fixupimm_ps (__m512 __A, __m512i __B, const int __imm) +_mm512_fixupimm_ps (__m512 __A, __m512 __B, __m512i __C, const int __imm) { - return (__m512) __builtin_ia32_fixupimmps512 ((__v16sf) __A, - (__v16si) __B, + return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A, + (__v16sf) __B, + (__v16si) __C, __imm, + (__mmask16) -1, _MM_FROUND_CUR_DIRECTION); } extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_mask_fixupimm_ps (__m512 __W, __mmask16 __U, __m512 __A, - __m512i __B, const int __imm) +_mm512_mask_fixupimm_ps (__m512 __A, __mmask16 __U, __m512 __B, + __m512i __C, const int __imm) { return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A, - (__v16si) __B, + (__v16sf) __B, + (__v16si) __C, __imm, - (__v16sf) __W, (__mmask16) __U, _MM_FROUND_CUR_DIRECTION); } extern __inline __m512 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A, - __m512i __B, const int __imm) +_mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A, __m512 __B, + __m512i __C, const int __imm) { return (__m512) __builtin_ia32_fixupimmps512_maskz ((__v16sf) __A, - (__v16si) __B, + (__v16sf) __B, + (__v16si) __C, __imm, (__mmask16) __U, _MM_FROUND_CUR_DIRECTION); @@ -13276,32 +13291,35 @@ _mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A, extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_sd (__m128d __A, __m128i __B, const int __imm) +_mm_fixupimm_sd (__m128d __A, __m128d __B, __m128i __C, const int __imm) { - return (__m128d) __builtin_ia32_fixupimmsd ((__v2df) __A, - (__v2di) __B, __imm, + return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A, + (__v2df) __B, + (__v2di) __C, __imm, + (__mmask8) -1, _MM_FROUND_CUR_DIRECTION); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_sd (__m128d __W, __mmask8 __U, __m128d __A, - __m128i __B, const int __imm) +_mm_mask_fixupimm_sd (__m128d __A, __mmask8 __U, __m128d __B, + __m128i __C, const int __imm) { return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A, - (__v2di) __B, __imm, - (__v2df) __W, + (__v2df) __B, + (__v2di) __C, __imm, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A, - __m128i __B, const int __imm) 
+_mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A, __m128d __B, + __m128i __C, const int __imm) { return (__m128d) __builtin_ia32_fixupimmsd_maskz ((__v2df) __A, - (__v2di) __B, + (__v2df) __B, + (__v2di) __C, __imm, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); @@ -13309,94 +13327,97 @@ _mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A, extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_ss (__m128 __A, __m128i __B, const int __imm) +_mm_fixupimm_ss (__m128 __A, __m128 __B, __m128i __C, const int __imm) { - return (__m128) __builtin_ia32_fixupimmss ((__v4sf) __A, - (__v4si) __B, __imm, + return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A, + (__v4sf) __B, + (__v4si) __C, __imm, + (__mmask8) -1, _MM_FROUND_CUR_DIRECTION); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_ss (__m128 __W, __mmask8 __U, __m128 __A, - __m128i __B, const int __imm) +_mm_mask_fixupimm_ss (__m128 __A, __mmask8 __U, __m128 __B, + __m128i __C, const int __imm) { return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A, - (__v4si) __B, __imm, - (__v4sf) __W, + (__v4sf) __B, + (__v4si) __C, __imm, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_ss (__mmask8 __U, __m128 __A, - __m128i __B, const int __imm) +_mm_maskz_fixupimm_ss (__mmask8 __U, __m128 __A, __m128 __B, + __m128i __C, const int __imm) { return (__m128) __builtin_ia32_fixupimmss_maskz ((__v4sf) __A, - (__v4si) __B, __imm, + (__v4sf) __B, + (__v4si) __C, __imm, (__mmask8) __U, _MM_FROUND_CUR_DIRECTION); } #else -#define _mm512_fixupimm_pd(X, Y, C) \ - ((__m512d)__builtin_ia32_fixupimmpd512((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), \ - _MM_FROUND_CUR_DIRECTION)) +#define _mm512_fixupimm_pd(X, Y, Z, C) \ + ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ + (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION)) -#define _mm512_mask_fixupimm_pd(W, U, X, Y, C) \ +#define _mm512_mask_fixupimm_pd(X, U, Y, Z, C) \ ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), (__v8df)(__m512d)(W), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm512_maskz_fixupimm_pd(U, X, Y, C) \ +#define _mm512_maskz_fixupimm_pd(U, X, Y, Z, C) \ ((__m512d)__builtin_ia32_fixupimmpd512_maskz ((__v8df)(__m512d)(X), \ - (__v8di)(__m512i)(Y), (int)(C), \ + (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm512_fixupimm_ps(X, Y, C) \ - ((__m512)__builtin_ia32_fixupimmps512 ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), \ - _MM_FROUND_CUR_DIRECTION)) +#define _mm512_fixupimm_ps(X, Y, Z, C) \ + ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \ + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ + (__mmask16)(-1), _MM_FROUND_CUR_DIRECTION)) -#define _mm512_mask_fixupimm_ps(W, U, X, Y, C) \ +#define _mm512_mask_fixupimm_ps(X, U, Y, Z, C) \ ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), (__v16sf)(__m512)(W), \ + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ (__mmask16)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm512_maskz_fixupimm_ps(U, X, Y, C) \ +#define _mm512_maskz_fixupimm_ps(U, X, Y, Z, C) \ 
((__m512)__builtin_ia32_fixupimmps512_maskz ((__v16sf)(__m512)(X), \ - (__v16si)(__m512i)(Y), (int)(C), \ + (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \ (__mmask16)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm_fixupimm_sd(X, Y, C) \ - ((__m128d)__builtin_ia32_fixupimmsd ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ - _MM_FROUND_CUR_DIRECTION)) +#define _mm_fixupimm_sd(X, Y, Z, C) \ + ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \ + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ + (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION)) -#define _mm_mask_fixupimm_sd(W, U, X, Y, C) \ +#define _mm_mask_fixupimm_sd(X, U, Y, Z, C) \ ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), (__v2df)(__m128d)(W), \ + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm_maskz_fixupimm_sd(U, X, Y, C) \ +#define _mm_maskz_fixupimm_sd(U, X, Y, Z, C) \ ((__m128d)__builtin_ia32_fixupimmsd_maskz ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ + (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm_fixupimm_ss(X, Y, C) \ - ((__m128)__builtin_ia32_fixupimmss ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ - _MM_FROUND_CUR_DIRECTION)) +#define _mm_fixupimm_ss(X, Y, Z, C) \ + ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \ + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ + (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION)) -#define _mm_mask_fixupimm_ss(W, U, X, Y, C) \ +#define _mm_mask_fixupimm_ss(X, U, Y, Z, C) \ ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), (__v4sf)(__m128)(W), \ + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) -#define _mm_maskz_fixupimm_ss(U, X, Y, C) \ +#define _mm_maskz_fixupimm_ss(U, X, Y, Z, C) \ ((__m128)__builtin_ia32_fixupimmss_maskz ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ + (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \ (__mmask8)(U), _MM_FROUND_CUR_DIRECTION)) #endif diff --git a/gcc/config/i386/avx512vlintrin.h b/gcc/config/i386/avx512vlintrin.h index b2b3a4b8f35..3eaf817f898 100644 --- a/gcc/config/i386/avx512vlintrin.h +++ b/gcc/config/i386/avx512vlintrin.h @@ -10242,131 +10242,143 @@ _mm256_maskz_shuffle_f32x4 (__mmask8 __U, __m256 __A, __m256 __B, extern __inline __m256d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_fixupimm_pd (__m256d __A, __m256i __B, +_mm256_fixupimm_pd (__m256d __A, __m256d __B, __m256i __C, const int __imm) { - return (__m256d) __builtin_ia32_fixupimmpd256 ((__v4df) __A, - (__v4di) __B, - __imm); + return (__m256d) __builtin_ia32_fixupimmpd256_mask ((__v4df) __A, + (__v4df) __B, + (__v4di) __C, + __imm, + (__mmask8) -1); } extern __inline __m256d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_mask_fixupimm_pd (__m256d __W, __mmask8 __U, __m256d __A, - __m256i __B, const int __imm) +_mm256_mask_fixupimm_pd (__m256d __A, __mmask8 __U, __m256d __B, + __m256i __C, const int __imm) { return (__m256d) __builtin_ia32_fixupimmpd256_mask ((__v4df) __A, - (__v4di) __B, + (__v4df) __B, + (__v4di) __C, __imm, - (__v4df) __W, (__mmask8) __U); } extern __inline __m256d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_maskz_fixupimm_pd (__mmask8 __U, __m256d __A, - __m256i __B, const int __imm) 
+_mm256_maskz_fixupimm_pd (__mmask8 __U, __m256d __A, __m256d __B, + __m256i __C, const int __imm) { return (__m256d) __builtin_ia32_fixupimmpd256_maskz ((__v4df) __A, - (__v4di) __B, + (__v4df) __B, + (__v4di) __C, __imm, (__mmask8) __U); } extern __inline __m256 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_fixupimm_ps (__m256 __A, __m256i __B, +_mm256_fixupimm_ps (__m256 __A, __m256 __B, __m256i __C, const int __imm) { - return (__m256) __builtin_ia32_fixupimmps256 ((__v8sf) __A, - (__v8si) __B, - __imm); + return (__m256) __builtin_ia32_fixupimmps256_mask ((__v8sf) __A, + (__v8sf) __B, + (__v8si) __C, + __imm, + (__mmask8) -1); } extern __inline __m256 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_mask_fixupimm_ps (__m256 __W, __mmask8 __U, __m256 __A, - __m256i __B, const int __imm) +_mm256_mask_fixupimm_ps (__m256 __A, __mmask8 __U, __m256 __B, + __m256i __C, const int __imm) { return (__m256) __builtin_ia32_fixupimmps256_mask ((__v8sf) __A, - (__v8si) __B, + (__v8sf) __B, + (__v8si) __C, __imm, - (__v8sf) __W, (__mmask8) __U); } extern __inline __m256 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm256_maskz_fixupimm_ps (__mmask8 __U, __m256 __A, - __m256i __B, const int __imm) +_mm256_maskz_fixupimm_ps (__mmask8 __U, __m256 __A, __m256 __B, + __m256i __C, const int __imm) { return (__m256) __builtin_ia32_fixupimmps256_maskz ((__v8sf) __A, - (__v8si) __B, + (__v8sf) __B, + (__v8si) __C, __imm, (__mmask8) __U); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_pd (__m128d __A, __m128i __B, +_mm_fixupimm_pd (__m128d __A, __m128d __B, __m128i __C, const int __imm) { - return (__m128d) __builtin_ia32_fixupimmpd128 ((__v2df) __A, - (__v2di) __B, - __imm); + return (__m128d) __builtin_ia32_fixupimmpd128_mask ((__v2df) __A, + (__v2df) __B, + (__v2di) __C, + __imm, + (__mmask8) -1); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_pd (__m128d __W, __mmask8 __U, __m128d __A, - __m128i __B, const int __imm) +_mm_mask_fixupimm_pd (__m128d __A, __mmask8 __U, __m128d __B, + __m128i __C, const int __imm) { return (__m128d) __builtin_ia32_fixupimmpd128_mask ((__v2df) __A, - (__v2di) __B, + (__v2df) __B, + (__v2di) __C, __imm, - (__v2df) __W, (__mmask8) __U); } extern __inline __m128d __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_pd (__mmask8 __U, __m128d __A, - __m128i __B, const int __imm) +_mm_maskz_fixupimm_pd (__mmask8 __U, __m128d __A, __m128d __B, + __m128i __C, const int __imm) { return (__m128d) __builtin_ia32_fixupimmpd128_maskz ((__v2df) __A, - (__v2di) __B, + (__v2df) __B, + (__v2di) __C, __imm, (__mmask8) __U); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_fixupimm_ps (__m128 __A, __m128i __B, const int __imm) +_mm_fixupimm_ps (__m128 __A, __m128 __B, __m128i __C, const int __imm) { - return (__m128) __builtin_ia32_fixupimmps128 ((__v4sf) __A, - (__v4si) __B, - __imm); + return (__m128) __builtin_ia32_fixupimmps128_mask ((__v4sf) __A, + (__v4sf) __B, + (__v4si) __C, + __imm, + (__mmask8) -1); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_mask_fixupimm_ps (__m128 __W, __mmask8 __U, __m128 __A, - __m128i __B, const int __imm) +_mm_mask_fixupimm_ps (__m128 __A, __mmask8 __U, __m128 __B, + __m128i __C, const int __imm) { return (__m128) 
__builtin_ia32_fixupimmps128_mask ((__v4sf) __A, - (__v4si) __B, + (__v4sf) __B, + (__v4si) __C, __imm, - (__v4sf) __W, (__mmask8) __U); } extern __inline __m128 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__)) -_mm_maskz_fixupimm_ps (__mmask8 __U, __m128 __A, - __m128i __B, const int __imm) +_mm_maskz_fixupimm_ps (__mmask8 __U, __m128 __A, __m128 __B, + __m128i __C, const int __imm) { return (__m128) __builtin_ia32_fixupimmps128_maskz ((__v4sf) __A, - (__v4si) __B, + (__v4sf) __B, + (__v4si) __C, __imm, (__mmask8) __U); } @@ -12645,74 +12657,78 @@ _mm256_permutex_pd (__m256d __X, const int __M) (__v4sf)(__m128)_mm_setzero_ps (), \ (__mmask8)(U))) -#define _mm256_fixupimm_pd(X, Y, C) \ +#define _mm256_fixupimm_pd(X, Y, Z, C) \ ((__m256d)__builtin_ia32_fixupimmpd256_mask ((__v4df)(__m256d)(X), \ - (__v4di)(__m256i)(Y), (int)(C), \ + (__v4df)(__m256d)(Y), \ + (__v4di)(__m256i)(Z), (int)(C), \ (__mmask8)(-1))) -#define _mm256_mask_fixupimm_pd(W, U, X, Y, C) \ +#define _mm256_mask_fixupimm_pd(X, U, Y, Z, C) \ ((__m256d)__builtin_ia32_fixupimmpd256_mask ((__v4df)(__m256d)(X), \ - (__v4di)(__m256i)(Y), (int)(C), \ - (__v4df)(__m256d)(W), \ + (__v4df)(__m256d)(Y), \ + (__v4di)(__m256i)(Z), (int)(C), \ (__mmask8)(U))) -#define _mm256_maskz_fixupimm_pd(U, X, Y, C) \ +#define _mm256_maskz_fixupimm_pd(U, X, Y, Z, C) \ ((__m256d)__builtin_ia32_fixupimmpd256_maskz ((__v4df)(__m256d)(X), \ - (__v4di)(__m256i)(Y), \ - (int)(C),\ + (__v4df)(__m256d)(Y), \ + (__v4di)(__m256i)(Z), (int)(C),\ (__mmask8)(U))) -#define _mm256_fixupimm_ps(X, Y, C) \ +#define _mm256_fixupimm_ps(X, Y, Z, C) \ ((__m256)__builtin_ia32_fixupimmps256_mask ((__v8sf)(__m256)(X), \ - (__v8si)(__m256i)(Y), (int)(C), \ + (__v8sf)(__m256)(Y), \ + (__v8si)(__m256i)(Z), (int)(C), \ (__mmask8)(-1))) -#define _mm256_mask_fixupimm_ps(W, U, X, Y, C) \ +#define _mm256_mask_fixupimm_ps(X, U, Y, Z, C) \ ((__m256)__builtin_ia32_fixupimmps256_mask ((__v8sf)(__m256)(X), \ - (__v8si)(__m256i)(Y), (int)(C), \ - (__v8sf)(__m256)(W), \ + (__v8sf)(__m256)(Y), \ + (__v8si)(__m256i)(Z), (int)(C), \ (__mmask8)(U))) -#define _mm256_maskz_fixupimm_ps(U, X, Y, C) \ +#define _mm256_maskz_fixupimm_ps(U, X, Y, Z, C) \ ((__m256)__builtin_ia32_fixupimmps256_maskz ((__v8sf)(__m256)(X), \ - (__v8si)(__m256i)(Y), \ - (int)(C),\ + (__v8sf)(__m256)(Y), \ + (__v8si)(__m256i)(Z), (int)(C),\ (__mmask8)(U))) -#define _mm_fixupimm_pd(X, Y, C) \ +#define _mm_fixupimm_pd(X, Y, Z, C) \ ((__m128d)__builtin_ia32_fixupimmpd128_mask ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ + (__v2df)(__m128d)(Y), \ + (__v2di)(__m128i)(Z), (int)(C), \ (__mmask8)(-1))) -#define _mm_mask_fixupimm_pd(W, U, X, Y, C) \ +#define _mm_mask_fixupimm_pd(X, U, Y, Z, C) \ ((__m128d)__builtin_ia32_fixupimmpd128_mask ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), (int)(C), \ - (__v2df)(__m128d)(W), \ + (__v2df)(__m128d)(Y), \ + (__v2di)(__m128i)(Z), (int)(C), \ (__mmask8)(U))) -#define _mm_maskz_fixupimm_pd(U, X, Y, C) \ +#define _mm_maskz_fixupimm_pd(U, X, Y, Z, C) \ ((__m128d)__builtin_ia32_fixupimmpd128_maskz ((__v2df)(__m128d)(X), \ - (__v2di)(__m128i)(Y), \ - (int)(C),\ + (__v2df)(__m128d)(Y), \ + (__v2di)(__m128i)(Z), (int)(C),\ (__mmask8)(U))) -#define _mm_fixupimm_ps(X, Y, C) \ +#define _mm_fixupimm_ps(X, Y, Z, C) \ ((__m128)__builtin_ia32_fixupimmps128_mask ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C), \ + (__v4sf)(__m128)(Y), \ + (__v4si)(__m128i)(Z), (int)(C), \ (__mmask8)(-1))) -#define _mm_mask_fixupimm_ps(W, U, X, Y, C) \ +#define 
_mm_mask_fixupimm_ps(X, U, Y, Z, C) \ ((__m128)__builtin_ia32_fixupimmps128_mask ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), (int)(C),\ - (__v4sf)(__m128)(W), \ + (__v4sf)(__m128)(Y), \ + (__v4si)(__m128i)(Z), (int)(C),\ (__mmask8)(U))) -#define _mm_maskz_fixupimm_ps(U, X, Y, C) \ +#define _mm_maskz_fixupimm_ps(U, X, Y, Z, C) \ ((__m128)__builtin_ia32_fixupimmps128_maskz ((__v4sf)(__m128)(X), \ - (__v4si)(__m128i)(Y), \ - (int)(C),\ + (__v4sf)(__m128)(Y), \ + (__v4si)(__m128i)(Z), (int)(C),\ (__mmask8)(U))) #define _mm256_mask_srli_epi32(W, U, A, B) \ diff --git a/gcc/config/i386/i386-builtin-types.def b/gcc/config/i386/i386-builtin-types.def index 61c9e6e11f0..dfe13adb95a 100644 --- a/gcc/config/i386/i386-builtin-types.def +++ b/gcc/config/i386/i386-builtin-types.def @@ -444,6 +444,9 @@ DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, INT, V8DF, UQI) DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, INT, V8DF, QI, INT) DEF_FUNCTION_TYPE (V8DF, V8DF, INT, V8DF, UQI) DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, V8DI, INT) +DEF_FUNCTION_TYPE (V4DF, V4DF, V4DF, V4DI, INT, UQI) +DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, V2DI, INT, UQI) +DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, V8DI, INT, QI, INT) DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF) DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT) DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT, V16SF, UHI) @@ -451,6 +454,11 @@ DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT, V16SF, HI, INT) DEF_FUNCTION_TYPE (V16SF, V16SF, INT, V16SF, UHI) DEF_FUNCTION_TYPE (V16SI, V16SI, V4SI, INT, V16SI, UHI) DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, V16SI, INT) +DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, V16SI, INT, HI, INT) +DEF_FUNCTION_TYPE (V8SF, V8SF, V8SF, V8SI, INT, UQI) +DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SI, INT, UQI) +DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SI, INT, QI, INT) +DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, V2DI, INT, QI, INT) DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, INT, V2DF, UQI, INT) DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, INT, V4SF, UQI, INT) DEF_FUNCTION_TYPE (V16SF, V16SF, V4SF, INT) @@ -545,9 +553,6 @@ DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SF, V4SF, V4SF, PCV4SF, V4SF, UQI) DEF_FUNCTION_TYPE (V16SI, V16SI, V16SI, V16SI, V16SI, V16SI, PCV4SI, V16SI, UHI) DEF_FUNCTION_TYPE (V16SI, V16SI, V16SI, V16SI, V16SI, V16SI, PCV4SI) -DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT) # Instructions returning mask DEF_FUNCTION_TYPE (UCHAR, UQI, UQI, PUCHAR) @@ -982,15 +987,6 @@ DEF_FUNCTION_TYPE (V8QI, QI, QI, QI, QI, QI, QI, QI, QI) DEF_FUNCTION_TYPE (UCHAR, UCHAR, UINT, UINT, PUNSIGNED) DEF_FUNCTION_TYPE (UCHAR, UCHAR, ULONGLONG, ULONGLONG, PULONGLONG) -DEF_FUNCTION_TYPE (V4DF, V4DF, V4DI, INT, V4DF, UQI) -DEF_FUNCTION_TYPE (V4DF, V4DF, V4DI, INT, UQI) -DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT, V8SF, UQI) -DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT, UQI) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, V2DF, UQI) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, UQI) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, V4SF, UQI) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, UQI) - # Instructions with rounding DEF_FUNCTION_TYPE (UINT64, V2DF, INT) DEF_FUNCTION_TYPE (UINT64, V4SF, INT) @@ -1131,19 +1127,6 @@ DEF_FUNCTION_TYPE (VOID, QI, V8DI, PCVOID, INT, INT) DEF_FUNCTION_TYPE (VOID, PV8QI, V8HI, UQI) DEF_FUNCTION_TYPE (VOID, PV16QI, V16HI, UHI) -DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, INT) -DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, V8DF, QI, INT) -DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, QI, INT) -DEF_FUNCTION_TYPE (V16SF, V16SF, V16SI, INT, INT) -DEF_FUNCTION_TYPE 
(V16SF, V16SF, V16SI, INT, V16SF, HI, INT) -DEF_FUNCTION_TYPE (V16SF, V16SF, V16SI, INT, HI, INT) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, INT) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, V2DF, QI, INT) -DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, QI, INT) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, INT) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, V4SF, QI, INT) -DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, QI, INT) - DEF_FUNCTION_TYPE_ALIAS (V2DF_FTYPE_V2DF, ROUND) DEF_FUNCTION_TYPE_ALIAS (V4DF_FTYPE_V4DF, ROUND) DEF_FUNCTION_TYPE_ALIAS (V8DF_FTYPE_V8DF, ROUND) diff --git a/gcc/config/i386/i386-builtin.def b/gcc/config/i386/i386-builtin.def index 322be4bb84a..42959ced4dd 100644 --- a/gcc/config/i386/i386-builtin.def +++ b/gcc/config/i386/i386-builtin.def @@ -1797,18 +1797,14 @@ BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv8sf_mask, "__builtin_i BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv4df_mask, "__builtin_ia32_getexppd256_mask", IX86_BUILTIN_GETEXPPD256, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_UQI) BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv4sf_mask, "__builtin_ia32_getexpps128_mask", IX86_BUILTIN_GETEXPPS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_UQI) BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv2df_mask, "__builtin_ia32_getexppd128_mask", IX86_BUILTIN_GETEXPPD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df, "__builtin_ia32_fixupimmpd256", IX86_BUILTIN_FIXUPIMMPD256, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_mask, "__builtin_ia32_fixupimmpd256_mask", IX86_BUILTIN_FIXUPIMMPD256_MASK, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT_V4DF_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_maskz, "__builtin_ia32_fixupimmpd256_maskz", IX86_BUILTIN_FIXUPIMMPD256_MASKZ, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf, "__builtin_ia32_fixupimmps256", IX86_BUILTIN_FIXUPIMMPS256, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_mask, "__builtin_ia32_fixupimmps256_mask", IX86_BUILTIN_FIXUPIMMPS256_MASK, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT_V8SF_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_maskz, "__builtin_ia32_fixupimmps256_maskz", IX86_BUILTIN_FIXUPIMMPS256_MASKZ, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df, "__builtin_ia32_fixupimmpd128", IX86_BUILTIN_FIXUPIMMPD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_mask, "__builtin_ia32_fixupimmpd128_mask", IX86_BUILTIN_FIXUPIMMPD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_V2DF_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_maskz, "__builtin_ia32_fixupimmpd128_maskz", IX86_BUILTIN_FIXUPIMMPD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf, "__builtin_ia32_fixupimmps128", IX86_BUILTIN_FIXUPIMMPS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_mask, "__builtin_ia32_fixupimmps128_mask", IX86_BUILTIN_FIXUPIMMPS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_V4SF_UQI) -BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_maskz, "__builtin_ia32_fixupimmps128_maskz", IX86_BUILTIN_FIXUPIMMPS128_MASKZ, UNKNOWN, (int) 
V4SF_FTYPE_V4SF_V4SI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_mask, "__builtin_ia32_fixupimmpd256_mask", IX86_BUILTIN_FIXUPIMMPD256_MASK, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_maskz, "__builtin_ia32_fixupimmpd256_maskz", IX86_BUILTIN_FIXUPIMMPD256_MASKZ, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_mask, "__builtin_ia32_fixupimmps256_mask", IX86_BUILTIN_FIXUPIMMPS256_MASK, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_maskz, "__builtin_ia32_fixupimmps256_maskz", IX86_BUILTIN_FIXUPIMMPS256_MASKZ, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_mask, "__builtin_ia32_fixupimmpd128_mask", IX86_BUILTIN_FIXUPIMMPD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_maskz, "__builtin_ia32_fixupimmpd128_maskz", IX86_BUILTIN_FIXUPIMMPD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_mask, "__builtin_ia32_fixupimmps128_mask", IX86_BUILTIN_FIXUPIMMPS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI) +BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_maskz, "__builtin_ia32_fixupimmps128_maskz", IX86_BUILTIN_FIXUPIMMPS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI) BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv4di2_mask, "__builtin_ia32_pabsq256_mask", IX86_BUILTIN_PABSQ256, UNKNOWN, (int) V4DI_FTYPE_V4DI_V4DI_UQI) BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv2di2_mask, "__builtin_ia32_pabsq128_mask", IX86_BUILTIN_PABSQ128, UNKNOWN, (int) V2DI_FTYPE_V2DI_V2DI_UQI) BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv8si2_mask, "__builtin_ia32_pabsd256_mask", IX86_BUILTIN_PABSD256_MASK, UNKNOWN, (int) V8SI_FTYPE_V8SI_V8SI_UQI) @@ -2706,18 +2702,14 @@ BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse2_vmdivv2df3_round, "__builtin_ia32_ BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse2_vmdivv2df3_mask_round, "__builtin_ia32_divsd_mask_round", IX86_BUILTIN_DIVSD_MASK_ROUND, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DF_UQI_INT) BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse_vmdivv4sf3_round, "__builtin_ia32_divss_round", IX86_BUILTIN_DIVSS_ROUND, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_INT) BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse_vmdivv4sf3_mask_round, "__builtin_ia32_divss_mask_round", IX86_BUILTIN_DIVSS_MASK_ROUND, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SF_UQI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_round, "__builtin_ia32_fixupimmpd512", IX86_BUILTIN_FIXUPIMMPD512, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_mask_round, "__builtin_ia32_fixupimmpd512_mask", IX86_BUILTIN_FIXUPIMMPD512_MASK, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_V8DF_QI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_maskz_round, "__builtin_ia32_fixupimmpd512_maskz", IX86_BUILTIN_FIXUPIMMPD512_MASKZ, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_QI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_round, "__builtin_ia32_fixupimmps512", IX86_BUILTIN_FIXUPIMMPS512, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_mask_round, "__builtin_ia32_fixupimmps512_mask", 
IX86_BUILTIN_FIXUPIMMPS512_MASK, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_V16SF_HI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_maskz_round, "__builtin_ia32_fixupimmps512_maskz", IX86_BUILTIN_FIXUPIMMPS512_MASKZ, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_HI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_round, "__builtin_ia32_fixupimmsd", IX86_BUILTIN_FIXUPIMMSD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_mask_round, "__builtin_ia32_fixupimmsd_mask", IX86_BUILTIN_FIXUPIMMSD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_V2DF_QI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_maskz_round, "__builtin_ia32_fixupimmsd_maskz", IX86_BUILTIN_FIXUPIMMSD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_QI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_round, "__builtin_ia32_fixupimmss", IX86_BUILTIN_FIXUPIMMSS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_mask_round, "__builtin_ia32_fixupimmss_mask", IX86_BUILTIN_FIXUPIMMSS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_V4SF_QI_INT) -BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_maskz_round, "__builtin_ia32_fixupimmss_maskz", IX86_BUILTIN_FIXUPIMMSS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_mask_round, "__builtin_ia32_fixupimmpd512_mask", IX86_BUILTIN_FIXUPIMMPD512_MASK, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_maskz_round, "__builtin_ia32_fixupimmpd512_maskz", IX86_BUILTIN_FIXUPIMMPD512_MASKZ, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_mask_round, "__builtin_ia32_fixupimmps512_mask", IX86_BUILTIN_FIXUPIMMPS512_MASK, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_maskz_round, "__builtin_ia32_fixupimmps512_maskz", IX86_BUILTIN_FIXUPIMMPS512_MASKZ, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_mask_round, "__builtin_ia32_fixupimmsd_mask", IX86_BUILTIN_FIXUPIMMSD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_maskz_round, "__builtin_ia32_fixupimmsd_maskz", IX86_BUILTIN_FIXUPIMMSD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_mask_round, "__builtin_ia32_fixupimmss_mask", IX86_BUILTIN_FIXUPIMMSS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT) +BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_maskz_round, "__builtin_ia32_fixupimmss_maskz", IX86_BUILTIN_FIXUPIMMSS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT) BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_getexpv8df_mask_round, "__builtin_ia32_getexppd512_mask", IX86_BUILTIN_GETEXPPD512, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_QI_INT) BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_getexpv16sf_mask_round, "__builtin_ia32_getexpps512_mask", IX86_BUILTIN_GETEXPPS512, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_HI_INT) BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sgetexpv2df_round, "__builtin_ia32_getexpsd128_round", IX86_BUILTIN_GETEXPSD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_INT) diff --git 
a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c index b0b758096b4..112a504c85a 100644 --- a/gcc/config/i386/i386.c +++ b/gcc/config/i386/i386.c @@ -35142,10 +35142,6 @@ ix86_expand_args_builtin (const struct builtin_description *d, case V32HI_FTYPE_V32HI_V32HI_INT: case V16SI_FTYPE_V16SI_V16SI_INT: case V8DI_FTYPE_V8DI_V8DI_INT: - case V4DF_FTYPE_V4DF_V4DI_INT: - case V8SF_FTYPE_V8SF_V8SI_INT: - case V2DF_FTYPE_V2DF_V2DI_INT: - case V4SF_FTYPE_V4SF_V4SI_INT: nargs = 3; nargs_constant = 1; break; @@ -35301,10 +35297,6 @@ ix86_expand_args_builtin (const struct builtin_description *d, break; case UQI_FTYPE_V8DI_V8DI_INT_UQI: case UHI_FTYPE_V16SI_V16SI_INT_UHI: - case V4DF_FTYPE_V4DF_V4DI_INT_UQI: - case V8SF_FTYPE_V8SF_V8SI_INT_UQI: - case V2DF_FTYPE_V2DF_V2DI_INT_UQI: - case V4SF_FTYPE_V4SF_V4SI_INT_UQI: mask_pos = 1; nargs = 4; nargs_constant = 1; @@ -35370,17 +35362,17 @@ ix86_expand_args_builtin (const struct builtin_description *d, case V8SI_FTYPE_V8SI_V4SI_INT_V8SI_UQI: case V4DI_FTYPE_V4DI_V2DI_INT_V4DI_UQI: case V4DF_FTYPE_V4DF_V2DF_INT_V4DF_UQI: - case V4DF_FTYPE_V4DF_V4DI_INT_V4DF_UQI: - case V8SF_FTYPE_V8SF_V8SI_INT_V8SF_UQI: - case V2DF_FTYPE_V2DF_V2DI_INT_V2DF_UQI: - case V4SF_FTYPE_V4SF_V4SI_INT_V4SF_UQI: nargs = 5; mask_pos = 2; nargs_constant = 1; break; case V8DI_FTYPE_V8DI_V8DI_V8DI_INT_UQI: case V16SI_FTYPE_V16SI_V16SI_V16SI_INT_UHI: + case V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI: + case V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI: + case V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI: case V8SI_FTYPE_V8SI_V8SI_V8SI_INT_UQI: + case V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI: case V4DI_FTYPE_V4DI_V4DI_V4DI_INT_UQI: case V4SI_FTYPE_V4SI_V4SI_V4SI_INT_UQI: case V2DI_FTYPE_V2DI_V2DI_V2DI_INT_UQI: @@ -35844,10 +35836,6 @@ ix86_expand_round_builtin (const struct builtin_description *d, break; case V4SF_FTYPE_V4SF_V4SF_INT_INT: case V2DF_FTYPE_V2DF_V2DF_INT_INT: - case V8DF_FTYPE_V8DF_V8DI_INT_INT: - case V16SF_FTYPE_V16SF_V16SI_INT_INT: - case V2DF_FTYPE_V2DF_V2DI_INT_INT: - case V4SF_FTYPE_V4SF_V4SI_INT_INT: nargs_constant = 2; nargs = 4; break; @@ -35873,10 +35861,6 @@ ix86_expand_round_builtin (const struct builtin_description *d, case UQI_FTYPE_V2DF_V2DF_INT_UQI_INT: case UHI_FTYPE_V16SF_V16SF_INT_UHI_INT: case UQI_FTYPE_V4SF_V4SF_INT_UQI_INT: - case V8DF_FTYPE_V8DF_V8DI_INT_QI_INT: - case V16SF_FTYPE_V16SF_V16SI_INT_HI_INT: - case V2DF_FTYPE_V2DF_V2DI_INT_QI_INT: - case V4SF_FTYPE_V4SF_V4SI_INT_QI_INT: nargs_constant = 3; nargs = 5; break; @@ -35886,13 +35870,16 @@ ix86_expand_round_builtin (const struct builtin_description *d, case V2DF_FTYPE_V2DF_V2DF_INT_V2DF_QI_INT: case V2DF_FTYPE_V2DF_V2DF_INT_V2DF_UQI_INT: case V4SF_FTYPE_V4SF_V4SF_INT_V4SF_UQI_INT: - case V8DF_FTYPE_V8DF_V8DI_INT_V8DF_QI_INT: - case V16SF_FTYPE_V16SF_V16SI_INT_V16SF_HI_INT: - case V2DF_FTYPE_V2DF_V2DI_INT_V2DF_QI_INT: - case V4SF_FTYPE_V4SF_V4SI_INT_V4SF_QI_INT: nargs = 6; nargs_constant = 4; break; + case V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT: + case V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT: + case V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT: + case V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT: + nargs = 6; + nargs_constant = 3; + break; default: gcc_unreachable (); } diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md index 48708a41b3b..3af4adc63dd 100644 --- a/gcc/config/i386/sse.md +++ b/gcc/config/i386/sse.md @@ -8862,27 +8862,29 @@ (define_expand "_fixupimm_maskz" [(match_operand:VF_AVX512VL 0 "register_operand") (match_operand:VF_AVX512VL 1 "register_operand") - (match_operand: 2 "") - (match_operand:SI 3 "const_0_to_255_operand") - 
(match_operand: 4 "register_operand")] + (match_operand:VF_AVX512VL 2 "register_operand") + (match_operand: 3 "") + (match_operand:SI 4 "const_0_to_255_operand") + (match_operand: 5 "register_operand")] "TARGET_AVX512F" { emit_insn (gen__fixupimm_maskz_1 ( operands[0], operands[1], operands[2], operands[3], - CONST0_RTX (mode), operands[4] - )); + operands[4], CONST0_RTX (mode), operands[5] + )); DONE; }) (define_insn "_fixupimm" [(set (match_operand:VF_AVX512VL 0 "register_operand" "=v") (unspec:VF_AVX512VL - [(match_operand:VF_AVX512VL 1 "register_operand" "v") - (match_operand: 2 "nonimmediate_operand" "") - (match_operand:SI 3 "const_0_to_255_operand")] + [(match_operand:VF_AVX512VL 1 "register_operand" "0") + (match_operand:VF_AVX512VL 2 "register_operand" "v") + (match_operand: 3 "nonimmediate_operand" "") + (match_operand:SI 4 "const_0_to_255_operand")] UNSPEC_FIXUPIMM))] "TARGET_AVX512F" - "vfixupimm\t{%3, %2, %1, %0|%0, %1, %2, %3}"; + "vfixupimm\t{%4, %3, %2, %0|%0, %2, %3, %4}"; [(set_attr "prefix" "evex") (set_attr "mode" "")]) @@ -8890,56 +8892,66 @@ [(set (match_operand:VF_AVX512VL 0 "register_operand" "=v") (vec_merge:VF_AVX512VL (unspec:VF_AVX512VL - [(match_operand:VF_AVX512VL 1 "register_operand" "v") - (match_operand: 2 "nonimmediate_operand" "") - (match_operand:SI 3 "const_0_to_255_operand")] + [(match_operand:VF_AVX512VL 1 "register_operand" "0") + (match_operand:VF_AVX512VL 2 "register_operand" "v") + (match_operand: 3 "nonimmediate_operand" "") + (match_operand:SI 4 "const_0_to_255_operand")] UNSPEC_FIXUPIMM) - (match_operand:VF_AVX512VL 4 "register_operand" "0") + (match_dup 1) (match_operand: 5 "register_operand" "Yk")))] "TARGET_AVX512F" - "vfixupimm\t{%3, %2, %1, %0%{%5%}|%0%{%5%}, %1, %2, %3}"; + "vfixupimm\t{%4, %3, %2, %0%{%5%}|%0%{%5%}, %2, %3, %4}"; [(set_attr "prefix" "evex") (set_attr "mode" "")]) (define_expand "avx512f_sfixupimm_maskz" [(match_operand:VF_128 0 "register_operand") (match_operand:VF_128 1 "register_operand") - (match_operand: 2 "") - (match_operand:SI 3 "const_0_to_255_operand") - (match_operand: 4 "register_operand")] + (match_operand:VF_128 2 "register_operand") + (match_operand: 3 "") + (match_operand:SI 4 "const_0_to_255_operand") + (match_operand: 5 "register_operand")] "TARGET_AVX512F" { emit_insn (gen_avx512f_sfixupimm_maskz_1 ( operands[0], operands[1], operands[2], operands[3], - CONST0_RTX (mode), operands[4] - )); + operands[4], CONST0_RTX (mode), operands[5] + )); DONE; }) (define_insn "avx512f_sfixupimm" [(set (match_operand:VF_128 0 "register_operand" "=v") - (unspec:VF_128 - [(match_operand:VF_128 1 "register_operand" "v") - (match_operand: 2 "" "") - (match_operand:SI 3 "const_0_to_255_operand")] - UNSPEC_FIXUPIMM))] + (vec_merge:VF_128 + (unspec:VF_128 + [(match_operand:VF_128 1 "register_operand" "0") + (match_operand:VF_128 2 "register_operand" "v") + (match_operand: 3 "" "") + (match_operand:SI 4 "const_0_to_255_operand")] + UNSPEC_FIXUPIMM) + (match_dup 1) + (const_int 1)))] "TARGET_AVX512F" - "vfixupimm\t{%3, %2, %1, %0|%0, %1, %2, %3}"; + "vfixupimm\t{%4, %3, %2, %0|%0, %2, %3, %4}"; [(set_attr "prefix" "evex") (set_attr "mode" "")]) (define_insn "avx512f_sfixupimm_mask" [(set (match_operand:VF_128 0 "register_operand" "=v") (vec_merge:VF_128 + (vec_merge:VF_128 (unspec:VF_128 - [(match_operand:VF_128 1 "register_operand" "v") - (match_operand: 2 "" "") - (match_operand:SI 3 "const_0_to_255_operand")] + [(match_operand:VF_128 1 "register_operand" "0") + (match_operand:VF_128 2 "register_operand" "v") + 
(match_operand: 3 "" "") + (match_operand:SI 4 "const_0_to_255_operand")] UNSPEC_FIXUPIMM) - (match_operand:VF_128 4 "register_operand" "0") + (match_dup 1) + (const_int 1)) + (match_dup 1) (match_operand: 5 "register_operand" "Yk")))] "TARGET_AVX512F" - "vfixupimm\t{%3, %2, %1, %0%{%5%}|%0%{%5%}, %1, %2, %3}"; + "vfixupimm\t{%4, %3, %2, %0%{%5%}|%0%{%5%}, %2, %3, %4}"; [(set_attr "prefix" "evex") (set_attr "mode" "")]) diff --git a/gcc/config/i386/subst.md b/gcc/config/i386/subst.md index 3f67cf43a92..99198a3ea69 100644 --- a/gcc/config/i386/subst.md +++ b/gcc/config/i386/subst.md @@ -149,7 +149,6 @@ (define_subst_attr "round_saeonly_mask_operand3" "mask" "%r3" "%r5") (define_subst_attr "round_saeonly_mask_operand4" "mask" "%r4" "%r6") (define_subst_attr "round_saeonly_mask_scalar_merge_operand4" "mask_scalar_merge" "%r4" "%r5") -(define_subst_attr "round_saeonly_sd_mask_operand4" "sd" "%r4" "%r6") (define_subst_attr "round_saeonly_sd_mask_operand5" "sd" "%r5" "%r7") (define_subst_attr "round_saeonly_op2" "round_saeonly" "" "%r2") (define_subst_attr "round_saeonly_op3" "round_saeonly" "" "%r3") @@ -161,7 +160,6 @@ (define_subst_attr "round_saeonly_mask_op3" "round_saeonly" "" "") (define_subst_attr "round_saeonly_mask_op4" "round_saeonly" "" "") (define_subst_attr "round_saeonly_mask_scalar_merge_op4" "round_saeonly" "" "") -(define_subst_attr "round_saeonly_sd_mask_op4" "round_saeonly" "" "") (define_subst_attr "round_saeonly_sd_mask_op5" "round_saeonly" "" "") (define_subst_attr "round_saeonly_mask_arg3" "round_saeonly" "" ", operands[]") (define_subst_attr "round_saeonly_constraint" "round_saeonly" "vm" "v") @@ -214,21 +212,23 @@ (define_subst_attr "round_saeonly_expand_name" "round_saeonly_expand" "" "_round") (define_subst_attr "round_saeonly_expand_nimm_predicate" "round_saeonly_expand" "nonimmediate_operand" "register_operand") -(define_subst_attr "round_saeonly_expand_operand5" "round_saeonly_expand" "" ", operands[5]") +(define_subst_attr "round_saeonly_expand_operand6" "round_saeonly_expand" "" ", operands[6]") (define_subst "round_saeonly_expand" [(match_operand:SUBST_V 0) (match_operand:SUBST_V 1) - (match_operand:SUBST_A 2) - (match_operand:SI 3) - (match_operand:SUBST_S 4)] + (match_operand:SUBST_V 2) + (match_operand:SUBST_A 3) + (match_operand:SI 4) + (match_operand:SUBST_S 5)] "TARGET_AVX512F" [(match_dup 0) (match_dup 1) (match_dup 2) (match_dup 3) (match_dup 4) - (unspec [(match_operand:SI 5 "const48_operand")] UNSPEC_EMBEDDED_ROUNDING)]) + (match_dup 5) + (unspec [(match_operand:SI 6 "const48_operand")] UNSPEC_EMBEDDED_ROUNDING)]) (define_subst_attr "mask_expand4_name" "mask_expand4" "" "_mask") (define_subst_attr "mask_expand4_args" "mask_expand4" "" ", operands[4], operands[5]") diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog index 188974d3657..60b308bc234 100644 --- a/gcc/testsuite/ChangeLog +++ b/gcc/testsuite/ChangeLog @@ -1,3 +1,26 @@ +2019-01-17 Wei Xiao + + PR target/88794 + Revert: + 2018-11-06 Wei Xiao + + * gcc.target/i386/avx-1.c: Update tests for VFIXUPIMM* intrinsics. + * gcc.target/i386/avx512f-vfixupimmpd-1.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmpd-2.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmps-1.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmsd-1.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmsd-2.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmss-1.c: Ditto. + * gcc.target/i386/avx512f-vfixupimmss-2.c: Ditto. + * gcc.target/i386/avx512vl-vfixupimmpd-1.c: Ditto. + * gcc.target/i386/avx512vl-vfixupimmps-1.c: Ditto. 
+ * gcc.target/i386/sse-13.c: Ditto. + * gcc.target/i386/sse-14.c: Ditto. + * gcc.target/i386/sse-22.c: Ditto. + * gcc.target/i386/sse-23.c: Ditto. + * gcc.target/i386/testimm-10.c: Ditto. + * gcc.target/i386/testround-1.c: Ditto. + 2019-01-17 Wei Xiao PR target/88794 diff --git a/gcc/testsuite/gcc.target/i386/avx-1.c b/gcc/testsuite/gcc.target/i386/avx-1.c index c5830613011..f67bc5f5044 100644 --- a/gcc/testsuite/gcc.target/i386/avx-1.c +++ b/gcc/testsuite/gcc.target/i386/avx-1.c @@ -214,18 +214,14 @@ #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D) #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 1, C, D) #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D) -#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8) -#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8) -#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8) -#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8) -#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8) #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8) #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8) #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8) @@ -554,19 +550,14 @@ #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1) #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1) #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1) -#define __builtin_ia32_fixupimmps256_maskz(B, C, F, 
E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1) - -#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1) -#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1) -#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1) +#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E) #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D) #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D) #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D) diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-1.c index fc58c347b15..aac0150e1c3 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-1.c @@ -16,10 +16,10 @@ volatile __mmask8 m; void extern avx512f_test (void) { - x1 = _mm512_fixupimm_pd (x2, y, 3); + x1 = _mm512_fixupimm_pd (x1, x2, y, 3); x1 = _mm512_mask_fixupimm_pd (x1, m, x2, y, 3); - x1 = _mm512_maskz_fixupimm_pd (m, x2, y, 3); - x1 = _mm512_fixupimm_round_pd (x2, y, 3, _MM_FROUND_NO_EXC); + x1 = _mm512_maskz_fixupimm_pd (m, x1, x2, y, 3); + x1 = _mm512_fixupimm_round_pd (x1, x2, y, 3, _MM_FROUND_NO_EXC); x1 = _mm512_mask_fixupimm_round_pd (x1, m, x2, y, 3, _MM_FROUND_NO_EXC); - x1 = _mm512_maskz_fixupimm_round_pd (m, x2, y, 3, _MM_FROUND_NO_EXC); + x1 = _mm512_maskz_fixupimm_round_pd (m, x1, x2, y, 3, _MM_FROUND_NO_EXC); } diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-2.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-2.c index 8c4e1631635..7ce980632db 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-2.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmpd-2.c @@ -99,9 +99,9 @@ TEST (void) CALC 
(&res_ref[j], s1.a[j], s2.a[j]); } - res1.x = INTRINSIC (_fixupimm_pd) (s1.x, s2.x, 0); + res1.x = INTRINSIC (_fixupimm_pd) (res1.x, s1.x, s2.x, 0); res2.x = INTRINSIC (_mask_fixupimm_pd) (res2.x, mask, s1.x, s2.x, 0); - res3.x = INTRINSIC (_maskz_fixupimm_pd) (mask, s1.x, s2.x, 0); + res3.x = INTRINSIC (_maskz_fixupimm_pd) (mask, res3.x, s1.x, s2.x, 0); if (UNION_CHECK (AVX512F_LEN, d) (res1, res_ref)) abort (); diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-1.c index 7921a1b546b..f9237a89f5d 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-1.c @@ -16,10 +16,10 @@ volatile __mmask16 m; void extern avx512f_test (void) { - x1 = _mm512_fixupimm_ps (x2, y, 3); + x1 = _mm512_fixupimm_ps (x1, x2, y, 3); x1 = _mm512_mask_fixupimm_ps (x1, m, x2, y, 3); - x1 = _mm512_maskz_fixupimm_ps (m, x2, y, 3); - x1 = _mm512_fixupimm_round_ps (x2, y, 3, _MM_FROUND_NO_EXC); + x1 = _mm512_maskz_fixupimm_ps (m, x1, x2, y, 3); + x1 = _mm512_fixupimm_round_ps (x1, x2, y, 3, _MM_FROUND_NO_EXC); x1 = _mm512_mask_fixupimm_round_ps (x1, m, x2, y, 3, _MM_FROUND_NO_EXC); - x1 = _mm512_maskz_fixupimm_round_ps (m, x2, y, 3, _MM_FROUND_NO_EXC); + x1 = _mm512_maskz_fixupimm_round_ps (m, x1, x2, y, 3, _MM_FROUND_NO_EXC); } diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-2.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-2.c index 5c60a855f93..ffd899eaa50 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-2.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmps-2.c @@ -104,9 +104,9 @@ TEST (void) CALC (&res_ref[j], s1.a[j], s2.a[j]); } - res1.x = INTRINSIC (_fixupimm_ps) (s1.x, s2.x, 0); + res1.x = INTRINSIC (_fixupimm_ps) (res1.x, s1.x, s2.x, 0); res2.x = INTRINSIC (_mask_fixupimm_ps) (res2.x, mask, s1.x, s2.x, 0); - res3.x = INTRINSIC (_maskz_fixupimm_ps) (mask, s1.x, s2.x, 0); + res3.x = INTRINSIC (_maskz_fixupimm_ps) (mask, res3.x, s1.x, s2.x, 0); if (UNION_CHECK (AVX512F_LEN,) (res1, res_ref)) abort (); diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-1.c index 926a5dc64e3..80baa75e3ae 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-1.c @@ -16,10 +16,10 @@ volatile __mmask8 m; void extern avx512f_test (void) { - x = _mm_fixupimm_sd (x, y, 3); + x = _mm_fixupimm_sd (x, x, y, 3); x = _mm_mask_fixupimm_sd (x, m, x, y, 3); - x = _mm_maskz_fixupimm_sd (m, x, y, 3); - x = _mm_fixupimm_round_sd (x, y, 3, _MM_FROUND_NO_EXC); + x = _mm_maskz_fixupimm_sd (m, x, x, y, 3); + x = _mm_fixupimm_round_sd (x, x, y, 3, _MM_FROUND_NO_EXC); x = _mm_mask_fixupimm_round_sd (x, m, x, y, 3, _MM_FROUND_NO_EXC); - x = _mm_maskz_fixupimm_round_sd (m, x, y, 3, _MM_FROUND_NO_EXC); + x = _mm_maskz_fixupimm_round_sd (m, x, x, y, 3, _MM_FROUND_NO_EXC); } diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-2.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-2.c index e2947b34e0b..62c8d4eff14 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-2.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmsd-2.c @@ -100,9 +100,9 @@ avx512f_test (void) s2.a[0] = controls[j]; compute_fixupimmpd (&res_ref[0], s1.a[0], s2.a[0]); - res1.x = _mm_fixupimm_sd (s1.x, s2.x, 0); + res1.x = _mm_fixupimm_sd (res1.x, s1.x, s2.x, 0); res2.x = _mm_mask_fixupimm_sd (res2.x, mask, s1.x, s2.x, 0); - res3.x = _mm_maskz_fixupimm_sd 
(mask, s1.x, s2.x, 0); + res3.x = _mm_maskz_fixupimm_sd (mask, res3.x, s1.x, s2.x, 0); if (check_union128d (res1, res_ref)) abort (); diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-1.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-1.c index 65ba291cca1..38dbbe2224c 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-1.c @@ -16,10 +16,10 @@ volatile __mmask8 m; void extern avx512f_test (void) { - x = _mm_fixupimm_ss (x, y, 3); + x = _mm_fixupimm_ss (x, x, y, 3); x = _mm_mask_fixupimm_ss (x, m, x, y, 3); - x = _mm_maskz_fixupimm_ss (m, x, y, 3); - x = _mm_fixupimm_round_ss (x, y, 3, _MM_FROUND_NO_EXC); + x = _mm_maskz_fixupimm_ss (m, x, x, y, 3); + x = _mm_fixupimm_round_ss (x, x, y, 3, _MM_FROUND_NO_EXC); x = _mm_mask_fixupimm_round_ss (x, m, x, y, 3, _MM_FROUND_NO_EXC); - x = _mm_maskz_fixupimm_round_ss (m, x, y, 3, _MM_FROUND_NO_EXC); + x = _mm_maskz_fixupimm_round_ss (m, x, x, y, 3, _MM_FROUND_NO_EXC); } diff --git a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-2.c b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-2.c index 2f307f6ec8d..26f45a09497 100644 --- a/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-2.c +++ b/gcc/testsuite/gcc.target/i386/avx512f-vfixupimmss-2.c @@ -101,9 +101,9 @@ avx512f_test (void) s2.a[0] = controls[j]; compute_fixupimmps (&res_ref[0], s1.a[0], s2.a[0]); - res1.x = _mm_fixupimm_ss (s1.x, s2.x, 0); + res1.x = _mm_fixupimm_ss (res1.x, s1.x, s2.x, 0); res2.x = _mm_mask_fixupimm_ss (res2.x, mask, s1.x, s2.x, 0); - res3.x = _mm_maskz_fixupimm_ss (mask, s1.x, s2.x, 0); + res3.x = _mm_maskz_fixupimm_ss (mask, res3.x, s1.x, s2.x, 0); if (check_union128 (res1, res_ref)) abort (); diff --git a/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmpd-1.c b/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmpd-1.c index 5835dbc3e55..dbc02b93309 100644 --- a/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmpd-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmpd-1.c @@ -16,10 +16,10 @@ volatile __mmask8 m; void extern avx512vl_test (void) { - xx = _mm256_fixupimm_pd (xx, yy, 3); + xx = _mm256_fixupimm_pd (xx, xx, yy, 3); xx = _mm256_mask_fixupimm_pd (xx, m, xx, yy, 3); - xx = _mm256_maskz_fixupimm_pd (m, xx, yy, 3); - x2 = _mm_fixupimm_pd (x2, y2, 3); + xx = _mm256_maskz_fixupimm_pd (m, xx, xx, yy, 3); + x2 = _mm_fixupimm_pd (x2, x2, y2, 3); x2 = _mm_mask_fixupimm_pd (x2, m, x2, y2, 3); - x2 = _mm_maskz_fixupimm_pd (m, x2, y2, 3); + x2 = _mm_maskz_fixupimm_pd (m, x2, x2, y2, 3); } diff --git a/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmps-1.c b/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmps-1.c index c195333f877..fbdd6df1b8c 100644 --- a/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmps-1.c +++ b/gcc/testsuite/gcc.target/i386/avx512vl-vfixupimmps-1.c @@ -16,10 +16,10 @@ volatile __mmask8 m; void extern avx512vl_test (void) { - xx = _mm256_fixupimm_ps (xx, yy, 3); + xx = _mm256_fixupimm_ps (xx, xx, yy, 3); xx = _mm256_mask_fixupimm_ps (xx, m, xx, yy, 3); - xx = _mm256_maskz_fixupimm_ps (m, xx, yy, 3); - x2 = _mm_fixupimm_ps (x2, y2, 3); + xx = _mm256_maskz_fixupimm_ps (m, xx, xx, yy, 3); + x2 = _mm_fixupimm_ps (x2, x2, y2, 3); x2 = _mm_mask_fixupimm_ps (x2, m, x2, y2, 3); - x2 = _mm_maskz_fixupimm_ps (m, x2, y2, 3); + x2 = _mm_maskz_fixupimm_ps (m, x2, x2, y2, 3); } diff --git a/gcc/testsuite/gcc.target/i386/sse-13.c b/gcc/testsuite/gcc.target/i386/sse-13.c index 48c009ebec4..64da3cd1992 100644 --- a/gcc/testsuite/gcc.target/i386/sse-13.c +++ 
b/gcc/testsuite/gcc.target/i386/sse-13.c @@ -231,18 +231,14 @@ #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D) #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 1, C, D) #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D) -#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8) -#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8) -#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8) -#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8) -#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8) #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8) #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8) #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8) @@ -571,19 +567,14 @@ #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1) #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1) #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1) -#define __builtin_ia32_fixupimmps256_maskz(B, C, F, E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1) - -#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 
1, F, E) -#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1) -#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1) -#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1) +#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E) #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D) #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D) #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D) diff --git a/gcc/testsuite/gcc.target/i386/sse-14.c b/gcc/testsuite/gcc.target/i386/sse-14.c index 6393ab42486..0f663bec702 100644 --- a/gcc/testsuite/gcc.target/i386/sse-14.c +++ b/gcc/testsuite/gcc.target/i386/sse-14.c @@ -444,8 +444,8 @@ test_3v (_mm512_i64scatter_pd, void *, __m512i, __m512d, 1) test_3v (_mm512_i64scatter_ps, void *, __m512i, __m256, 1) test_3x (_mm512_mask_roundscale_round_pd, __m512d, __m512d, __mmask8, __m512d, 1, 8) test_3x (_mm512_mask_roundscale_round_ps, __m512, __m512, __mmask16, __m512, 1, 8) -test_2x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128i, 1, 8) -test_2x (_mm_fixupimm_round_ss, __m128, __m128, __m128i, 1, 8) +test_3x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128d, __m128i, 1, 8) +test_3x (_mm_fixupimm_round_ss, __m128, __m128, __m128, __m128i, 1, 8) test_3x (_mm_mask_cmp_round_sd_mask, __mmask8, __mmask8, __m128d, __m128d, 1, 8) test_3x (_mm_mask_cmp_round_ss_mask, __mmask8, __mmask8, __m128, __m128, 1, 8) test_4 (_mm512_mask3_fmadd_round_pd, __m512d, __m512d, __m512d, __m512d, __mmask8, 9) @@ -544,12 +544,12 @@ test_4v (_mm512_mask_i64scatter_pd, void *, __mmask8, __m512i, __m512d, 1) test_4v (_mm512_mask_i64scatter_ps, void *, __mmask8, __m512i, __m256, 1) test_4x (_mm512_mask_fixupimm_round_pd, __m512d, __m512d, __mmask8, __m512d, __m512i, 1, 8) test_4x (_mm512_mask_fixupimm_round_ps, __m512, __m512, __mmask16, __m512, __m512i, 1, 8) -test_3x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512i, 1, 8) -test_3x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512i, 1, 8) +test_4x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512d, __m512i, 1, 8) +test_4x 
(_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512, __m512i, 1, 8) test_4x (_mm_mask_fixupimm_round_sd, __m128d, __m128d, __mmask8, __m128d, __m128i, 1, 8) test_4x (_mm_mask_fixupimm_round_ss, __m128, __m128, __mmask8, __m128, __m128i, 1, 8) -test_3x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128i, 1, 8) -test_3x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128i, 1, 8) +test_4x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128d, __m128i, 1, 8) +test_4x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128, __m128i, 1, 8) /* avx512pfintrin.h */ test_2vx (_mm512_prefetch_i32gather_ps, __m512i, void const *, 1, _MM_HINT_T0) diff --git a/gcc/testsuite/gcc.target/i386/sse-22.c b/gcc/testsuite/gcc.target/i386/sse-22.c index d21769c0de8..99af58a995d 100644 --- a/gcc/testsuite/gcc.target/i386/sse-22.c +++ b/gcc/testsuite/gcc.target/i386/sse-22.c @@ -555,8 +555,8 @@ test_3x (_mm512_mask_roundscale_round_pd, __m512d, __m512d, __mmask8, __m512d, 1 test_3x (_mm512_mask_roundscale_round_ps, __m512, __m512, __mmask16, __m512, 1, 8) test_3x (_mm512_mask_cmp_round_pd_mask, __mmask8, __mmask8, __m512d, __m512d, 1, 8) test_3x (_mm512_mask_cmp_round_ps_mask, __mmask16, __mmask16, __m512, __m512, 1, 8) -test_2x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128i, 1, 8) -test_2x (_mm_fixupimm_round_ss, __m128, __m128, __m128i, 1, 8) +test_3x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128d, __m128i, 1, 8) +test_3x (_mm_fixupimm_round_ss, __m128, __m128, __m128, __m128i, 1, 8) test_3x (_mm_mask_cmp_round_sd_mask, __mmask8, __mmask8, __m128d, __m128d, 1, 8) test_3x (_mm_mask_cmp_round_ss_mask, __mmask8, __mmask8, __m128, __m128, 1, 8) test_4 (_mm512_mask3_fmadd_round_pd, __m512d, __m512d, __m512d, __m512d, __mmask8, 9) @@ -643,12 +643,12 @@ test_4v (_mm512_mask_i64scatter_pd, void *, __mmask8, __m512i, __m512d, 1) test_4v (_mm512_mask_i64scatter_ps, void *, __mmask8, __m512i, __m256, 1) test_4x (_mm512_mask_fixupimm_round_pd, __m512d, __m512d, __mmask8, __m512d, __m512i, 1, 8) test_4x (_mm512_mask_fixupimm_round_ps, __m512, __m512, __mmask16, __m512, __m512i, 1, 8) -test_3x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512i, 1, 8) -test_3x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512i, 1, 8) +test_4x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512d, __m512i, 1, 8) +test_4x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512, __m512i, 1, 8) test_4x (_mm_mask_fixupimm_round_sd, __m128d, __m128d, __mmask8, __m128d, __m128i, 1, 8) test_4x (_mm_mask_fixupimm_round_ss, __m128, __m128, __mmask8, __m128, __m128i, 1, 8) -test_3x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128i, 1, 8) -test_3x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128i, 1, 8) +test_4x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128d, __m128i, 1, 8) +test_4x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128, __m128i, 1, 8) /* avx512pfintrin.h */ test_2vx (_mm512_prefetch_i32gather_ps, __m512i, void const *, 1, _MM_HINT_T0) diff --git a/gcc/testsuite/gcc.target/i386/sse-23.c b/gcc/testsuite/gcc.target/i386/sse-23.c index 2ca333959ec..f9d372c47e2 100644 --- a/gcc/testsuite/gcc.target/i386/sse-23.c +++ b/gcc/testsuite/gcc.target/i386/sse-23.c @@ -232,18 +232,14 @@ #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D) #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 
1, C, D) #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D) -#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8) -#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8) -#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8) -#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8) -#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8) -#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8) -#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8) +#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8) #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8) #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8) #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8) @@ -570,19 +566,14 @@ #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1) #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1) #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1) -#define __builtin_ia32_fixupimmps256_maskz(B, C, F, E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1) - -#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1) -#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, 
E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1) -#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E) -#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E) -#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1) +#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E) +#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E) #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D) #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D) #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D) diff --git a/gcc/testsuite/gcc.target/i386/testimm-10.c b/gcc/testsuite/gcc.target/i386/testimm-10.c index 932b8902394..d0e9b42f2fe 100644 --- a/gcc/testsuite/gcc.target/i386/testimm-10.c +++ b/gcc/testsuite/gcc.target/i386/testimm-10.c @@ -69,21 +69,21 @@ test8bit (void) m512 = _mm512_mask_shuffle_ps (m512, mmask16, m512, m512, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */ m512 = _mm512_maskz_shuffle_ps (mmask16, m512, m512, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */ - m512d = _mm512_fixupimm_pd (m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m512d = _mm512_fixupimm_pd (m512d, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ m512d = _mm512_mask_fixupimm_pd (m512d, mmask8, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m512d = _mm512_maskz_fixupimm_pd (mmask8, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m512d = _mm512_maskz_fixupimm_pd (mmask8, m512d, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m512 = _mm512_fixupimm_ps (m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m512 = _mm512_fixupimm_ps (m512, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ m512 = _mm512_mask_fixupimm_ps (m512, mmask16, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m512 = _mm512_maskz_fixupimm_ps (mmask16, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m512 = _mm512_maskz_fixupimm_ps (mmask16, m512, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m128d = _mm_fixupimm_sd (m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit 
immediate" } */ + m128d = _mm_fixupimm_sd (m128d, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ m128d = _mm_mask_fixupimm_sd (m128d, mmask8, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m128d = _mm_maskz_fixupimm_sd (mmask8, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m128d = _mm_maskz_fixupimm_sd (mmask8, m128d, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m128 = _mm_fixupimm_ss (m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m128 = _mm_fixupimm_ss (m128, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ m128 = _mm_mask_fixupimm_ss (m128, mmask8, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ - m128 = _mm_maskz_fixupimm_ss (mmask8, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ + m128 = _mm_maskz_fixupimm_ss (mmask8, m128, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */ m512i = _mm512_rol_epi32 (m512i, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */ m512i = _mm512_mask_rol_epi32 (m512i, mmask16, m512i, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */ diff --git a/gcc/testsuite/gcc.target/i386/testround-1.c b/gcc/testsuite/gcc.target/i386/testround-1.c index e51d77b09e7..d5ab95c208e 100644 --- a/gcc/testsuite/gcc.target/i386/testround-1.c +++ b/gcc/testsuite/gcc.target/i386/testround-1.c @@ -220,18 +220,18 @@ test_round (void) m512i = _mm512_mask_cvtt_roundps_epu32 (m512i, mmask16, m512, 7); /* { dg-error "incorrect rounding operand" } */ m512i = _mm512_maskz_cvtt_roundps_epu32 (mmask16, m512, 7); /* { dg-error "incorrect rounding operand" } */ - m512d = _mm512_fixupimm_round_pd (m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m512d = _mm512_fixupimm_round_pd (m512d, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ m512d = _mm512_mask_fixupimm_round_pd (m512d, mmask8, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m512 = _mm512_fixupimm_round_ps (m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m512 = _mm512_fixupimm_round_ps (m512, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ m512 = _mm512_mask_fixupimm_round_ps (m512, mmask16, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m128d = _mm_fixupimm_round_sd (m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m128d = _mm_fixupimm_round_sd (m128d, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ m128d = _mm_mask_fixupimm_round_sd (m128d, mmask8, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m128 = 
_mm_fixupimm_round_ss (m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m128 = _mm_fixupimm_round_ss (m128, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ m128 = _mm_mask_fixupimm_round_ss (m128, mmask8, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ - m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ + m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */ ui = _mm_cvtt_roundss_u32 (m128, 7); /* { dg-error "incorrect rounding operand" } */ i = _mm_cvtt_roundss_i32 (m128, 7); /* { dg-error "incorrect rounding operand" } */ @@ -503,18 +503,18 @@ test_sae_only (void) m512i = _mm512_mask_cvtt_roundps_epu32 (m512i, mmask16, m512, 3); /* { dg-error "incorrect rounding operand" } */ m512i = _mm512_maskz_cvtt_roundps_epu32 (mmask16, m512, 3); /* { dg-error "incorrect rounding operand" } */ - m512d = _mm512_fixupimm_round_pd (m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m512d = _mm512_fixupimm_round_pd (m512d, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ m512d = _mm512_mask_fixupimm_round_pd (m512d, mmask8, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m512 = _mm512_fixupimm_round_ps (m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m512 = _mm512_fixupimm_round_ps (m512, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ m512 = _mm512_mask_fixupimm_round_ps (m512, mmask16, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m128d = _mm_fixupimm_round_sd (m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m128d = _mm_fixupimm_round_sd (m128d, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ m128d = _mm_mask_fixupimm_round_sd (m128d, mmask8, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m128 = _mm_fixupimm_round_ss (m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m128 = _mm_fixupimm_round_ss (m128, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ m128 = _mm_mask_fixupimm_round_ss (m128, mmask8, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ - m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ + m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */ ui = _mm_cvtt_roundss_u32 (m128, 3); /* { dg-error "incorrect rounding operand" } */ i = _mm_cvtt_roundss_i32 (m128, 3); /* { dg-error "incorrect rounding 
operand" } */