* config/i386/mmx.md (EMMS): New int iterator.
(emms): New int attribute.
(mmx_<emms>): Macroize insn from *mmx_emms and *mmx_femms using
EMMS int iterator. Explicitly declare clobbers.
(mmx_emms): Remove expander.
(mmx_femms): Ditto.
* config/i386/predicates.md (emms_operation): Remove predicate.
(vzeroall_pattern): New predicate.
(vzeroupper_pattern): Rename from vzeroupper_operation.
* config/i386/i386.c (ix86_avx_u128_mode_after): Use
vzeroupper_pattern and vzeroall_pattern predicates.
From-SVN: r264727
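
For readers unfamiliar with machine-description iterators, the macroization above works by expanding one insn template once per iterator entry. A minimal sketch of the mechanism, using hypothetical names (FOO, UNSPECV_A/B, TARGET_A/B are illustrative only, not part of this patch):

```lisp
;; Hypothetical int iterator: each entry pairs an unspec code with a
;; target condition that is ANDed into the generated insn's condition.
(define_int_iterator FOO
  [(UNSPECV_A "TARGET_A")
   (UNSPECV_B "TARGET_B")])

;; Matching int attribute: maps each code to the text substituted
;; for <foo> in the insn name and assembler template.
(define_int_attr foo
  [(UNSPECV_A "insn_a")
   (UNSPECV_B "insn_b")])

;; One template expands into two insns, "foo_insn_a" (enabled by
;; TARGET_A) and "foo_insn_b" (enabled by TARGET_B).
(define_insn "foo_<foo>"
  [(unspec_volatile [(const_int 0)] FOO)]
  ""
  "<foo>")
```

This is why the mmx_&lt;emms&gt; insn below can carry an empty "" condition: TARGET_MMX and TARGET_3DNOW come from the EMMS iterator entries themselves.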
+2018-09-30 Uros Bizjak <ubizjak@gmail.com>
+
+ * config/i386/mmx.md (EMMS): New int iterator.
+ (emms): New int attribute.
+ (mmx_<emms>): Macroize insn from *mmx_emms and *mmx_femms using
+ EMMS int iterator. Explicitly declare clobbers.
+ (mmx_emms): Remove expander.
+ (mmx_femms): Ditto.
+ * config/i386/predicates.md (emms_operation): Remove predicate.
+ (vzeroall_pattern): New predicate.
+ (vzeroupper_pattern): Rename from vzeroupper_operation.
+ * config/i386/i386.c (ix86_avx_u128_mode_after): Use
+ vzeroupper_pattern and vzeroall_pattern predicates.
+
2018-09-30 Peter Bergner <bergner@linux.ibm.com>
PR rtl-optimization/86939
* configure: Regenerate.
2018-09-28 Eric Botcazou <ebotcazou@adacore.com>
Pierre-Marie de Rodat <derodat@adacore.com>
* calls.c (expand_call): Try to do a tail call for thunks at -O0 too.
* cgraph.h (struct cgraph_thunk_info): Add indirect_offset.
to subblocks.
2018-09-27 Andrew Stubbs <ams@codesourcery.com>
Tom de Vries <tom@codesourcery.com>
PR 82089
2018-09-25 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR c/87387
* builtins.c (unterminated_array): Simplify.
* expr.c (string_constant): Handle SSA_NAME. Add more exceptions
where pointer arithmetic is safe.
* genattrtab.c (mk_attr_alt): Use alternative_mask.
(attr_rtx_1): Adjust caching to match the new EQ_ATTR_ALT field
types.
(check_attr_test): Use alternative_mask.
(get_attr_value): Likewise.
(compute_alternative_mask): Use alternative_mask and XWINT.
(attr_alt_intersection): Use alternative_mask and XWINT.
(attr_alt_union): Likewise.
(attr_alt_complement): Use HOST_WIDE_INT and XWINT.
(mk_attr_alt): Use alternative_mask and HOST_WIDE_INT.
(simplify_test_exp): Use alternative_mask and XWINT.
(write_test_expr): Use alternative_mask and XWINT, adjust bit
number calculation to support 64 bits. Generate code that
checks 64-bit masks.
(main): Use alternative_mask.
* rtl.def (EQ_ATTR_ALT): Change field types from ii to ww.
2018-09-23 Uros Bizjak <ubizjak@gmail.com>
* config/i386/i386.c (regclass_map): Declare integer REX registers
as GENERAL_REGS.
2018-09-23 Gerald Pfeifer <gerald@pfeifer.com>
* config.gcc: Prepend vxworks-dummy.h to tm_file for powerpc*
2018-09-21 Shaokun Zhang <zhangshaokun@hisilicon.com>
Bo Zhou <zbo.zhou@hisilicon.com>
* config/aarch64/aarch64-cores.def (tsv110): New CPU.
* config/aarch64/aarch64-tune.md: Regenerated.
* cfgexpand.c (expand_gimple_cond): Likewise.
2018-09-09 Cesar Philippidis <cesar@codesourcery.com>
Julian Brown <julian@codesourcery.com>
PR middle-end/86336
* gimplify.c (gimplify_scan_omp_clauses): Set
{
rtx pat = PATTERN (insn);
- if (vzeroupper_operation (pat, VOIDmode)
- || vzeroall_operation (pat, VOIDmode))
+ if (vzeroupper_pattern (pat, VOIDmode)
+ || vzeroall_pattern (pat, VOIDmode))
return AVX_U128_CLEAN;
/* We know that state is clean after CALL insn if there are no
(set_attr "znver1_decode" "vector")
(set_attr "mode" "DI")])
-(define_expand "mmx_emms"
- [(match_par_dup 0 [(const_int 0)])]
- "TARGET_MMX"
-{
- int regno;
-
- operands[0] = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (17));
-
- XVECEXP (operands[0], 0, 0)
- = gen_rtx_UNSPEC_VOLATILE (VOIDmode, gen_rtvec (1, const0_rtx),
- UNSPECV_EMMS);
-
- for (regno = 0; regno < 8; regno++)
- {
- XVECEXP (operands[0], 0, regno + 1)
- = gen_rtx_CLOBBER (VOIDmode,
- gen_rtx_REG (XFmode, FIRST_STACK_REG + regno));
-
- XVECEXP (operands[0], 0, regno + 9)
- = gen_rtx_CLOBBER (VOIDmode,
- gen_rtx_REG (DImode, FIRST_MMX_REG + regno));
- }
-})
-
-(define_insn "*mmx_emms"
- [(match_parallel 0 "emms_operation"
- [(unspec_volatile [(const_int 0)] UNSPECV_EMMS)])]
- "TARGET_MMX"
- "emms"
- [(set_attr "type" "mmx")
- (set_attr "modrm" "0")
- (set_attr "memory" "none")])
-
-(define_expand "mmx_femms"
- [(match_par_dup 0 [(const_int 0)])]
- "TARGET_3DNOW"
-{
- int regno;
-
- operands[0] = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (17));
-
- XVECEXP (operands[0], 0, 0)
- = gen_rtx_UNSPEC_VOLATILE (VOIDmode, gen_rtvec (1, const0_rtx),
- UNSPECV_FEMMS);
-
- for (regno = 0; regno < 8; regno++)
- {
- XVECEXP (operands[0], 0, regno + 1)
- = gen_rtx_CLOBBER (VOIDmode,
- gen_rtx_REG (XFmode, FIRST_STACK_REG + regno));
-
- XVECEXP (operands[0], 0, regno + 9)
- = gen_rtx_CLOBBER (VOIDmode,
- gen_rtx_REG (DImode, FIRST_MMX_REG + regno));
- }
-})
-
-(define_insn "*mmx_femms"
- [(match_parallel 0 "emms_operation"
- [(unspec_volatile [(const_int 0)] UNSPECV_FEMMS)])]
- "TARGET_3DNOW"
- "femms"
+(define_int_iterator EMMS
+ [(UNSPECV_EMMS "TARGET_MMX")
+ (UNSPECV_FEMMS "TARGET_3DNOW")])
+
+(define_int_attr emms
+ [(UNSPECV_EMMS "emms")
+ (UNSPECV_FEMMS "femms")])
+
+(define_insn "mmx_<emms>"
+ [(unspec_volatile [(const_int 0)] EMMS)
+ (clobber (reg:XF ST0_REG))
+ (clobber (reg:XF ST1_REG))
+ (clobber (reg:XF ST2_REG))
+ (clobber (reg:XF ST3_REG))
+ (clobber (reg:XF ST4_REG))
+ (clobber (reg:XF ST5_REG))
+ (clobber (reg:XF ST6_REG))
+ (clobber (reg:XF ST7_REG))
+ (clobber (reg:DI MM0_REG))
+ (clobber (reg:DI MM1_REG))
+ (clobber (reg:DI MM2_REG))
+ (clobber (reg:DI MM3_REG))
+ (clobber (reg:DI MM4_REG))
+ (clobber (reg:DI MM5_REG))
+ (clobber (reg:DI MM6_REG))
+ (clobber (reg:DI MM7_REG))]
+ ""
+ "<emms>"
[(set_attr "type" "mmx")
(set_attr "modrm" "0")
(set_attr "memory" "none")])
(and (match_code "mem")
(match_test "MEM_ALIGN (op) < GET_MODE_BITSIZE (mode)")))
-;; Return true if OP is a emms operation, known to be a PARALLEL.
-(define_predicate "emms_operation"
- (match_code "parallel")
-{
- unsigned i;
-
- if (XVECLEN (op, 0) != 17)
- return false;
-
- for (i = 0; i < 8; i++)
- {
- rtx elt = XVECEXP (op, 0, i+1);
-
- if (GET_CODE (elt) != CLOBBER
- || GET_CODE (SET_DEST (elt)) != REG
- || GET_MODE (SET_DEST (elt)) != XFmode
- || REGNO (SET_DEST (elt)) != FIRST_STACK_REG + i)
- return false;
-
- elt = XVECEXP (op, 0, i+9);
-
- if (GET_CODE (elt) != CLOBBER
- || GET_CODE (SET_DEST (elt)) != REG
- || GET_MODE (SET_DEST (elt)) != DImode
- || REGNO (SET_DEST (elt)) != FIRST_MMX_REG + i)
- return false;
- }
- return true;
-})
-
;; Return true if OP is a vzeroall operation, known to be a PARALLEL.
(define_predicate "vzeroall_operation"
(match_code "parallel")
return true;
})
-;; return true if OP is a vzeroupper operation.
-(define_predicate "vzeroupper_operation"
+;; return true if OP is a vzeroall pattern.
+(define_predicate "vzeroall_pattern"
+ (and (match_code "parallel")
+ (match_code "unspec_volatile" "a")
+ (match_test "XINT (XVECEXP (op, 0, 0), 1) == UNSPECV_VZEROALL")))
+
+;; return true if OP is a vzeroupper pattern.
+(define_predicate "vzeroupper_pattern"
(and (match_code "unspec_volatile")
(match_test "XINT (op, 1) == UNSPECV_VZEROUPPER")))