* config/rs6000/altivec.md (build_vector_mask_for_load): Use MEM_P.
* config/rs6000/constraints.md (Q constraint): Use REG_P.
* config/rs6000/darwin.h (PREFERRED_RELOAD_CLASS): Use SYMBOL_REF_P.
* config/rs6000/freebsd64.h (ASM_OUTPUT_SPECIAL_POOL_ENTRY_P): Use
SYMBOL_REF_P, CONST_INT_P and CONST_DOUBLE_P.
* config/rs6000/linux64.h (ASM_OUTPUT_SPECIAL_POOL_ENTRY_P): Likewise.
* config/rs6000/predicates.md (altivec_register_operand, vint_operand,
vsx_register_operand, vsx_reg_sfsubreg_ok, vfloat_operand,
vlogical_operand, gpc_reg_operand, int_reg_operand,
int_reg_operand_not_pseudo): Use SUBREG_P and HARD_REGISTER_P.
(ca_operand, base_reg_operand, htm_spr_reg_operand, cc_reg_operand,
cc_reg_not_cr0_operand, input_operand): Use SUBREG_P.
(save_world_operation, restore_world_operation, lmw_operation,
stmw_operation): Use MEM_P and REG_P.
(tie_operand): Use MEM_P.
(vrsave_operation, crsave_operation): Use REG_P.
(mfcr_operation, mtcrf_operation): Use REG_P and CONST_INT_P.
(fpr_reg_operand): Use SUBREG_P and HARD_REGISTER_NUM_P.
(quad_int_reg_operand): Use HARD_REGISTER_NUM_P.
(call_operand): Use HARD_REGISTER_P.
(indexed_or_indirect_operand, altivec_indexed_or_indirect_operand):
Use CONST_INT_P.
(lwa_operand): Use SUBREG_P, REG_P and CONST_INT_P.
* config/rs6000/rs6000-p8swap.c (insn_is_load_p, insn_is_store_p,
quad_aligned_load_p, replace_swapped_aligned_store,
recombine_lvx_pattern, replace_swapped_aligned_load,
recombine_stvx_pattern): Use MEM_P.
(const_load_sequence_p, adjust_vperm, replace_swapped_load_constant):
Use MEM_P and SYMBOL_REF_P.
(rtx_is_swappable_p): Use REG_P and CONST_INT_P.
(insn_is_swappable_p): Use REG_P and MEM_P.
(insn_is_swap_p, alignment_mask): Use CONST_INT_P.
* config/rs6000/rs6000-string.c (expand_block_clear, expand_block_move):
Use CONST_INT_P.
* config/rs6000/rs6000.c (rs6000_secondary_reload, rs6000_emit_cmove):
Use CONST_DOUBLE_P.
(rs6000_output_move_128bit): Use CONST_DOUBLE_P, CONST_INT_P and
CONST_WIDE_INT_P.
(rs6000_legitimize_address): Use CONST_DOUBLE_P, CONST_INT_P,
CONST_WIDE_INT_P, REG_P and SYMBOL_REF_P.
(rs6000_emit_move): Use CONST_DOUBLE_P, CONST_INT_P, HARD_REGISTER_P,
HARD_REGISTER_NUM_P, MEM_P, REG_P, SUBREG_P, SYMBOL_REF_P and
reg_or_subregno.
(output_toc): Use CONST_DOUBLE_P, CONST_INT_P and SYMBOL_REF_P.
(easy_altivec_constant, rs6000_legitimate_offset_address_p,
rs6000_mode_dependent_address, rs6000_expand_mtfsf_builtin,
rs6000_expand_set_fpscr_rn_builtin, rs6000_expand_set_fpscr_drn_builtin,
rs6000_expand_unop_builtin, INT_P, rs6000_generate_compare,
rs6000_machopic_legitimize_pic_address, rs6000_split_logical_inner,
rs6000_split_logical_di): Use CONST_INT_P.
(rs6000_legitimize_reload_address): Use CONST_INT_P, HARD_REGISTER_P,
REG_P and SYMBOL_REF_P.
(setup_incoming_varargs, rs6000_rtx_costs): Use CONST_INT_P and MEM_P.
(print_operand): Use CONST_INT_P, MEM_P and REG_P.
(virtual_stack_registers_memory_p, rs6000_legitimate_address_p,
mems_ok_for_quad_peep): Use CONST_INT_P and REG_P.
(rs6000_secondary_reload_memory): Use CONST_INT_P and SUBREG_P.
(small_data_operand, print_operand_address): Use CONST_INT_P and
SYMBOL_REF_P.
(split_stack_arg_pointer_used_p): Use HARD_REGISTER_P.
(rs6000_init_hard_regno_mode_ok, direct_move_p):
Use HARD_REGISTER_NUM_P.
(rs6000_secondary_reload_gpr): Use HARD_REGISTER_NUM_P and MEM_P.
(rs6000_secondary_reload_class): Use HARD_REGISTER_NUM_P, REG_P,
SUBREG_P and SYMBOL_REF_P.
(register_to_reg_type, rs6000_secondary_reload_inner): Use SUBREG_P
and HARD_REGISTER_NUM_P.
(rs6000_adjust_vec_address): Use HARD_REGISTER_NUM_P and
reg_or_subregno.
(rs6000_adjust_cost, find_mem_ref): Use MEM_P.
(macho_lo_sum_memory_operand, rs6000_eliminate_indexed_memrefs): Use
MEM_P and REG_P.
(legitimate_indirect_address_p, legitimate_lo_sum_address_p,
registers_ok_for_quad_peep, rs6000_output_function_epilogue,
find_addr_reg): Use REG_P.
(altivec_expand_vec_perm_const): Use REG_P and SUBREG_P.
(rs6000_emit_le_vsx_move): Use SUBREG_P.
(offsettable_ok_by_alignment, constant_pool_expr_p,
legitimate_small_data_p, rs6000_output_dwarf_dtprel,
rs6000_delegitimize_address, rs6000_const_not_ok_for_debug_p,
rs6000_cannot_force_const_mem, rs6000_output_addr_const_extra,
rs6000_assemble_integer, create_TOC_reference,
rs6000_emit_allocate_stack, rs6000_xcoff_encode_section_info,
rs6000_call_aix): Use SYMBOL_REF_P.
(rs6000_split_vec_extract_var): Use reg_or_subregno.
* config/rs6000/rtems.h (ASM_OUTPUT_SPECIAL_POOL_ENTRY_P): Use
CONST_DOUBLE_P, CONST_INT_P and SYMBOL_REF_P.
* config/rs6000/sysv4.h (ASM_OUTPUT_SPECIAL_POOL_ENTRY_P): Likewise.
* config/rs6000/xcoff.h (ASM_OUTPUT_SPECIAL_POOL_ENTRY_P): Likewise.
* config/rs6000/rs6000.h (RS6000_SYMBOL_REF_TLS_P): Use SYMBOL_REF_P.
(REGNO_OK_FOR_INDEX_P, REGNO_OK_FOR_BASE_P): Use HARD_REGISTER_NUM_P.
(INT_REG_OK_FOR_INDEX_P, INT_REG_OK_FOR_BASE_P): Use HARD_REGISTER_P.
(CONSTANT_ADDRESS_P): Use CONST_INT_P and SYMBOL_REF_P.
* config/rs6000/rs6000.md (define_expands strlensi, mod<mode>3
and cbranch<mode>4): Use CONST_INT_P.
(multiple define_splits): Use REG_P and SUBREG_P.
(define_expands call, call_value): Use MEM_P.
(define_expands sibcall, sibcall_value): Use CONST_INT_P and MEM_P.
(define_insn *mtcrfsi): Use CONST_INT_P and REG_P.
* config/rs6000/vsx.md (*vsx_le_perm_load_<mode>,
*vsx_le_perm_load_v8hi, *vsx_le_perm_load_v16qi): Use HARD_REGISTER_P
and HARD_REGISTER_NUM_P.
(multiple define_splits): Use HARD_REGISTER_NUM_P.
From-SVN: r268253
2019-01-24 Peter Bergner <bergner@linux.ibm.com>
rtx addr;
rtx temp;
- gcc_assert (GET_CODE (operands[1]) == MEM);
+ gcc_assert (MEM_P (operands[1]));
addr = XEXP (operands[1], 0);
temp = gen_reg_rtx (GET_MODE (addr));
"Memory operand that is an offset from a register (it is usually better
to use @samp{m} or @samp{es} in @code{asm} statements)"
(and (match_code "mem")
- (match_test "GET_CODE (XEXP (op, 0)) == REG")))
+ (match_test "REG_P (XEXP (op, 0))")))
(define_memory_constraint "Y"
"memory operand for 8 byte and 16 byte gpr load/store"
((CONSTANT_P (X) \
&& reg_classes_intersect_p ((CLASS), FLOAT_REGS)) \
? NO_REGS \
- : ((GET_CODE (X) == SYMBOL_REF || GET_CODE (X) == HIGH) \
+ : ((SYMBOL_REF_P (X) || GET_CODE (X) == HIGH) \
&& reg_class_subset_p (BASE_REGS, (CLASS))) \
? BASE_REGS \
: (GET_MODE_CLASS (GET_MODE (X)) == MODE_INT \
#undef ASM_OUTPUT_SPECIAL_POOL_ENTRY_P
#define ASM_OUTPUT_SPECIAL_POOL_ENTRY_P(X, MODE) \
(TARGET_TOC \
- && (GET_CODE (X) == SYMBOL_REF \
+ && (SYMBOL_REF_P (X) \
|| (GET_CODE (X) == CONST && GET_CODE (XEXP (X, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF) \
+ && SYMBOL_REF_P (XEXP (XEXP (X, 0), 0))) \
|| GET_CODE (X) == LABEL_REF \
- || (GET_CODE (X) == CONST_INT \
+ || (CONST_INT_P (X) \
&& GET_MODE_BITSIZE (MODE) <= GET_MODE_BITSIZE (Pmode)) \
- || (GET_CODE (X) == CONST_DOUBLE \
+ || (CONST_DOUBLE_P (X) \
&& ((TARGET_64BIT \
&& (TARGET_MINIMAL_TOC \
|| (SCALAR_FLOAT_MODE_P (GET_MODE (X)) \
#undef ASM_OUTPUT_SPECIAL_POOL_ENTRY_P
#define ASM_OUTPUT_SPECIAL_POOL_ENTRY_P(X, MODE) \
(TARGET_TOC \
- && (GET_CODE (X) == SYMBOL_REF \
+ && (SYMBOL_REF_P (X) \
|| (GET_CODE (X) == CONST && GET_CODE (XEXP (X, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF) \
+ && SYMBOL_REF_P (XEXP (XEXP (X, 0), 0))) \
|| GET_CODE (X) == LABEL_REF \
- || (GET_CODE (X) == CONST_INT \
+ || (CONST_INT_P (X) \
&& TARGET_CMODEL != CMODEL_MEDIUM \
&& GET_MODE_BITSIZE (MODE) <= GET_MODE_BITSIZE (Pmode)) \
- || (GET_CODE (X) == CONST_DOUBLE \
+ || (CONST_DOUBLE_P (X) \
&& ((TARGET_64BIT \
&& (TARGET_MINIMAL_TOC \
|| (SCALAR_FLOAT_MODE_P (GET_MODE (X)) \
(define_predicate "altivec_register_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return ALTIVEC_REGNO_P (REGNO (op));
(define_predicate "vsx_register_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return VSX_REGNO_P (REGNO (op));
(define_predicate "vsx_reg_sfsubreg_ok"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return VSX_REGNO_P (REGNO (op));
(define_predicate "vfloat_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return VFLOAT_REGNO_P (REGNO (op));
(define_predicate "vint_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return VINT_REGNO_P (REGNO (op));
(define_predicate "vlogical_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return VLOGICAL_REGNO_P (REGNO (op));
(define_predicate "ca_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
(define_predicate "gpc_reg_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
if (TARGET_ALTIVEC && ALTIVEC_REGNO_P (REGNO (op)))
(define_predicate "int_reg_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
{
if (TARGET_NO_SF_SUBREG && sf_subreg_operand (op, mode))
return 0;
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 1;
return INT_REGNO_P (REGNO (op));
(define_predicate "int_reg_operand_not_pseudo"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
return 0;
- if (REGNO (op) >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_P (op))
return 0;
return INT_REGNO_P (REGNO (op));
(define_predicate "base_reg_operand"
(match_operand 0 "int_reg_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
{
HOST_WIDE_INT r;
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
return 0;
r = REGNO (op);
- if (r >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (r))
return 1;
return FP_REGNO_P (r);
if (!TARGET_HTM)
return 0;
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
return 0;
r = REGNO (op);
- if (r >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (r))
return 1;
return (INT_REGNO_P (r) && ((r & 1) == 0));
(define_predicate "cc_reg_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
(define_predicate "cc_reg_not_cr0_operand"
(match_operand 0 "register_operand")
{
- if (GET_CODE (op) == SUBREG)
+ if (SUBREG_P (op))
op = SUBREG_REG (op);
if (!REG_P (op))
op = XEXP (op, 0);
if (VECTOR_MEM_ALTIVEC_P (mode)
&& GET_CODE (op) == AND
- && GET_CODE (XEXP (op, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (op, 1))
&& INTVAL (XEXP (op, 1)) == -16)
op = XEXP (op, 0);
op = XEXP (op, 0);
if (VECTOR_MEM_ALTIVEC_OR_VSX_P (mode)
&& GET_CODE (op) == AND
- && GET_CODE (XEXP (op, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (op, 1))
&& INTVAL (XEXP (op, 1)) == -16)
return indexed_or_indirect_address (XEXP (op, 0), mode);
rtx inner, addr, offset;
inner = op;
- if (reload_completed && GET_CODE (inner) == SUBREG)
+ if (reload_completed && SUBREG_P (inner))
inner = SUBREG_REG (inner);
if (gpc_reg_operand (inner, mode))
&& !legitimate_indexed_address_p (XEXP (addr, 1), 0)))
return false;
if (GET_CODE (addr) == LO_SUM
- && GET_CODE (XEXP (addr, 0)) == REG
+ && REG_P (XEXP (addr, 0))
&& GET_CODE (XEXP (addr, 1)) == CONST)
addr = XEXP (XEXP (addr, 1), 0);
if (GET_CODE (addr) != PLUS)
return true;
offset = XEXP (addr, 1);
- if (GET_CODE (offset) != CONST_INT)
+ if (!CONST_INT_P (offset))
return true;
return INTVAL (offset) % 4 == 0;
})
(if_then_else (match_code "reg")
(match_test "REGNO (op) == LR_REGNO
|| REGNO (op) == CTR_REGNO
- || REGNO (op) >= FIRST_PSEUDO_REGISTER")
+ || !HARD_REGISTER_P (op)")
(match_code "symbol_ref")))
;; Return 1 if the operand, used inside a MEM, is a valid first argument
/* V.4 allows SYMBOL_REFs and CONSTs that are in the small data region
to be valid. */
if (DEFAULT_ABI == ABI_V4
- && (GET_CODE (op) == SYMBOL_REF || GET_CODE (op) == CONST)
+ && (SYMBOL_REF_P (op) || GET_CODE (op) == CONST)
&& small_data_operand (op, Pmode))
return 1;
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_DEST (elt)) != MEM
- || ! memory_operand (SET_DEST (elt), DFmode)
- || GET_CODE (SET_SRC (elt)) != REG
+ || !MEM_P (SET_DEST (elt))
+ || !memory_operand (SET_DEST (elt), DFmode)
+ || !REG_P (SET_SRC (elt))
|| GET_MODE (SET_SRC (elt)) != DFmode)
return 0;
}
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_DEST (elt)) != MEM
- || GET_CODE (SET_SRC (elt)) != REG
+ || !MEM_P (SET_DEST (elt))
+ || !REG_P (SET_SRC (elt))
|| GET_MODE (SET_SRC (elt)) != V4SImode)
return 0;
}
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_DEST (elt)) != MEM
- || ! memory_operand (SET_DEST (elt), Pmode)
- || GET_CODE (SET_SRC (elt)) != REG
+ || !MEM_P (SET_DEST (elt))
+ || !memory_operand (SET_DEST (elt), Pmode)
+ || !REG_P (SET_SRC (elt))
|| GET_MODE (SET_SRC (elt)) != Pmode)
return 0;
}
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_DEST (elt)) != MEM
- || ! memory_operand (SET_DEST (elt), Pmode)
- || GET_CODE (SET_SRC (elt)) != REG
+ || !MEM_P (SET_DEST (elt))
+ || !memory_operand (SET_DEST (elt), Pmode)
+ || !REG_P (SET_SRC (elt))
|| REGNO (SET_SRC (elt)) != CR2_REGNO
|| GET_MODE (SET_SRC (elt)) != Pmode)
return 0;
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_SRC (elt)) != MEM
- || ! memory_operand (SET_SRC (elt), Pmode)
- || GET_CODE (SET_DEST (elt)) != REG
+ || !MEM_P (SET_SRC (elt))
+ || !memory_operand (SET_SRC (elt), Pmode)
+ || !REG_P (SET_DEST (elt))
|| REGNO (SET_DEST (elt)) != CR2_REGNO
|| GET_MODE (SET_DEST (elt)) != Pmode)
return 0;
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_SRC (elt)) != MEM
- || ! memory_operand (SET_SRC (elt), Pmode)
- || GET_CODE (SET_DEST (elt)) != REG
+ || !MEM_P (SET_SRC (elt))
+ || !memory_operand (SET_SRC (elt), Pmode)
+ || !REG_P (SET_DEST (elt))
|| GET_MODE (SET_DEST (elt)) != Pmode)
return 0;
}
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_SRC (elt)) != MEM
- || GET_CODE (SET_DEST (elt)) != REG
+ || !MEM_P (SET_SRC (elt))
+ || !REG_P (SET_DEST (elt))
|| GET_MODE (SET_DEST (elt)) != V4SImode)
return 0;
}
{
elt = XVECEXP (op, 0, index++);
if (GET_CODE (elt) != SET
- || GET_CODE (SET_SRC (elt)) != MEM
- || ! memory_operand (SET_SRC (elt), DFmode)
- || GET_CODE (SET_DEST (elt)) != REG
+ || !MEM_P (SET_SRC (elt))
+ || !memory_operand (SET_SRC (elt), DFmode)
+ || !REG_P (SET_DEST (elt))
|| GET_MODE (SET_DEST (elt)) != DFmode)
return 0;
}
if (count <= 1
|| GET_CODE (XVECEXP (op, 0, 0)) != SET
- || GET_CODE (SET_DEST (XVECEXP (op, 0, 0))) != REG
+ || !REG_P (SET_DEST (XVECEXP (op, 0, 0)))
|| GET_CODE (SET_SRC (XVECEXP (op, 0, 0))) != UNSPEC_VOLATILE
|| XINT (SET_SRC (XVECEXP (op, 0, 0)), 1) != UNSPECV_SET_VRSAVE)
return 0;
src_reg = XVECEXP (SET_SRC (exp), 0, 0);
- if (GET_CODE (src_reg) != REG
+ if (!REG_P (src_reg)
|| GET_MODE (src_reg) != CCmode
|| ! CR_REGNO_P (REGNO (src_reg)))
return 0;
if (GET_CODE (exp) != SET
- || GET_CODE (SET_DEST (exp)) != REG
+ || !REG_P (SET_DEST (exp))
|| GET_MODE (SET_DEST (exp)) != SImode
|| ! INT_REGNO_P (REGNO (SET_DEST (exp))))
return 0;
|| XINT (unspec, 1) != UNSPEC_MOVESI_FROM_CR
|| XVECLEN (unspec, 0) != 2
|| XVECEXP (unspec, 0, 0) != src_reg
- || GET_CODE (XVECEXP (unspec, 0, 1)) != CONST_INT
+ || !CONST_INT_P (XVECEXP (unspec, 0, 1))
|| INTVAL (XVECEXP (unspec, 0, 1)) != maskval)
return 0;
}
return 0;
src_reg = XVECEXP (SET_SRC (XVECEXP (op, 0, 0)), 0, 0);
- if (GET_CODE (src_reg) != REG
+ if (!REG_P (src_reg)
|| GET_MODE (src_reg) != SImode
|| ! INT_REGNO_P (REGNO (src_reg)))
return 0;
int maskval;
if (GET_CODE (exp) != SET
- || GET_CODE (SET_DEST (exp)) != REG
+ || !REG_P (SET_DEST (exp))
|| GET_MODE (SET_DEST (exp)) != CCmode
|| ! CR_REGNO_P (REGNO (SET_DEST (exp))))
return 0;
|| XINT (unspec, 1) != UNSPEC_MOVESI_TO_CR
|| XVECLEN (unspec, 0) != 2
|| XVECEXP (unspec, 0, 0) != src_reg
- || GET_CODE (XVECEXP (unspec, 0, 1)) != CONST_INT
+ || !CONST_INT_P (XVECEXP (unspec, 0, 1))
|| INTVAL (XVECEXP (unspec, 0, 1)) != maskval)
return 0;
}
rtx exp = XVECEXP (op, 0, i);
if (GET_CODE (exp) != USE
- || GET_CODE (XEXP (exp, 0)) != REG
+ || !REG_P (XEXP (exp, 0))
|| GET_MODE (XEXP (exp, 0)) != CCmode
|| ! CR_REGNO_P (REGNO (XEXP (exp, 0))))
return 0;
/* Perform a quick check so we don't blow up below. */
if (count <= 1
|| GET_CODE (XVECEXP (op, 0, 0)) != SET
- || GET_CODE (SET_DEST (XVECEXP (op, 0, 0))) != REG
- || GET_CODE (SET_SRC (XVECEXP (op, 0, 0))) != MEM)
+ || !REG_P (SET_DEST (XVECEXP (op, 0, 0)))
+ || !MEM_P (SET_SRC (XVECEXP (op, 0, 0))))
return 0;
dest_regno = REGNO (SET_DEST (XVECEXP (op, 0, 0)));
HOST_WIDE_INT newoffset;
if (GET_CODE (elt) != SET
- || GET_CODE (SET_DEST (elt)) != REG
+ || !REG_P (SET_DEST (elt))
|| GET_MODE (SET_DEST (elt)) != SImode
|| REGNO (SET_DEST (elt)) != dest_regno + i
- || GET_CODE (SET_SRC (elt)) != MEM
+ || !MEM_P (SET_SRC (elt))
|| GET_MODE (SET_SRC (elt)) != SImode)
return 0;
newaddr = XEXP (SET_SRC (elt), 0);
/* Perform a quick check so we don't blow up below. */
if (count <= 1
|| GET_CODE (XVECEXP (op, 0, 0)) != SET
- || GET_CODE (SET_DEST (XVECEXP (op, 0, 0))) != MEM
- || GET_CODE (SET_SRC (XVECEXP (op, 0, 0))) != REG)
+ || !MEM_P (SET_DEST (XVECEXP (op, 0, 0)))
+ || !REG_P (SET_SRC (XVECEXP (op, 0, 0))))
return 0;
src_regno = REGNO (SET_SRC (XVECEXP (op, 0, 0)));
HOST_WIDE_INT newoffset;
if (GET_CODE (elt) != SET
- || GET_CODE (SET_SRC (elt)) != REG
+ || !REG_P (SET_SRC (elt))
|| GET_MODE (SET_SRC (elt)) != SImode
|| REGNO (SET_SRC (elt)) != src_regno + i
- || GET_CODE (SET_DEST (elt)) != MEM
+ || !MEM_P (SET_DEST (elt))
|| GET_MODE (SET_DEST (elt)) != SImode)
return 0;
newaddr = XEXP (SET_DEST (elt), 0);
(match_code "parallel")
{
return (GET_CODE (XVECEXP (op, 0, 0)) == SET
- && GET_CODE (XEXP (XVECEXP (op, 0, 0), 0)) == MEM
+ && MEM_P (XEXP (XVECEXP (op, 0, 0), 0))
&& GET_MODE (XEXP (XVECEXP (op, 0, 0), 0)) == BLKmode
&& XEXP (XVECEXP (op, 0, 0), 1) == const0_rtx);
})
if (GET_CODE (body) == SET)
{
- if (GET_CODE (SET_SRC (body)) == MEM)
+ if (MEM_P (SET_SRC (body)))
return 1;
if (GET_CODE (SET_SRC (body)) == VEC_SELECT
- && GET_CODE (XEXP (SET_SRC (body), 0)) == MEM)
+ && MEM_P (XEXP (SET_SRC (body), 0)))
return 1;
return 0;
rtx set = XVECEXP (body, 0, 0);
- if (GET_CODE (set) == SET && GET_CODE (SET_SRC (set)) == MEM)
+ if (GET_CODE (set) == SET && MEM_P (SET_SRC (set)))
return 1;
return 0;
insn_is_store_p (rtx insn)
{
rtx body = PATTERN (insn);
- if (GET_CODE (body) == SET && GET_CODE (SET_DEST (body)) == MEM)
+ if (GET_CODE (body) == SET && MEM_P (SET_DEST (body)))
return 1;
if (GET_CODE (body) != PARALLEL)
return 0;
rtx set = XVECEXP (body, 0, 0);
- if (GET_CODE (set) == SET && GET_CODE (SET_DEST (set)) == MEM)
+ if (GET_CODE (set) == SET && MEM_P (SET_DEST (set)))
return 1;
return 0;
}
for (unsigned int i = 0; i < len / 2; ++i)
{
rtx op = XVECEXP (parallel, 0, i);
- if (GET_CODE (op) != CONST_INT || INTVAL (op) != len / 2 + i)
+ if (!CONST_INT_P (op) || INTVAL (op) != len / 2 + i)
return 0;
}
for (unsigned int i = len / 2; i < len; ++i)
{
rtx op = XVECEXP (parallel, 0, i);
- if (GET_CODE (op) != CONST_INT || INTVAL (op) != i - len / 2)
+ if (!CONST_INT_P (op) || INTVAL (op) != i - len / 2)
return 0;
}
return 1;
rtx body = PATTERN (def_insn);
if (GET_CODE (body) != SET
|| GET_CODE (SET_SRC (body)) != VEC_SELECT
- || GET_CODE (XEXP (SET_SRC (body), 0)) != MEM)
+ || !MEM_P (XEXP (SET_SRC (body), 0)))
return false;
rtx mem = XEXP (SET_SRC (body), 0);
rtx body = PATTERN (def_insn);
if (GET_CODE (body) != SET
|| GET_CODE (SET_SRC (body)) != VEC_SELECT
- || GET_CODE (XEXP (SET_SRC (body), 0)) != MEM)
+ || !MEM_P (XEXP (SET_SRC (body), 0)))
return false;
rtx mem = XEXP (SET_SRC (body), 0);
/* There is an extra level of indirection for small/large
code models. */
rtx tocrel_expr = SET_SRC (tocrel_body);
- if (GET_CODE (tocrel_expr) == MEM)
+ if (MEM_P (tocrel_expr))
tocrel_expr = XEXP (tocrel_expr, 0);
if (!toc_relative_expr_p (tocrel_expr, false, &tocrel_base, NULL))
return false;
split_const (XVECEXP (tocrel_base, 0, 0), &base, &offset);
- if (GET_CODE (base) != SYMBOL_REF || !CONSTANT_POOL_ADDRESS_P (base))
+ if (!SYMBOL_REF_P (base) || !CONSTANT_POOL_ADDRESS_P (base))
return false;
else
{
/* FIXME: The conditions under which
- ((GET_CODE (const_vector) == SYMBOL_REF) &&
- !CONSTANT_POOL_ADDRESS_P (const_vector))
+ (SYMBOL_REF_P (const_vector)
+ && !CONSTANT_POOL_ADDRESS_P (const_vector))
are not well understood. This code prevents
an internal compiler error which will occur in
replace_swapped_load_constant () if we were to return
replace_swapped_load_constant () and then we can
remove this special test. */
rtx const_vector = get_pool_constant (base);
- if (GET_CODE (const_vector) == SYMBOL_REF
+ if (SYMBOL_REF_P (const_vector)
&& CONSTANT_POOL_ADDRESS_P (const_vector))
const_vector = get_pool_constant (const_vector);
if (GET_CODE (const_vector) != CONST_VECTOR)
and XEXP (op, 1) is a PARALLEL with a single QImode const int,
it represents a vector splat for which we can do special
handling. */
- if (GET_CODE (XEXP (op, 0)) == CONST_INT)
+ if (CONST_INT_P (XEXP (op, 0)))
return 1;
else if (REG_P (XEXP (op, 0))
&& GET_MODE_INNER (GET_MODE (op)) == GET_MODE (XEXP (op, 0)))
case VEC_SELECT:
/* A vec_extract operation is ok if we change the lane. */
- if (GET_CODE (XEXP (op, 0)) == REG
+ if (REG_P (XEXP (op, 0))
&& GET_MODE_INNER (GET_MODE (XEXP (op, 0))) == GET_MODE (op)
&& GET_CODE ((parallel = XEXP (op, 1))) == PARALLEL
&& XVECLEN (parallel, 0) == 1
- && GET_CODE (XVECEXP (parallel, 0, 0)) == CONST_INT)
+ && CONST_INT_P (XVECEXP (parallel, 0, 0)))
{
*special = SH_EXTRACT;
return 1;
|| GET_MODE (XEXP (op, 0)) == V4DImode)
&& GET_CODE ((parallel = XEXP (op, 1))) == PARALLEL
&& XVECLEN (parallel, 0) == 2
- && GET_CODE (XVECEXP (parallel, 0, 0)) == CONST_INT
- && GET_CODE (XVECEXP (parallel, 0, 1)) == CONST_INT)
+ && CONST_INT_P (XVECEXP (parallel, 0, 0))
+ && CONST_INT_P (XVECEXP (parallel, 0, 1)))
{
*special = SH_XXPERMDI;
return 1;
rtx rhs = SET_SRC (body);
/* Even without a swap, the RHS might be a vec_select for, say,
a byte-reversing load. */
- if (GET_CODE (rhs) != MEM)
+ if (!MEM_P (rhs))
return 0;
if (GET_CODE (XEXP (rhs, 0)) == AND)
return 0;
rtx lhs = SET_DEST (body);
/* Even without a swap, the RHS might be a vec_select for, say,
a byte-reversing store. */
- if (GET_CODE (lhs) != MEM)
+ if (!MEM_P (lhs))
return 0;
if (GET_CODE (XEXP (lhs, 0)) == AND)
return 0;
&& GET_CODE (SET_SRC (body)) == UNSPEC
&& XINT (SET_SRC (body), 1) == UNSPEC_VPERM
&& XVECLEN (SET_SRC (body), 0) == 3
- && GET_CODE (XVECEXP (SET_SRC (body), 0, 2)) == REG)
+ && REG_P (XVECEXP (SET_SRC (body), 0, 2)))
{
rtx mask_reg = XVECEXP (SET_SRC (body), 0, 2);
struct df_insn_info *insn_info = DF_INSN_INFO_GET (insn);
const_rtx tocrel_base;
rtx tocrel_expr = SET_SRC (PATTERN (tocrel_insn));
/* There is an extra level of indirection for small/large code models. */
- if (GET_CODE (tocrel_expr) == MEM)
+ if (MEM_P (tocrel_expr))
tocrel_expr = XEXP (tocrel_expr, 0);
if (!toc_relative_expr_p (tocrel_expr, false, &tocrel_base, NULL))
gcc_unreachable ();
/* With the extra indirection, get_pool_constant will produce the
real constant from the reg_equal expression, so get the real
constant. */
- if (GET_CODE (const_vector) == SYMBOL_REF)
+ if (SYMBOL_REF_P (const_vector))
const_vector = get_pool_constant (const_vector);
gcc_assert (GET_CODE (const_vector) == CONST_VECTOR);
rtx new_body = PATTERN (new_insn);
gcc_assert ((GET_CODE (new_body) == SET)
- && (GET_CODE (SET_DEST (new_body)) == MEM));
+ && MEM_P (SET_DEST (new_body)));
set_block_for_insn (new_insn, BLOCK_FOR_INSN (store_insn));
df_insn_rescan (new_insn);
rtx body = PATTERN (def_insn);
gcc_assert ((GET_CODE (body) == SET)
&& (GET_CODE (SET_SRC (body)) == VEC_SELECT)
- && (GET_CODE (XEXP (SET_SRC (body), 0)) == MEM));
+ && MEM_P (XEXP (SET_SRC (body), 0)));
rtx src_exp = XEXP (SET_SRC (body), 0);
enum machine_mode mode = GET_MODE (src_exp);
rtx new_body = PATTERN (new_insn);
gcc_assert ((GET_CODE (new_body) == SET)
- && (GET_CODE (SET_SRC (new_body)) == MEM));
+ && MEM_P (SET_SRC (new_body)));
set_block_for_insn (new_insn, BLOCK_FOR_INSN (def_insn));
df_insn_rescan (new_insn);
const_rtx tocrel_base;
/* There is an extra level of indirection for small/large code models. */
- if (GET_CODE (tocrel_expr) == MEM)
+ if (MEM_P (tocrel_expr))
tocrel_expr = XEXP (tocrel_expr, 0);
if (!toc_relative_expr_p (tocrel_expr, false, &tocrel_base, NULL))
/* With the extra indirection, get_pool_constant will produce the
real constant from the reg_equal expression, so get the real
constant. */
- if (GET_CODE (const_vector) == SYMBOL_REF)
+ if (SYMBOL_REF_P (const_vector))
const_vector = get_pool_constant (const_vector);
gcc_assert (GET_CODE (const_vector) == CONST_VECTOR);
rtx mask = XEXP (SET_SRC (body), 1);
- if (GET_CODE (mask) == CONST_INT)
+ if (CONST_INT_P (mask))
{
if (INTVAL (mask) == -16)
return alignment_with_canonical_addr (SET_SRC (body));
real_mask = SET_SRC (const_body);
- if (GET_CODE (real_mask) != CONST_INT
+ if (!CONST_INT_P (real_mask)
|| INTVAL (real_mask) != -16)
return 0;
}
rtx body = PATTERN (insn);
gcc_assert (GET_CODE (body) == SET
&& GET_CODE (SET_SRC (body)) == VEC_SELECT
- && GET_CODE (XEXP (SET_SRC (body), 0)) == MEM);
+ && MEM_P (XEXP (SET_SRC (body), 0)));
rtx mem = XEXP (SET_SRC (body), 0);
rtx base_reg = XEXP (mem, 0);
{
rtx body = PATTERN (insn);
gcc_assert (GET_CODE (body) == SET
- && GET_CODE (SET_DEST (body)) == MEM
+ && MEM_P (SET_DEST (body))
&& GET_CODE (SET_SRC (body)) == VEC_SELECT);
rtx mem = SET_DEST (body);
rtx base_reg = XEXP (mem, 0);
rtx orig_dest = operands[0];
rtx bytes_rtx = operands[1];
rtx align_rtx = operands[3];
- bool constp = (GET_CODE (bytes_rtx) == CONST_INT);
+ bool constp = CONST_INT_P (bytes_rtx);
HOST_WIDE_INT align;
HOST_WIDE_INT bytes;
int offset;
return 0;
/* This must be a fixed size alignment */
- gcc_assert (GET_CODE (align_rtx) == CONST_INT);
+ gcc_assert (CONST_INT_P (align_rtx));
align = INTVAL (align_rtx) * BITS_PER_UNIT;
/* Anything to clear? */
reload, not one per store. */
addr = XEXP (orig_dest, 0);
if ((GET_CODE (addr) == PLUS || GET_CODE (addr) == LO_SUM)
- && GET_CODE (XEXP (addr, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (addr, 1))
&& (INTVAL (XEXP (addr, 1)) & 3) != 0)
{
addr = copy_addr_to_reg (addr);
rtx orig_src = operands[1];
rtx bytes_rtx = operands[2];
rtx align_rtx = operands[3];
- int constp = (GET_CODE (bytes_rtx) == CONST_INT);
+ int constp = CONST_INT_P (bytes_rtx);
int align;
int bytes;
int offset;
return 0;
/* This must be a fixed size alignment */
- gcc_assert (GET_CODE (align_rtx) == CONST_INT);
+ gcc_assert (CONST_INT_P (align_rtx));
align = INTVAL (align_rtx) * BITS_PER_UNIT;
/* Anything to move? */
reload, not one per load and/or store. */
addr = XEXP (orig_dest, 0);
if ((GET_CODE (addr) == PLUS || GET_CODE (addr) == LO_SUM)
- && GET_CODE (XEXP (addr, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (addr, 1))
&& (INTVAL (XEXP (addr, 1)) & 3) != 0)
{
addr = copy_addr_to_reg (addr);
}
addr = XEXP (orig_src, 0);
if ((GET_CODE (addr) == PLUS || GET_CODE (addr) == LO_SUM)
- && GET_CODE (XEXP (addr, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (addr, 1))
&& (INTVAL (XEXP (addr, 1)) & 3) != 0)
{
addr = copy_addr_to_reg (addr);
for (r = 32; r < 64; ++r)
rs6000_regno_regclass[r] = FLOAT_REGS;
- for (r = 64; r < FIRST_PSEUDO_REGISTER; ++r)
+ for (r = 64; HARD_REGISTER_NUM_P (r); ++r)
rs6000_regno_regclass[r] = NO_REGS;
for (r = FIRST_ALTIVEC_REGNO; r <= LAST_ALTIVEC_REGNO; ++r)
}
/* Precalculate HARD_REGNO_NREGS. */
- for (r = 0; r < FIRST_PSEUDO_REGISTER; ++r)
+ for (r = 0; HARD_REGISTER_NUM_P (r); ++r)
for (m = 0; m < NUM_MACHINE_MODES; ++m)
rs6000_hard_regno_nregs[m][r]
= rs6000_hard_regno_nregs_internal (r, (machine_mode)m);
/* Precalculate TARGET_HARD_REGNO_MODE_OK. */
- for (r = 0; r < FIRST_PSEUDO_REGISTER; ++r)
+ for (r = 0; HARD_REGISTER_NUM_P (r); ++r)
for (m = 0; m < NUM_MACHINE_MODES; ++m)
if (rs6000_hard_regno_mode_ok_uncached (r, (machine_mode)m))
rs6000_hard_regno_mode_ok_p[m][r] = true;
else if (mode == V2DImode)
{
- if (GET_CODE (CONST_VECTOR_ELT (op, 0)) != CONST_INT
- || GET_CODE (CONST_VECTOR_ELT (op, 1)) != CONST_INT)
+ if (!CONST_INT_P (CONST_VECTOR_ELT (op, 0))
+ || !CONST_INT_P (CONST_VECTOR_ELT (op, 1)))
return false;
if (zero_constant (op, mode))
emit_move_insn (target, adjust_address_nv (mem, inner_mode, 0));
}
-/* Helper function to return the register number of a RTX. */
-static inline int
-regno_or_subregno (rtx op)
-{
- if (REG_P (op))
- return REGNO (op);
- else if (SUBREG_P (op))
- return subreg_regno (op);
- else
- gcc_unreachable ();
-}
-
/* Adjust a memory address (MEM) of a vector type to point to a scalar field
within the vector (ELEMENT) with a mode (SCALAR_MODE). Use a base register
temporary (BASE_TMP) to fixup the address. Return the new memory address
{
rtx op1 = XEXP (new_addr, 1);
addr_mask_type addr_mask;
- int scalar_regno = regno_or_subregno (scalar_reg);
+ unsigned int scalar_regno = reg_or_subregno (scalar_reg);
- gcc_assert (scalar_regno < FIRST_PSEUDO_REGISTER);
+ gcc_assert (HARD_REGISTER_NUM_P (scalar_regno));
if (INT_REGNO_P (scalar_regno))
addr_mask = reg_addr[scalar_mode].addr_mask[RELOAD_REG_GPR];
{
int bit_shift = byte_shift + 3;
rtx element2;
- int dest_regno = regno_or_subregno (dest);
- int src_regno = regno_or_subregno (src);
- int element_regno = regno_or_subregno (element);
+ unsigned int dest_regno = reg_or_subregno (dest);
+ unsigned int src_regno = reg_or_subregno (src);
+ unsigned int element_regno = reg_or_subregno (element);
gcc_assert (REG_P (tmp_gpr));
if (DEFAULT_ABI != ABI_V4)
return 0;
- if (GET_CODE (op) == SYMBOL_REF)
+ if (SYMBOL_REF_P (op))
sym_ref = op;
else if (GET_CODE (op) != CONST
|| GET_CODE (XEXP (op, 0)) != PLUS
- || GET_CODE (XEXP (XEXP (op, 0), 0)) != SYMBOL_REF
- || GET_CODE (XEXP (XEXP (op, 0), 1)) != CONST_INT)
+ || !SYMBOL_REF_P (XEXP (XEXP (op, 0), 0))
+ || !CONST_INT_P (XEXP (XEXP (op, 0), 1)))
return 0;
else
regno0 = REGNO (op0);
regno1 = REGNO (op1);
- if (regno0 >= FIRST_PSEUDO_REGISTER || regno1 >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno0) || !HARD_REGISTER_NUM_P (regno1))
return false;
if (INT_REGNO_P (regno0))
{
int regnum;
- if (GET_CODE (op) == REG)
+ if (REG_P (op))
regnum = REGNO (op);
else if (GET_CODE (op) == PLUS
- && GET_CODE (XEXP (op, 0)) == REG
- && GET_CODE (XEXP (op, 1)) == CONST_INT)
+ && REG_P (XEXP (op, 0))
+ && CONST_INT_P (XEXP (op, 1)))
regnum = REGNO (XEXP (op, 0));
else
tree decl;
unsigned HOST_WIDE_INT dsize, dalign, lsb, mask;
- if (GET_CODE (op) != SYMBOL_REF)
+ if (!SYMBOL_REF_P (op))
return false;
/* ISA 3.0 vector d-form addressing is restricted, don't allow
rtx base, offset;
split_const (op, &base, &offset);
- return (GET_CODE (base) == SYMBOL_REF
+ return (SYMBOL_REF_P (base)
&& CONSTANT_POOL_ADDRESS_P (base)
&& ASM_OUTPUT_SPECIAL_POOL_ENTRY_P (get_pool_constant (base), Pmode));
}
{
return (DEFAULT_ABI == ABI_V4
&& !flag_pic && !TARGET_TOC
- && (GET_CODE (x) == SYMBOL_REF || GET_CODE (x) == CONST)
+ && (SYMBOL_REF_P (x) || GET_CODE (x) == CONST)
&& small_data_operand (x, mode));
}
return virtual_stack_registers_memory_p (x);
if (legitimate_constant_pool_address_p (x, mode, strict || lra_in_progress))
return true;
- if (GET_CODE (XEXP (x, 1)) != CONST_INT)
+ if (!CONST_INT_P (XEXP (x, 1)))
return false;
offset = INTVAL (XEXP (x, 1));
bool
legitimate_indirect_address_p (rtx x, int strict)
{
- return GET_CODE (x) == REG && INT_REG_OK_FOR_BASE_P (x, strict);
+ return REG_P (x) && INT_REG_OK_FOR_BASE_P (x, strict);
}
bool
macho_lo_sum_memory_operand (rtx x, machine_mode mode)
{
if (!TARGET_MACHO || !flag_pic
- || mode != SImode || GET_CODE (x) != MEM)
+ || mode != SImode || !MEM_P (x))
return false;
x = XEXP (x, 0);
if (GET_CODE (x) != LO_SUM)
return false;
- if (GET_CODE (XEXP (x, 0)) != REG)
+ if (!REG_P (XEXP (x, 0)))
return false;
if (!INT_REG_OK_FOR_BASE_P (XEXP (x, 0), 0))
return false;
{
if (GET_CODE (x) != LO_SUM)
return false;
- if (GET_CODE (XEXP (x, 0)) != REG)
+ if (!REG_P (XEXP (x, 0)))
return false;
if (!INT_REG_OK_FOR_BASE_P (XEXP (x, 0), strict))
return false;
else
return force_reg (Pmode, x);
}
- if (GET_CODE (x) == SYMBOL_REF)
+ if (SYMBOL_REF_P (x))
{
enum tls_model model = SYMBOL_REF_TLS_MODEL (x);
if (model != 0)
}
if (GET_CODE (x) == PLUS
- && GET_CODE (XEXP (x, 0)) == REG
- && GET_CODE (XEXP (x, 1)) == CONST_INT
+ && REG_P (XEXP (x, 0))
+ && CONST_INT_P (XEXP (x, 1))
&& ((unsigned HOST_WIDE_INT) (INTVAL (XEXP (x, 1)) + 0x8000)
>= 0x10000 - extra))
{
return plus_constant (Pmode, sum, low_int);
}
else if (GET_CODE (x) == PLUS
- && GET_CODE (XEXP (x, 0)) == REG
- && GET_CODE (XEXP (x, 1)) != CONST_INT
+ && REG_P (XEXP (x, 0))
+ && !CONST_INT_P (XEXP (x, 1))
&& GET_MODE_NUNITS (mode) == 1
&& (GET_MODE_SIZE (mode) <= UNITS_PER_WORD
|| (/* ??? Assume floating point reg based on mode? */
)
&& TARGET_32BIT
&& TARGET_NO_TOC
- && ! flag_pic
- && GET_CODE (x) != CONST_INT
- && GET_CODE (x) != CONST_WIDE_INT
- && GET_CODE (x) != CONST_DOUBLE
+ && !flag_pic
+ && !CONST_INT_P (x)
+ && !CONST_WIDE_INT_P (x)
+ && !CONST_DOUBLE_P (x)
&& CONSTANT_P (x)
&& GET_MODE_NUNITS (mode) == 1
&& (GET_MODE_SIZE (mode) <= UNITS_PER_WORD
return gen_rtx_LO_SUM (Pmode, reg, x);
}
else if (TARGET_TOC
- && GET_CODE (x) == SYMBOL_REF
+ && SYMBOL_REF_P (x)
&& constant_pool_expr_p (x)
&& ASM_OUTPUT_SPECIAL_POOL_ENTRY_P (get_pool_constant (x), Pmode))
return create_TOC_reference (x, NULL_RTX);
output_addr_const (file, x);
if (TARGET_ELF)
fputs ("@dtprel+0x8000", file);
- else if (TARGET_XCOFF && GET_CODE (x) == SYMBOL_REF)
+ else if (TARGET_XCOFF && SYMBOL_REF_P (x))
{
switch (SYMBOL_REF_TLS_MODEL (x))
{
static bool
rs6000_real_tls_symbol_ref_p (rtx x)
{
- return (GET_CODE (x) == SYMBOL_REF
+ return (SYMBOL_REF_P (x)
&& SYMBOL_REF_TLS_MODEL (x) >= TLS_MODEL_REAL);
}
/* Do not associate thread-local symbols with the original
constant pool symbol. */
if (TARGET_XCOFF
- && GET_CODE (y) == SYMBOL_REF
+ && SYMBOL_REF_P (y)
&& CONSTANT_POOL_ADDRESS_P (y)
&& rs6000_real_tls_symbol_ref_p (get_pool_constant (y)))
return orig_x;
{
if (GET_CODE (x) == UNSPEC)
return true;
- if (GET_CODE (x) == SYMBOL_REF
+ if (SYMBOL_REF_P (x)
&& CONSTANT_POOL_ADDRESS_P (x))
{
rtx c = get_pool_constant (x);
/* A TLS symbol in the TOC cannot contain a sum. */
if (GET_CODE (x) == CONST
&& GET_CODE (XEXP (x, 0)) == PLUS
- && GET_CODE (XEXP (XEXP (x, 0), 0)) == SYMBOL_REF
+ && SYMBOL_REF_P (XEXP (XEXP (x, 0), 0))
&& SYMBOL_REF_TLS_MODEL (XEXP (XEXP (x, 0), 0)) != 0)
return true;
/* We must recognize output that we have already generated ourselves. */
if (GET_CODE (x) == PLUS
&& GET_CODE (XEXP (x, 0)) == PLUS
- && GET_CODE (XEXP (XEXP (x, 0), 0)) == REG
- && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT
- && GET_CODE (XEXP (x, 1)) == CONST_INT)
+ && REG_P (XEXP (XEXP (x, 0), 0))
+ && CONST_INT_P (XEXP (XEXP (x, 0), 1))
+ && CONST_INT_P (XEXP (x, 1)))
{
if (TARGET_DEBUG_ADDR)
{
if (GET_CODE (x) == PLUS
&& REG_P (XEXP (x, 0))
- && REGNO (XEXP (x, 0)) < FIRST_PSEUDO_REGISTER
+ && HARD_REGISTER_P (XEXP (x, 0))
&& INT_REG_OK_FOR_BASE_P (XEXP (x, 0), 1)
&& CONST_INT_P (XEXP (x, 1))
&& reg_offset_p
return x;
}
- if (GET_CODE (x) == SYMBOL_REF
+ if (SYMBOL_REF_P (x)
&& reg_offset_p
&& !quad_offset_p
&& (!VECTOR_MODE_P (mode) || VECTOR_MEM_NONE_P (mode))
if (VECTOR_MEM_ALTIVEC_P (mode)
&& GET_CODE (x) == AND
&& GET_CODE (XEXP (x, 0)) == PLUS
- && GET_CODE (XEXP (XEXP (x, 0), 0)) == REG
- && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT
- && GET_CODE (XEXP (x, 1)) == CONST_INT
+ && REG_P (XEXP (XEXP (x, 0), 0))
+ && CONST_INT_P (XEXP (XEXP (x, 0), 1))
+ && CONST_INT_P (XEXP (x, 1))
&& INTVAL (XEXP (x, 1)) == -16)
{
x = XEXP (x, 0);
if (TARGET_TOC
&& reg_offset_p
&& !quad_offset_p
- && GET_CODE (x) == SYMBOL_REF
+ && SYMBOL_REF_P (x)
&& use_toc_relative_ref (x, mode))
{
x = create_TOC_reference (x, NULL_RTX);
/* If this is an unaligned stvx/ldvx type address, discard the outer AND. */
if (VECTOR_MEM_ALTIVEC_P (mode)
&& GET_CODE (x) == AND
- && GET_CODE (XEXP (x, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (x, 1))
&& INTVAL (XEXP (x, 1)) == -16)
x = XEXP (x, 0);
if (! reg_ok_strict
&& reg_offset_p
&& GET_CODE (x) == PLUS
- && GET_CODE (XEXP (x, 0)) == REG
+ && REG_P (XEXP (x, 0))
&& (XEXP (x, 0) == virtual_stack_vars_rtx
|| XEXP (x, 0) == arg_pointer_rtx)
- && GET_CODE (XEXP (x, 1)) == CONST_INT)
+ && CONST_INT_P (XEXP (x, 1)))
return 1;
if (rs6000_legitimate_offset_address_p (mode, x, reg_ok_strict, false))
return 1;
been rejected as illegitimate. */
if (XEXP (addr, 0) != virtual_stack_vars_rtx
&& XEXP (addr, 0) != arg_pointer_rtx
- && GET_CODE (XEXP (addr, 1)) == CONST_INT)
+ && CONST_INT_P (XEXP (addr, 1)))
{
unsigned HOST_WIDE_INT val = INTVAL (XEXP (addr, 1));
return val + 0x8000 >= 0x10000 - (TARGET_POWERPC64 ? 8 : 12);
static void
rs6000_eliminate_indexed_memrefs (rtx operands[2])
{
- if (GET_CODE (operands[0]) == MEM
- && GET_CODE (XEXP (operands[0], 0)) != REG
+ if (MEM_P (operands[0])
+ && !REG_P (XEXP (operands[0], 0))
&& ! legitimate_constant_pool_address_p (XEXP (operands[0], 0),
GET_MODE (operands[0]), false))
operands[0]
= replace_equiv_address (operands[0],
copy_addr_to_reg (XEXP (operands[0], 0)));
- if (GET_CODE (operands[1]) == MEM
- && GET_CODE (XEXP (operands[1], 0)) != REG
+ if (MEM_P (operands[1])
+ && !REG_P (XEXP (operands[1], 0))
&& ! legitimate_constant_pool_address_p (XEXP (operands[1], 0),
GET_MODE (operands[1]), false))
operands[1]
if (MEM_P (source))
{
- gcc_assert (REG_P (dest) || GET_CODE (dest) == SUBREG);
+ gcc_assert (REG_P (dest) || SUBREG_P (dest));
rs6000_emit_le_vsx_load (dest, source, mode);
}
else
/* Check if GCC is setting up a block move that will end up using FP
registers as temporaries. We must make sure this is acceptable. */
- if (GET_CODE (operands[0]) == MEM
- && GET_CODE (operands[1]) == MEM
+ if (MEM_P (operands[0])
+ && MEM_P (operands[1])
&& mode == DImode
&& (rs6000_slow_unaligned_access (DImode, MEM_ALIGN (operands[0]))
|| rs6000_slow_unaligned_access (DImode, MEM_ALIGN (operands[1])))
return;
}
- if (can_create_pseudo_p () && GET_CODE (operands[0]) == MEM
+ if (can_create_pseudo_p () && MEM_P (operands[0])
&& !gpc_reg_operand (operands[1], mode))
operands[1] = force_reg (mode, operands[1]);
tmp = XEXP (XEXP (tmp, 0), 0);
}
- gcc_assert (GET_CODE (tmp) == SYMBOL_REF);
+ gcc_assert (SYMBOL_REF_P (tmp));
model = SYMBOL_REF_TLS_MODEL (tmp);
gcc_assert (model != 0);
p1:SD) if p1 is not of floating point class and p0 is spilled as
we can have no analogous movsd_store for this. */
if (lra_in_progress && mode == DDmode
- && REG_P (operands[0]) && REGNO (operands[0]) >= FIRST_PSEUDO_REGISTER
+ && REG_P (operands[0]) && !HARD_REGISTER_P (operands[0])
&& reg_preferred_class (REGNO (operands[0])) == NO_REGS
- && GET_CODE (operands[1]) == SUBREG && REG_P (SUBREG_REG (operands[1]))
+ && SUBREG_P (operands[1]) && REG_P (SUBREG_REG (operands[1]))
&& GET_MODE (SUBREG_REG (operands[1])) == SDmode)
{
enum reg_class cl;
int regno = REGNO (SUBREG_REG (operands[1]));
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
cl = reg_preferred_class (regno);
regno = reg_renumber[regno];
}
if (lra_in_progress
&& mode == SDmode
- && REG_P (operands[0]) && REGNO (operands[0]) >= FIRST_PSEUDO_REGISTER
+ && REG_P (operands[0]) && !HARD_REGISTER_P (operands[0])
&& reg_preferred_class (REGNO (operands[0])) == NO_REGS
&& (REG_P (operands[1])
- || (GET_CODE (operands[1]) == SUBREG
- && REG_P (SUBREG_REG (operands[1])))))
+ || (SUBREG_P (operands[1]) && REG_P (SUBREG_REG (operands[1])))))
{
- int regno = REGNO (GET_CODE (operands[1]) == SUBREG
- ? SUBREG_REG (operands[1]) : operands[1]);
+ int regno = reg_or_subregno (operands[1]);
enum reg_class cl;
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
cl = reg_preferred_class (regno);
gcc_assert (cl != NO_REGS);
p:DD)) if p0 is not of floating point class and p1 is spilled as
we can have no analogous movsd_load for this. */
if (lra_in_progress && mode == DDmode
- && GET_CODE (operands[0]) == SUBREG && REG_P (SUBREG_REG (operands[0]))
+ && SUBREG_P (operands[0]) && REG_P (SUBREG_REG (operands[0]))
&& GET_MODE (SUBREG_REG (operands[0])) == SDmode
- && REG_P (operands[1]) && REGNO (operands[1]) >= FIRST_PSEUDO_REGISTER
+ && REG_P (operands[1]) && !HARD_REGISTER_P (operands[1])
&& reg_preferred_class (REGNO (operands[1])) == NO_REGS)
{
enum reg_class cl;
int regno = REGNO (SUBREG_REG (operands[0]));
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
cl = reg_preferred_class (regno);
regno = reg_renumber[regno];
if (lra_in_progress
&& mode == SDmode
&& (REG_P (operands[0])
- || (GET_CODE (operands[0]) == SUBREG
- && REG_P (SUBREG_REG (operands[0]))))
- && REG_P (operands[1]) && REGNO (operands[1]) >= FIRST_PSEUDO_REGISTER
+ || (SUBREG_P (operands[0]) && REG_P (SUBREG_REG (operands[0]))))
+ && REG_P (operands[1]) && !HARD_REGISTER_P (operands[1])
&& reg_preferred_class (REGNO (operands[1])) == NO_REGS)
{
- int regno = REGNO (GET_CODE (operands[0]) == SUBREG
- ? SUBREG_REG (operands[0]) : operands[0]);
+ int regno = reg_or_subregno (operands[0]);
enum reg_class cl;
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
cl = reg_preferred_class (regno);
gcc_assert (cl != NO_REGS);
case E_HImode:
case E_QImode:
if (CONSTANT_P (operands[1])
- && GET_CODE (operands[1]) != CONST_INT)
+ && !CONST_INT_P (operands[1]))
operands[1] = force_const_mem (mode, operands[1]);
break;
if (TARGET_ELF
&& mode == Pmode
&& DEFAULT_ABI == ABI_V4
- && (GET_CODE (operands[1]) == SYMBOL_REF
+ && (SYMBOL_REF_P (operands[1])
|| GET_CODE (operands[1]) == CONST)
&& small_data_operand (operands[1], mode))
{
&& mode == Pmode
&& CONSTANT_P (operands[1])
&& GET_CODE (operands[1]) != HIGH
- && GET_CODE (operands[1]) != CONST_INT)
+ && !CONST_INT_P (operands[1]))
{
rtx target = (!can_create_pseudo_p ()
? operands[0]
/* If this is a function address on -mcall-aixdesc,
convert it to the address of the descriptor. */
if (DEFAULT_ABI == ABI_AIX
- && GET_CODE (operands[1]) == SYMBOL_REF
+ && SYMBOL_REF_P (operands[1])
&& XSTR (operands[1], 0)[0] == '.')
{
const char *name = XSTR (operands[1], 0);
and we have put it in the TOC, we just need to make a TOC-relative
reference to it. */
if (TARGET_TOC
- && GET_CODE (operands[1]) == SYMBOL_REF
+ && SYMBOL_REF_P (operands[1])
&& use_toc_relative_ref (operands[1], mode))
operands[1] = create_TOC_reference (operands[1], operands[0]);
else if (mode == Pmode
&& GET_CODE (XEXP (operands[1], 0)) == PLUS
&& add_operand (XEXP (XEXP (operands[1], 0), 1), mode)
&& (GET_CODE (XEXP (XEXP (operands[1], 0), 0)) == LABEL_REF
- || GET_CODE (XEXP (XEXP (operands[1], 0), 0)) == SYMBOL_REF)
+ || SYMBOL_REF_P (XEXP (XEXP (operands[1], 0), 0)))
&& ! side_effects_p (operands[0]))
{
rtx sym =
operands[1] = force_const_mem (mode, operands[1]);
if (TARGET_TOC
- && GET_CODE (XEXP (operands[1], 0)) == SYMBOL_REF
+ && SYMBOL_REF_P (XEXP (operands[1], 0))
&& use_toc_relative_ref (XEXP (operands[1], 0), mode))
{
rtx tocref = create_TOC_reference (XEXP (operands[1], 0),
/* Above, we may have called force_const_mem which may have returned
an invalid address. If we can, fix this up; otherwise, reload will
have to deal with it. */
- if (GET_CODE (operands[1]) == MEM)
+ if (MEM_P (operands[1]))
operands[1] = validize_mem (operands[1]);
emit_insn (gen_rtx_SET (operands[0], operands[1]));
{
rtx reg_save_area
= assign_stack_local (BLKmode, gpr_size + fpr_size, 64);
- gcc_assert (GET_CODE (reg_save_area) == MEM);
+ gcc_assert (MEM_P (reg_save_area));
reg_save_area = XEXP (reg_save_area, 0);
if (GET_CODE (reg_save_area) == PLUS)
{
gcc_assert (XEXP (reg_save_area, 0)
== virtual_stack_vars_rtx);
- gcc_assert (GET_CODE (XEXP (reg_save_area, 1)) == CONST_INT);
+ gcc_assert (CONST_INT_P (XEXP (reg_save_area, 1)));
offset += INTVAL (XEXP (reg_save_area, 1));
}
else
if (arg0 == error_mark_node || arg1 == error_mark_node)
return const0_rtx;
- if (GET_CODE (op0) != CONST_INT
+ if (!CONST_INT_P (op0)
|| INTVAL (op0) > 255
|| INTVAL (op0) < 0)
{
compile time if the argument is a variable. The least significant two
bits of the argument, regardless of type, are used to set the rounding
mode. All other bits are ignored. */
- if (GET_CODE (op0) == CONST_INT && !const_0_to_3_operand(op0, VOIDmode))
+ if (CONST_INT_P (op0) && !const_0_to_3_operand(op0, VOIDmode))
{
error ("Argument must be a value between 0 and 3.");
return const0_rtx;
compile time if the argument is a variable. The least significant two
bits of the argument, regardless of type, are used to set the rounding
mode. All other bits are ignored. */
- if (GET_CODE (op0) == CONST_INT && !const_0_to_7_operand(op0, VOIDmode))
+ if (CONST_INT_P (op0) && !const_0_to_7_operand(op0, VOIDmode))
{
error ("Argument must be a value between 0 and 7.");
return const0_rtx;
|| icode == CODE_FOR_altivec_vspltisw)
{
/* Only allow 5-bit *signed* literals. */
- if (GET_CODE (op0) != CONST_INT
+ if (!CONST_INT_P (op0)
|| INTVAL (op0) > 15
|| INTVAL (op0) < -16)
{
registers_ok_for_quad_peep (rtx reg1, rtx reg2)
{
/* We might have been passed a SUBREG. */
- if (GET_CODE (reg1) != REG || GET_CODE (reg2) != REG)
+ if (!REG_P (reg1) || !REG_P (reg2))
return 0;
/* We might have been passed non floating point registers. */
if (GET_CODE (addr1) == PLUS)
{
/* If not a REG, return zero. */
- if (GET_CODE (XEXP (addr1, 0)) != REG)
+ if (!REG_P (XEXP (addr1, 0)))
return 0;
else
{
reg1 = REGNO (XEXP (addr1, 0));
/* The offset must be constant! */
- if (GET_CODE (XEXP (addr1, 1)) != CONST_INT)
+ if (!CONST_INT_P (XEXP (addr1, 1)))
return 0;
offset1 = INTVAL (XEXP (addr1, 1));
}
}
- else if (GET_CODE (addr1) != REG)
+ else if (!REG_P (addr1))
return 0;
else
{
if (GET_CODE (addr2) == PLUS)
{
/* If not a REG, return zero. */
- if (GET_CODE (XEXP (addr2, 0)) != REG)
+ if (!REG_P (XEXP (addr2, 0)))
return 0;
else
{
reg2 = REGNO (XEXP (addr2, 0));
/* The offset must be constant. */
- if (GET_CODE (XEXP (addr2, 1)) != CONST_INT)
+ if (!CONST_INT_P (XEXP (addr2, 1)))
return 0;
offset2 = INTVAL (XEXP (addr2, 1));
}
}
- else if (GET_CODE (addr2) != REG)
+ else if (!REG_P (addr2))
return 0;
else
{
HOST_WIDE_INT regno;
enum reg_class rclass;
- if (GET_CODE (reg) == SUBREG)
+ if (SUBREG_P (reg))
reg = SUBREG_REG (reg);
if (!REG_P (reg))
return NO_REG_TYPE;
regno = REGNO (reg);
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
if (!lra_in_progress && !reload_completed)
return PSEUDO_REG_TYPE;
regno = true_regnum (reg);
- if (regno < 0 || regno >= FIRST_PSEUDO_REGISTER)
+ if (regno < 0 || !HARD_REGISTER_NUM_P (regno))
return PSEUDO_REG_TYPE;
}
case AND:
and_arg = XEXP (addr, 0);
if (GET_MODE_SIZE (mode) != 16
- || GET_CODE (XEXP (addr, 1)) != CONST_INT
+ || !CONST_INT_P (XEXP (addr, 1))
|| INTVAL (XEXP (addr, 1)) != -16)
{
fail_msg = "bad Altivec AND #1";
/* Allow subreg of memory before/during reload. */
bool memory_p = (MEM_P (x)
- || (!reload_completed && GET_CODE (x) == SUBREG
+ || (!reload_completed && SUBREG_P (x)
&& MEM_P (SUBREG_REG (x))));
sri->icode = CODE_FOR_nothing;
if (!done_p && reg_addr[mode].scalar_in_vmx_p
&& !mode_supports_vmx_dform (mode)
&& (rclass == VSX_REGS || rclass == ALTIVEC_REGS)
- && (memory_p || (GET_CODE (x) == CONST_DOUBLE)))
+ && (memory_p || CONST_DOUBLE_P (x)))
{
ret = FLOAT_REGS;
default_p = false;
rtx cc_clobber;
rtvec rv;
- if (regno < 0 || regno >= FIRST_PSEUDO_REGISTER || !MEM_P (mem)
+ if (regno < 0 || !HARD_REGISTER_NUM_P (regno) || !MEM_P (mem)
|| !base_reg_operand (scratch, GET_MODE (scratch)))
rs6000_secondary_reload_fail (__LINE__, reg, mem, scratch, store_p);
op1 = XEXP (addr, 1);
if ((addr_mask & RELOAD_REG_AND_M16) == 0)
{
- if (REG_P (op0) || GET_CODE (op0) == SUBREG)
+ if (REG_P (op0) || SUBREG_P (op0))
op_reg = op0;
else if (GET_CODE (op1) == PLUS)
debug_rtx (scratch);
}
- gcc_assert (regno >= 0 && regno < FIRST_PSEUDO_REGISTER);
- gcc_assert (GET_CODE (mem) == MEM);
+ gcc_assert (regno >= 0 && HARD_REGISTER_NUM_P (regno));
+ gcc_assert (MEM_P (mem));
rclass = REGNO_REG_CLASS (regno);
gcc_assert (rclass == GENERAL_REGS || rclass == BASE_REGS);
addr = XEXP (mem, 0);
On Darwin, pic addresses require a load from memory, which
needs a base register. */
if (rclass != BASE_REGS
- && (GET_CODE (in) == SYMBOL_REF
+ && (SYMBOL_REF_P (in)
|| GET_CODE (in) == HIGH
|| GET_CODE (in) == LABEL_REF
|| GET_CODE (in) == CONST))
return BASE_REGS;
}
- if (GET_CODE (in) == REG)
+ if (REG_P (in))
{
regno = REGNO (in);
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
{
regno = true_regnum (in);
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
regno = -1;
}
}
- else if (GET_CODE (in) == SUBREG)
+ else if (SUBREG_P (in))
{
regno = true_regnum (in);
- if (regno >= FIRST_PSEUDO_REGISTER)
+ if (!HARD_REGISTER_NUM_P (regno))
regno = -1;
}
else
/* Constants. */
else if (dest_regno >= 0
- && (GET_CODE (src) == CONST_INT
- || GET_CODE (src) == CONST_WIDE_INT
- || GET_CODE (src) == CONST_DOUBLE
+ && (CONST_INT_P (src)
+ || CONST_WIDE_INT_P (src)
+ || CONST_DOUBLE_P (src)
|| GET_CODE (src) == CONST_VECTOR))
{
if (dest_gpr_p)
return ggc_cleared_alloc<machine_function> ();
}
\f
-#define INT_P(X) (GET_CODE (X) == CONST_INT && GET_MODE (X) == VOIDmode)
+#define INT_P(X) (CONST_INT_P (X) && GET_MODE (X) == VOIDmode)
/* Write out a function code label. */
case 'G':
/* X is a constant integer. If it is negative, print "m",
otherwise print "z". This is to make an aze or ame insn. */
- if (GET_CODE (x) != CONST_INT)
+ if (!CONST_INT_P (x))
output_operand_lossage ("invalid %%G value");
else if (INTVAL (x) >= 0)
putc ('z', file);
if (GET_CODE (x) == CONST)
{
if (GET_CODE (XEXP (x, 0)) != PLUS
- || (GET_CODE (XEXP (XEXP (x, 0), 0)) != SYMBOL_REF
+ || (!SYMBOL_REF_P (XEXP (XEXP (x, 0), 0))
&& GET_CODE (XEXP (XEXP (x, 0), 0)) != LABEL_REF)
- || GET_CODE (XEXP (XEXP (x, 0), 1)) != CONST_INT)
+ || !CONST_INT_P (XEXP (XEXP (x, 0), 1)))
output_operand_lossage ("invalid %%K value");
}
print_operand_address (file, x);
case 'P':
/* The operand must be an indirect memory reference. The result
is the register name. */
- if (GET_CODE (x) != MEM || GET_CODE (XEXP (x, 0)) != REG
+ if (!MEM_P (x) || !REG_P (XEXP (x, 0))
|| REGNO (XEXP (x, 0)) >= 32)
output_operand_lossage ("invalid %%P value");
else
/* Print the symbolic name of a branch target register. */
if (GET_CODE (x) == UNSPEC && XINT (x, 1) == UNSPEC_PLTSEQ)
x = XVECEXP (x, 0, 0);
- if (GET_CODE (x) != REG || (REGNO (x) != LR_REGNO
- && REGNO (x) != CTR_REGNO))
+ if (!REG_P (x) || (REGNO (x) != LR_REGNO
+ && REGNO (x) != CTR_REGNO))
output_operand_lossage ("invalid %%T value");
else if (REGNO (x) == LR_REGNO)
fputs ("lr", file);
case 'x':
/* X is a FPR or Altivec register used in a VSX context. */
- if (GET_CODE (x) != REG || !VSX_REGNO_P (REGNO (x)))
+ if (!REG_P (x) || !VSX_REGNO_P (REGNO (x)))
output_operand_lossage ("invalid %%x value");
else
{
if (VECTOR_MEM_ALTIVEC_OR_VSX_P (GET_MODE (x))
&& GET_CODE (tmp) == AND
- && GET_CODE (XEXP (tmp, 1)) == CONST_INT
+ && CONST_INT_P (XEXP (tmp, 1))
&& INTVAL (XEXP (tmp, 1)) == -16)
tmp = XEXP (tmp, 0);
else if (VECTOR_MEM_VSX_P (GET_MODE (x))
{
if (REG_P (x))
fprintf (file, "0(%s)", reg_names[ REGNO (x) ]);
- else if (GET_CODE (x) == SYMBOL_REF || GET_CODE (x) == CONST
+ else if (SYMBOL_REF_P (x) || GET_CODE (x) == CONST
|| GET_CODE (x) == LABEL_REF)
{
output_addr_const (file, x);
reg_names[ REGNO (XEXP (x, 1)) ]);
}
else if (GET_CODE (x) == PLUS && REG_P (XEXP (x, 0))
- && GET_CODE (XEXP (x, 1)) == CONST_INT)
+ && CONST_INT_P (XEXP (x, 1)))
fprintf (file, HOST_WIDE_INT_PRINT_DEC "(%s)",
INTVAL (XEXP (x, 1)), reg_names[ REGNO (XEXP (x, 0)) ]);
#if TARGET_MACHO
switch (XINT (x, 1))
{
case UNSPEC_TOCREL:
- gcc_checking_assert (GET_CODE (XVECEXP (x, 0, 0)) == SYMBOL_REF
+ gcc_checking_assert (SYMBOL_REF_P (XVECEXP (x, 0, 0))
&& REG_P (XVECEXP (x, 0, 1))
&& REGNO (XVECEXP (x, 0, 1)) == TOC_REGISTER);
output_addr_const (file, XVECEXP (x, 0, 0));
/* Remove initial .'s to turn a -mcall-aixdesc function
address into the address of the descriptor, not the function
itself. */
- else if (GET_CODE (x) == SYMBOL_REF
+ else if (SYMBOL_REF_P (x)
&& XSTR (x, 0)[0] == '.'
&& DEFAULT_ABI == ABI_AIX)
{
/* If we have an unsigned compare, make sure we don't have a signed value as
an immediate. */
- if (comp_mode == CCUNSmode && GET_CODE (op1) == CONST_INT
+ if (comp_mode == CCUNSmode && CONST_INT_P (op1)
&& INTVAL (op1) < 0)
{
op0 = copy_rtx_if_shared (op0);
would treat EQ different to UNORDERED, we can't do it. */
if (HONOR_INFINITIES (compare_mode)
&& code != GT && code != UNGE
- && (GET_CODE (op1) != CONST_DOUBLE
+ && (!CONST_DOUBLE_P (op1)
|| real_isinf (CONST_DOUBLE_REAL_VALUE (op1)))
/* Constructs of the form (a OP b ? a : b) are safe. */
&& ((! rtx_equal_p (op0, false_cond) && ! rtx_equal_p (op1, false_cond))
if (TARGET_DEBUG_ADDR)
{
- if (GET_CODE (symbol) == SYMBOL_REF)
+ if (SYMBOL_REF_P (symbol))
fprintf (stderr, "\ncreate_TOC_reference, (symbol_ref %s)\n",
XSTR (symbol, 0));
else
emit_insn (insn);
emit_insn (gen_cond_trap (LTU, stack_reg, tmp_reg, const0_rtx));
}
- else if (GET_CODE (stack_limit_rtx) == SYMBOL_REF
+ else if (SYMBOL_REF_P (stack_limit_rtx)
&& TARGET_32BIT
&& DEFAULT_ABI == ABI_V4
&& !flag_pic)
then the arg pointer is used. */
if (cfun->machine->split_stack_arg_pointer != NULL_RTX
&& (!REG_P (cfun->machine->split_stack_arg_pointer)
- || (REGNO (cfun->machine->split_stack_arg_pointer)
- < FIRST_PSEUDO_REGISTER)))
+ || HARD_REGISTER_P (cfun->machine->split_stack_arg_pointer)))
return true;
/* Unfortunately we also need to do some code scanning, since
rtx parameter = DECL_INCOMING_RTL (decl);
machine_mode mode = GET_MODE (parameter);
- if (GET_CODE (parameter) == REG)
+ if (REG_P (parameter))
{
if (SCALAR_FLOAT_MODE_P (mode))
{
fprintf (file, "%d\n", ((*found)->labelno));
#ifdef HAVE_AS_TLS
- if (TARGET_XCOFF && GET_CODE (x) == SYMBOL_REF
+ if (TARGET_XCOFF && SYMBOL_REF_P (x)
&& (SYMBOL_REF_TLS_MODEL (x) == TLS_MODEL_GLOBAL_DYNAMIC
|| SYMBOL_REF_TLS_MODEL (x) == TLS_MODEL_LOCAL_DYNAMIC))
{
return;
}
}
- else if (GET_MODE (x) == VOIDmode && GET_CODE (x) == CONST_INT)
+ else if (GET_MODE (x) == VOIDmode && CONST_INT_P (x))
{
unsigned HOST_WIDE_INT low;
HOST_WIDE_INT high;
if (GET_CODE (x) == CONST)
{
gcc_assert (GET_CODE (XEXP (x, 0)) == PLUS
- && GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT);
+ && CONST_INT_P (XEXP (XEXP (x, 0), 1)));
base = XEXP (XEXP (x, 0), 0);
offset = INTVAL (XEXP (XEXP (x, 0), 1));
output_addr_const (file, x);
#if HAVE_AS_TLS
- if (TARGET_XCOFF && GET_CODE (base) == SYMBOL_REF)
+ if (TARGET_XCOFF && SYMBOL_REF_P (base))
{
switch (SYMBOL_REF_TLS_MODEL (base))
{
if ((rs6000_sched_groups || rs6000_tune == PROCESSOR_POWER9)
&& GET_CODE (PATTERN (insn)) == SET
&& GET_CODE (PATTERN (dep_insn)) == SET
- && GET_CODE (XEXP (PATTERN (insn), 1)) == MEM
- && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == MEM
+ && MEM_P (XEXP (PATTERN (insn), 1))
+ && MEM_P (XEXP (PATTERN (dep_insn), 0))
&& (GET_MODE_SIZE (GET_MODE (XEXP (PATTERN (insn), 1)))
> GET_MODE_SIZE (GET_MODE (XEXP (PATTERN (dep_insn), 0)))))
return cost + 14;
if (tie_operand (pat, VOIDmode))
return false;
- if (GET_CODE (pat) == MEM)
+ if (MEM_P (pat))
{
*mem_ref = pat;
return true;
{
while (GET_CODE (addr) == PLUS)
{
- if (GET_CODE (XEXP (addr, 0)) == REG
+ if (REG_P (XEXP (addr, 0))
&& REGNO (XEXP (addr, 0)) != 0)
addr = XEXP (addr, 0);
- else if (GET_CODE (XEXP (addr, 1)) == REG
+ else if (REG_P (XEXP (addr, 1))
&& REGNO (XEXP (addr, 1)) != 0)
addr = XEXP (addr, 1);
else if (CONSTANT_P (XEXP (addr, 0)))
else
gcc_unreachable ();
}
- gcc_assert (GET_CODE (addr) == REG && REGNO (addr) != 0);
+ gcc_assert (REG_P (addr) && REGNO (addr) != 0);
return addr;
}
rs6000_machopic_legitimize_pic_address (XEXP (XEXP (orig, 0), 1),
Pmode, reg);
- if (GET_CODE (offset) == CONST_INT)
+ if (CONST_INT_P (offset))
{
if (SMALL_INT (offset))
return plus_constant (Pmode, base, INTVAL (offset));
if (!MEM_P (rtl))
return;
symbol = XEXP (rtl, 0);
- if (GET_CODE (symbol) != SYMBOL_REF)
+ if (!SYMBOL_REF_P (symbol))
return;
flags = SYMBOL_REF_FLAGS (symbol);
return false;
case MULT:
- if (GET_CODE (XEXP (x, 1)) == CONST_INT
+ if (CONST_INT_P (XEXP (x, 1))
&& satisfies_constraint_I (XEXP (x, 1)))
{
if (INTVAL (XEXP (x, 1)) >= -256
case UDIV:
case UMOD:
- if (GET_CODE (XEXP (x, 1)) == CONST_INT
+ if (CONST_INT_P (XEXP (x, 1))
&& exact_log2 (INTVAL (XEXP (x, 1))) >= 0)
{
if (code == DIV || code == MOD)
case SIGN_EXTEND:
case ZERO_EXTEND:
- if (GET_CODE (XEXP (x, 0)) == MEM)
+ if (MEM_P (XEXP (x, 0)))
*total = 0;
else
*total = COSTS_N_INSNS (1);
numbering) are what we need. */
if (!BYTES_BIG_ENDIAN
&& icode == CODE_FOR_altivec_vpkuwum_direct
- && ((GET_CODE (op0) == REG
+ && ((REG_P (op0)
&& GET_MODE (op0) != V4SImode)
- || (GET_CODE (op0) == SUBREG
+ || (SUBREG_P (op0)
&& GET_MODE (XEXP (op0, 0)) != V4SImode)))
continue;
if (!BYTES_BIG_ENDIAN
&& icode == CODE_FOR_altivec_vpkuhum_direct
- && ((GET_CODE (op0) == REG
+ && ((REG_P (op0)
&& GET_MODE (op0) != V8HImode)
- || (GET_CODE (op0) == SUBREG
+ || (SUBREG_P (op0)
&& GET_MODE (XEXP (op0, 0)) != V8HImode)))
continue;
func = rs6000_longcall_ref (func_desc, tlsarg);
/* Handle indirect calls. */
- if (GET_CODE (func) != SYMBOL_REF
+ if (!SYMBOL_REF_P (func)
|| (DEFAULT_ABI == ABI_AIX && !SYMBOL_REF_FUNCTION_P (func)))
{
/* Save the TOC into its reserved slot before the call,
rtx bool_rtx;
/* Optimize AND of 0/0xffffffff and IOR/XOR of 0. */
- if (op2 && GET_CODE (op2) == CONST_INT
+ if (op2 && CONST_INT_P (op2)
&& (mode == SImode || (mode == DImode && TARGET_POWERPC64))
&& !complement_final_p && !complement_op1_p && !complement_op2_p)
{
op2_hi_lo[hi] = op2_hi_lo[lo] = NULL_RTX;
else
{
- if (GET_CODE (operands[2]) != CONST_INT)
+ if (!CONST_INT_P (operands[2]))
{
op2_hi_lo[hi] = gen_highpart_mode (SImode, DImode, operands[2]);
op2_hi_lo[lo] = gen_lowpart (SImode, operands[2]);
{
/* Split large IOR/XOR operations. */
if ((code == IOR || code == XOR)
- && GET_CODE (op2_hi_lo[i]) == CONST_INT
+ && CONST_INT_P (op2_hi_lo[i])
&& !complement_final_p
&& !complement_op1_p
&& !complement_op2_p
/* Return 1 for a symbol ref for a thread-local storage symbol. */
#define RS6000_SYMBOL_REF_TLS_P(RTX) \
- (GET_CODE (RTX) == SYMBOL_REF && SYMBOL_REF_TLS_MODEL (RTX) != 0)
+ (SYMBOL_REF_P (RTX) && SYMBOL_REF_TLS_MODEL (RTX) != 0)
#ifdef IN_LIBGCC2
/* For libgcc2 we make sure this is a compile time constant */
allocation. */
#define REGNO_OK_FOR_INDEX_P(REGNO) \
-((REGNO) < FIRST_PSEUDO_REGISTER \
+(HARD_REGISTER_NUM_P (REGNO) \
? (REGNO) <= 31 || (REGNO) == 67 \
|| (REGNO) == FRAME_POINTER_REGNUM \
: (reg_renumber[REGNO] >= 0 \
|| reg_renumber[REGNO] == FRAME_POINTER_REGNUM)))
#define REGNO_OK_FOR_BASE_P(REGNO) \
-((REGNO) < FIRST_PSEUDO_REGISTER \
+(HARD_REGISTER_NUM_P (REGNO) \
? ((REGNO) > 0 && (REGNO) <= 31) || (REGNO) == 67 \
|| (REGNO) == FRAME_POINTER_REGNUM \
: (reg_renumber[REGNO] > 0 \
/* Nonzero if X is a hard reg that can be used as an index
or if it is a pseudo reg in the non-strict case. */
#define INT_REG_OK_FOR_INDEX_P(X, STRICT) \
- ((!(STRICT) && REGNO (X) >= FIRST_PSEUDO_REGISTER) \
+ ((!(STRICT) && !HARD_REGISTER_P (X)) \
|| REGNO_OK_FOR_INDEX_P (REGNO (X)))
/* Nonzero if X is a hard reg that can be used as a base reg
or if it is a pseudo reg in the non-strict case. */
#define INT_REG_OK_FOR_BASE_P(X, STRICT) \
- ((!(STRICT) && REGNO (X) >= FIRST_PSEUDO_REGISTER) \
+ ((!(STRICT) && !HARD_REGISTER_P (X)) \
|| REGNO_OK_FOR_BASE_P (REGNO (X)))
\f
/* Recognize any constant value that is a valid address. */
#define CONSTANT_ADDRESS_P(X) \
- (GET_CODE (X) == LABEL_REF || GET_CODE (X) == SYMBOL_REF \
- || GET_CODE (X) == CONST_INT || GET_CODE (X) == CONST \
+ (GET_CODE (X) == LABEL_REF || SYMBOL_REF_P (X) \
+ || CONST_INT_P (X) || GET_CODE (X) == CONST \
|| GET_CODE (X) == HIGH)
#define EASY_VECTOR_15(n) ((n) >= -16 && (n) <= 15)
rtx addr, scratch_string, word1, word2, scratch_dlmzb;
rtx loop_label, end_label, mem, cr0, cond;
if (search_char != const0_rtx
- || GET_CODE (align) != CONST_INT
+ || !CONST_INT_P (align)
|| INTVAL (align) < 8)
FAIL;
word1 = gen_reg_rtx (SImode);
rtx temp1;
rtx temp2;
- if (GET_CODE (operands[2]) != CONST_INT
+ if (!CONST_INT_P (operands[2])
|| INTVAL (operands[2]) <= 0
|| (i = exact_log2 (INTVAL (operands[2]))) < 0)
{
[(set (match_operand:FMOVE32 0 "gpc_reg_operand")
(match_operand:FMOVE32 1 "const_double_operand"))]
"reload_completed
- && ((GET_CODE (operands[0]) == REG && REGNO (operands[0]) <= 31)
- || (GET_CODE (operands[0]) == SUBREG
- && GET_CODE (SUBREG_REG (operands[0])) == REG
+ && ((REG_P (operands[0]) && REGNO (operands[0]) <= 31)
+ || (SUBREG_P (operands[0])
+ && REG_P (SUBREG_REG (operands[0]))
&& REGNO (SUBREG_REG (operands[0])) <= 31))"
[(set (match_dup 2) (match_dup 3))]
{
[(set (match_operand:FMOVE64 0 "gpc_reg_operand")
(match_operand:FMOVE64 1 "const_int_operand"))]
"! TARGET_POWERPC64 && reload_completed
- && ((GET_CODE (operands[0]) == REG && REGNO (operands[0]) <= 31)
- || (GET_CODE (operands[0]) == SUBREG
- && GET_CODE (SUBREG_REG (operands[0])) == REG
+ && ((REG_P (operands[0]) && REGNO (operands[0]) <= 31)
+ || (SUBREG_P (operands[0])
+ && REG_P (SUBREG_REG (operands[0]))
&& REGNO (SUBREG_REG (operands[0])) <= 31))"
[(set (match_dup 2) (match_dup 4))
(set (match_dup 3) (match_dup 1))]
[(set (match_operand:FMOVE64 0 "gpc_reg_operand")
(match_operand:FMOVE64 1 "const_double_operand"))]
"! TARGET_POWERPC64 && reload_completed
- && ((GET_CODE (operands[0]) == REG && REGNO (operands[0]) <= 31)
- || (GET_CODE (operands[0]) == SUBREG
- && GET_CODE (SUBREG_REG (operands[0])) == REG
+ && ((REG_P (operands[0]) && REGNO (operands[0]) <= 31)
+ || (SUBREG_P (operands[0])
+ && REG_P (SUBREG_REG (operands[0]))
&& REGNO (SUBREG_REG (operands[0])) <= 31))"
[(set (match_dup 2) (match_dup 4))
(set (match_dup 3) (match_dup 5))]
[(set (match_operand:FMOVE64 0 "gpc_reg_operand")
(match_operand:FMOVE64 1 "const_double_operand"))]
"TARGET_POWERPC64 && reload_completed
- && ((GET_CODE (operands[0]) == REG && REGNO (operands[0]) <= 31)
- || (GET_CODE (operands[0]) == SUBREG
- && GET_CODE (SUBREG_REG (operands[0])) == REG
+ && ((REG_P (operands[0]) && REGNO (operands[0]) <= 31)
+ || (SUBREG_P (operands[0])
+ && REG_P (SUBREG_REG (operands[0]))
&& REGNO (SUBREG_REG (operands[0])) <= 31))"
[(set (match_dup 2) (match_dup 3))]
{
operands[0] = machopic_indirect_call_target (operands[0]);
#endif
- gcc_assert (GET_CODE (operands[0]) == MEM);
+ gcc_assert (MEM_P (operands[0]));
operands[0] = XEXP (operands[0], 0);
operands[1] = machopic_indirect_call_target (operands[1]);
#endif
- gcc_assert (GET_CODE (operands[1]) == MEM);
+ gcc_assert (MEM_P (operands[1]));
operands[1] = XEXP (operands[1], 0);
operands[0] = machopic_indirect_call_target (operands[0]);
#endif
- gcc_assert (GET_CODE (operands[0]) == MEM);
- gcc_assert (GET_CODE (operands[1]) == CONST_INT);
+ gcc_assert (MEM_P (operands[0]));
+ gcc_assert (CONST_INT_P (operands[1]));
operands[0] = XEXP (operands[0], 0);
operands[1] = machopic_indirect_call_target (operands[1]);
#endif
- gcc_assert (GET_CODE (operands[1]) == MEM);
- gcc_assert (GET_CODE (operands[2]) == CONST_INT);
+ gcc_assert (MEM_P (operands[1]));
+ gcc_assert (CONST_INT_P (operands[2]));
operands[1] = XEXP (operands[1], 0);
{
/* Take care of the possibility that operands[2] might be negative but
this might be a logical operation. That insn doesn't exist. */
- if (GET_CODE (operands[2]) == CONST_INT
+ if (CONST_INT_P (operands[2])
&& INTVAL (operands[2]) < 0)
{
operands[2] = force_reg (<MODE>mode, operands[2]);
(unspec:CC [(match_operand:SI 1 "gpc_reg_operand" "r")
(match_operand 2 "immediate_operand" "n")]
UNSPEC_MOVESI_TO_CR))]
- "GET_CODE (operands[0]) == REG
+ "REG_P (operands[0])
&& CR_REGNO_P (REGNO (operands[0]))
- && GET_CODE (operands[2]) == CONST_INT
+ && CONST_INT_P (operands[2])
&& INTVAL (operands[2]) == 1 << (75 - REGNO (operands[0]))"
"mtcrf %R0,%1"
[(set_attr "type" "mtcr")])
#undef ASM_OUTPUT_SPECIAL_POOL_ENTRY_P
#define ASM_OUTPUT_SPECIAL_POOL_ENTRY_P(X, MODE) \
(TARGET_TOC \
- && (GET_CODE (X) == SYMBOL_REF \
+ && (SYMBOL_REF_P (X) \
|| (GET_CODE (X) == CONST && GET_CODE (XEXP (X, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF) \
+ && SYMBOL_REF_P (XEXP (XEXP (X, 0), 0))) \
|| GET_CODE (X) == LABEL_REF \
- || (GET_CODE (X) == CONST_INT \
+ || (CONST_INT_P (X) \
&& GET_MODE_BITSIZE (MODE) <= GET_MODE_BITSIZE (Pmode)) \
- || (GET_CODE (X) == CONST_DOUBLE \
+ || (CONST_DOUBLE_P (X) \
&& ((TARGET_64BIT \
&& (TARGET_MINIMAL_TOC \
|| (SCALAR_FLOAT_MODE_P (GET_MODE (X)) \
#undef ASM_OUTPUT_SPECIAL_POOL_ENTRY_P
#define ASM_OUTPUT_SPECIAL_POOL_ENTRY_P(X, MODE) \
(TARGET_TOC \
- && (GET_CODE (X) == SYMBOL_REF \
+ && (SYMBOL_REF_P (X) \
|| (GET_CODE (X) == CONST && GET_CODE (XEXP (X, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF) \
+ && SYMBOL_REF_P (XEXP (XEXP (X, 0), 0))) \
|| GET_CODE (X) == LABEL_REF \
- || (GET_CODE (X) == CONST_INT \
+ || (CONST_INT_P (X) \
&& GET_MODE_BITSIZE (MODE) <= GET_MODE_BITSIZE (Pmode)) \
|| (!TARGET_NO_FP_IN_TOC \
- && GET_CODE (X) == CONST_DOUBLE \
+ && CONST_DOUBLE_P (X) \
&& SCALAR_FLOAT_MODE_P (GET_MODE (X)) \
&& BITS_PER_WORD == HOST_BITS_PER_INT)))
allocation and the hard register destination is not in the altivec
range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((reg_or_subregno (operands[0]) >= FIRST_PSEUDO_REGISTER)
+ && (!HARD_REGISTER_NUM_P (reg_or_subregno (operands[0]))
|| ALTIVEC_REGNO_P (reg_or_subregno (operands[0]))))
{
rtx mem_address = XEXP (mem, 0);
allocation and the hard register destination is not in the altivec
range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((REGNO(operands[0]) >= FIRST_PSEUDO_REGISTER)
+ && (!HARD_REGISTER_P (operands[0])
|| ALTIVEC_REGNO_P (REGNO(operands[0]))))
{
rtx mem_address = XEXP (mem, 0);
allocation and the hard register destination is not in the altivec
range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((REGNO(operands[0]) >= FIRST_PSEUDO_REGISTER)
+ && (!HARD_REGISTER_P (operands[0])
|| ALTIVEC_REGNO_P (REGNO(operands[0]))))
{
rtx mem_address = XEXP (mem, 0);
allocation and the hard register destination is not in the altivec
range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((REGNO(operands[0]) >= FIRST_PSEUDO_REGISTER)
+ && (!HARD_REGISTER_P (operands[0])
|| ALTIVEC_REGNO_P (REGNO(operands[0]))))
{
rtx mem_address = XEXP (mem, 0);
/* Don't apply the swap optimization if we've already performed register
allocation and the hard register source is not in the altivec range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((reg_or_subregno (operands[1]) >= FIRST_PSEUDO_REGISTER)
- || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
+ && (!HARD_REGISTER_NUM_P (reg_or_subregno (operands[1]))
+ || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
{
rtx mem_address = XEXP (mem, 0);
enum machine_mode mode = GET_MODE (mem);
/* Don't apply the swap optimization if we've already performed register
allocation and the hard register source is not in the altivec range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((reg_or_subregno (operands[1]) >= FIRST_PSEUDO_REGISTER)
- || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
+ && (!HARD_REGISTER_NUM_P (reg_or_subregno (operands[1]))
+ || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
{
rtx mem_address = XEXP (mem, 0);
enum machine_mode mode = GET_MODE (mem);
/* Don't apply the swap optimization if we've already performed register
allocation and the hard register source is not in the altivec range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((reg_or_subregno (operands[1]) >= FIRST_PSEUDO_REGISTER)
- || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
+ && (!HARD_REGISTER_NUM_P (reg_or_subregno (operands[1]))
+ || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
{
rtx mem_address = XEXP (mem, 0);
enum machine_mode mode = GET_MODE (mem);
/* Don't apply the swap optimization if we've already performed register
allocation and the hard register source is not in the altivec range. */
if ((MEM_ALIGN (mem) >= 128)
- && ((reg_or_subregno (operands[1]) >= FIRST_PSEUDO_REGISTER)
- || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
+ && (!HARD_REGISTER_NUM_P (reg_or_subregno (operands[1]))
+ || ALTIVEC_REGNO_P (reg_or_subregno (operands[1]))))
{
rtx mem_address = XEXP (mem, 0);
enum machine_mode mode = GET_MODE (mem);
#define ASM_OUTPUT_SPECIAL_POOL_ENTRY_P(X, MODE) \
(TARGET_TOC \
- && (GET_CODE (X) == SYMBOL_REF \
+ && (SYMBOL_REF_P (X) \
|| (GET_CODE (X) == CONST && GET_CODE (XEXP (X, 0)) == PLUS \
- && GET_CODE (XEXP (XEXP (X, 0), 0)) == SYMBOL_REF) \
+ && SYMBOL_REF_P (XEXP (XEXP (X, 0), 0))) \
|| GET_CODE (X) == LABEL_REF \
- || (GET_CODE (X) == CONST_INT \
+ || (CONST_INT_P (X) \
&& GET_MODE_BITSIZE (MODE) <= GET_MODE_BITSIZE (Pmode)) \
- || (GET_CODE (X) == CONST_DOUBLE \
+ || (CONST_DOUBLE_P (X) \
&& (TARGET_MINIMAL_TOC \
|| (SCALAR_FLOAT_MODE_P (GET_MODE (X)) \
&& ! TARGET_NO_FP_IN_TOC)))))