* target.def (rtx_costs): Remove "code" param, add "mode".
* rtl.h (rtx_cost, get_full_rtx_cost): Update prototype.
(set_src_cost, get_full_set_src_cost): Likewise. Move later in file.
(set_rtx_cost, get_full_set_rtx_cost): Move later in file.
* rtlanal.c (rtx_cost): Add "mode" parameter. Update targetm.rtx_costs
call. Track mode when given in rtx.
(get_full_rtx_cost): Add "mode" parameter. Update rtx_cost calls.
(default_address_cost): Pass Pmode to rtx_cost.
(insn_rtx_cost): Pass dest mode of set to set_src_cost.
* cprop.c (try_replace_reg): Ensure set_rtx_cost is not called
with NULL set.
* cse.c (COST, COST_IN): Add MODE param. Update all uses.
(notreg_cost): Add mode param. Use it.
* gcse.c (want_to_gcse_p): Delete forward declaration. Add
mode param and pass to set_src_cost. Update all calls.
(hash_scan_set): Formatting.
* hooks.c (hook_bool_rtx_int_int_int_intp_bool_false): Delete.
(hook_bool_rtx_mode_int_int_intp_bool_false): New function.
* hooks.h: Ditto.
* expmed.c (init_expmed_one_conv, init_expmed_one_mode,
init_expmed, expand_mult, mult_by_coeff_cost, expand_smod_pow2,
emit_store_flag): Update set_src_cost and rtx_cost calls.
* auto-inc-dec.c (attempt_change): Likewise.
* calls.c (precompute_register_parameters): Likewise.
* combine.c (expand_compound_operation, make_extraction,
force_to_mode, distribute_and_simplify_rtx): Likewise.
* dojump.c (prefer_and_bit_test): Likewise.
* dse.c (find_shift_sequence): Likewise.
* expr.c (compress_float_constant): Likewise.
* fwprop.c (should_replace_address, try_fwprop_subst): Likewise.
* ifcvt.c (noce_try_sign_mask): Likewise.
* loop-doloop.c (doloop_optimize): Likewise.
* loop-invariant.c (create_new_invariant): Likewise.
* lower-subreg.c (shift_cost, compute_costs): Likewise.
* optabs.c (avoid_expensive_constant, prepare_cmp_insn,
lshift_cheap_p): Likewise.
* postreload.c (reload_cse_simplify_set, reload_cse_simplify_operands,
try_replace_in_use, reload_cse_move2add): Likewise.
* reload1.c (calculate_elim_costs_all_insns, note_reg_elim_costly):
Likewise.
* simplify-rtx.c (simplify_binary_operation_1): Likewise.
* tree-ssa-loop-ivopts.c (computation_cost): Likewise.
* tree-ssa-reassoc.c (optimize_range_tests_to_bit_test): Likewise.
* tree-switch-conversion.c (emit_case_bit_tests): Likewise.
* config/aarch64/aarch64.c (aarch64_rtx_costs): Delete "code" param,
add "mode" param. Use "mode" in place of GET_MODE (x). Pass mode
to rtx_cost calls.
* config/alpha/alpha.c (alpha_rtx_costs): Likewise.
* config/arc/arc.c (arc_rtx_costs): Likewise.
* config/arm/arm.c (arm_rtx_costs): Likewise.
* config/avr/avr.c (avr_rtx_costs, avr_rtx_costs_1): Likewise.
* config/bfin/bfin.c (bfin_rtx_costs): Likewise.
* config/c6x/c6x.c (c6x_rtx_costs): Likewise.
* config/cris/cris.c (cris_rtx_costs): Likewise.
* config/epiphany/epiphany.c (epiphany_rtx_costs): Likewise.
* config/frv/frv.c (frv_rtx_costs): Likewise.
* config/h8300/h8300.c (h8300_rtx_costs): Likewise.
* config/i386/i386.c (ix86_rtx_costs): Likewise.
* config/ia64/ia64.c (ia64_rtx_costs): Likewise.
* config/iq2000/iq2000.c (iq2000_rtx_costs): Likewise.
* config/lm32/lm32.c (lm32_rtx_costs): Likewise.
* config/m32c/m32c.c (m32c_rtx_costs): Likewise.
* config/m32r/m32r.c (m32r_rtx_costs): Likewise.
* config/m68k/m68k.c (m68k_rtx_costs): Likewise.
* config/mcore/mcore.c (mcore_rtx_costs): Likewise.
* config/mep/mep.c (mep_rtx_cost): Likewise.
* config/microblaze/microblaze.c (microblaze_rtx_costs): Likewise.
* config/mips/mips.c (mips_rtx_costs): Likewise.
* config/mmix/mmix.c (mmix_rtx_costs): Likewise.
* config/mn10300/mn10300.c (mn10300_rtx_costs): Likewise.
* config/msp430/msp430.c (msp430_rtx_costs): Likewise.
* config/nds32/nds32-cost.c (nds32_rtx_costs_impl): Likewise.
* config/nds32/nds32-protos.h (nds32_rtx_costs_impl): Likewise.
* config/nds32/nds32.c (nds32_rtx_costs): Likewise.
* config/nios2/nios2.c (nios2_rtx_costs): Likewise.
* config/pa/pa.c (hppa_rtx_costs): Likewise.
* config/pdp11/pdp11.c (pdp11_rtx_costs): Likewise.
* config/rl78/rl78.c (rl78_rtx_costs): Likewise.
* config/rs6000/rs6000.c (rs6000_rtx_costs): Likewise.
* config/s390/s390.c (s390_rtx_costs): Likewise.
* config/sh/sh.c (sh_rtx_costs): Likewise.
* config/sparc/sparc.c (sparc_rtx_costs): Likewise.
* config/spu/spu.c (spu_rtx_costs): Likewise.
* config/stormy16/stormy16.c (xstormy16_rtx_costs): Likewise.
* config/tilegx/tilegx.c (tilegx_rtx_costs): Likewise.
* config/tilepro/tilepro.c (tilepro_rtx_costs): Likewise.
* config/v850/v850.c (v850_rtx_costs): Likewise.
* config/vax/vax.c (vax_rtx_costs): Likewise.
* config/visium/visium.c (visium_rtx_costs): Likewise.
* config/xtensa/xtensa.c (xtensa_rtx_costs): Likewise.
* config/aarch64/aarch64.c (aarch64_rtx_mult_cost): Change type of
"code" param, and pass as outer_code to first rtx_cost call. Pass
mode to rtx_cost calls.
(aarch64_address_cost, aarch64_if_then_else_costs): Update rtx_cost
calls.
(aarch64_rtx_costs_wrapper): Update.
* config/arm/arm.c (arm_rtx_costs_1, arm_size_rtx_costs,
arm_unspec_cost, arm_new_rtx_costs, arm_slowmul_rtx_costs): Update
rtx_cost calls.
* config/avr/avr.c (avr_final_prescan_insn): Update set_src_cost
and rtx_cost calls.
(avr_operand_rtx_cost): Similarly.
(avr_rtx_costs_1): Correct mode passed to avr_operand_rtx_cost
for subexpressions of ZERO_EXTEND, SIGN_EXTEND and COMPARE.
* config/mips/mips.c (mips_stack_address_p): Comment typo.
(mips_binary_cost): Update rtx_cost and set_src_cost calls.
(mips_rtx_costs): Use GET_MODE (x) to detect const_int.
* config/mn10300/mn10300.c (mn10300_address_cost): Pass Pmode to
rtx_cost.
(mn10300_rtx_costs): Correct mode passed to mn10300_address_cost.
* config/rs6000/rs6000.c (rs6000_debug_rtx_costs): Update.
* config/sh/sh.c (and_xor_ior_costs): Update rtx_cost call.
* doc/tm.texi: Regenerate.
From-SVN: r225532
+2015-07-08 Alan Modra <amodra@gmail.com>
+
+
2015-07-07 Andrew MacLeod <amacleod@redhat.com>
* tree-core.h: Include symtab.h.
PUT_MODE (mem_tmp, mode);
XEXP (mem_tmp, 0) = new_addr;
- old_cost = (set_src_cost (mem, speed)
+ old_cost = (set_src_cost (mem, mode, speed)
+ set_rtx_cost (PATTERN (inc_insn.insn), speed));
- new_cost = set_src_cost (mem_tmp, speed);
+ new_cost = set_src_cost (mem_tmp, mode, speed);
/* The first item of business is to see if this is profitable. */
if (old_cost < new_cost)
|| (GET_CODE (args[i].value) == SUBREG
&& REG_P (SUBREG_REG (args[i].value)))))
&& args[i].mode != BLKmode
- && set_src_cost (args[i].value, optimize_insn_for_speed_p ())
- > COSTS_N_INSNS (1)
+ && (set_src_cost (args[i].value, args[i].mode,
+ optimize_insn_for_speed_p ())
+ > COSTS_N_INSNS (1))
&& ((*reg_parm_seen
&& targetm.small_register_classes_for_mode_p (args[i].mode))
|| optimize))
>> 1))
== 0)))
{
- rtx temp = gen_rtx_ZERO_EXTEND (GET_MODE (x), XEXP (x, 0));
+ machine_mode mode = GET_MODE (x);
+ rtx temp = gen_rtx_ZERO_EXTEND (mode, XEXP (x, 0));
rtx temp2 = expand_compound_operation (temp);
/* Make sure this is a profitable operation. */
- if (set_src_cost (x, optimize_this_for_speed_p)
- > set_src_cost (temp2, optimize_this_for_speed_p))
+ if (set_src_cost (x, mode, optimize_this_for_speed_p)
+ > set_src_cost (temp2, mode, optimize_this_for_speed_p))
return temp2;
- else if (set_src_cost (x, optimize_this_for_speed_p)
- > set_src_cost (temp, optimize_this_for_speed_p))
+ else if (set_src_cost (x, mode, optimize_this_for_speed_p)
+ > set_src_cost (temp, mode, optimize_this_for_speed_p))
return temp;
else
return x;
/* Prefer ZERO_EXTENSION, since it gives more information to
backends. */
- if (set_src_cost (temp, optimize_this_for_speed_p)
- <= set_src_cost (temp1, optimize_this_for_speed_p))
+ if (set_src_cost (temp, mode, optimize_this_for_speed_p)
+ <= set_src_cost (temp1, mode, optimize_this_for_speed_p))
return temp;
return temp1;
}
/* Prefer ZERO_EXTENSION, since it gives more information to
backends. */
- if (set_src_cost (temp1, optimize_this_for_speed_p)
- < set_src_cost (temp, optimize_this_for_speed_p))
+ if (set_src_cost (temp1, pos_mode, optimize_this_for_speed_p)
+ < set_src_cost (temp, pos_mode, optimize_this_for_speed_p))
temp = temp1;
}
pos_rtx = temp;
y = simplify_gen_binary (AND, GET_MODE (x), XEXP (x, 0),
gen_int_mode (cval, GET_MODE (x)));
- if (set_src_cost (y, optimize_this_for_speed_p)
- < set_src_cost (x, optimize_this_for_speed_p))
+ if (set_src_cost (y, GET_MODE (x), optimize_this_for_speed_p)
+ < set_src_cost (x, GET_MODE (x), optimize_this_for_speed_p))
x = y;
}
tmp = apply_distributive_law (simplify_gen_binary (inner_code, mode,
new_op0, new_op1));
if (GET_CODE (tmp) != outer_code
- && (set_src_cost (tmp, optimize_this_for_speed_p)
- < set_src_cost (x, optimize_this_for_speed_p)))
+ && (set_src_cost (tmp, mode, optimize_this_for_speed_p)
+ < set_src_cost (x, mode, optimize_this_for_speed_p)))
return tmp;
return NULL_RTX;
operands where needed. */
static int
-aarch64_rtx_mult_cost (rtx x, int code, int outer, bool speed)
+aarch64_rtx_mult_cost (rtx x, enum rtx_code code, int outer, bool speed)
{
rtx op0, op1;
const struct cpu_cost_table *extra_cost
if (is_extend)
op0 = aarch64_strip_extend (op0);
- cost += rtx_cost (op0, GET_CODE (op0), 0, speed);
+ cost += rtx_cost (op0, VOIDmode, code, 0, speed);
return cost;
}
|| (GET_CODE (op0) == SIGN_EXTEND
&& GET_CODE (op1) == SIGN_EXTEND))
{
- cost += rtx_cost (XEXP (op0, 0), MULT, 0, speed)
- + rtx_cost (XEXP (op1, 0), MULT, 1, speed);
+ cost += rtx_cost (XEXP (op0, 0), VOIDmode, MULT, 0, speed);
+ cost += rtx_cost (XEXP (op1, 0), VOIDmode, MULT, 1, speed);
if (speed)
{
/* This is either an integer multiply or a MADD. In both cases
we want to recurse and cost the operands. */
- cost += rtx_cost (op0, MULT, 0, speed)
- + rtx_cost (op1, MULT, 1, speed);
+ cost += rtx_cost (op0, mode, MULT, 0, speed);
+ cost += rtx_cost (op1, mode, MULT, 1, speed);
if (speed)
{
cost += extra_cost->fp[mode == DFmode].mult;
}
- cost += rtx_cost (op0, MULT, 0, speed)
- + rtx_cost (op1, MULT, 1, speed);
+ cost += rtx_cost (op0, mode, MULT, 0, speed);
+ cost += rtx_cost (op1, mode, MULT, 1, speed);
return cost;
}
}
/* This is a CONST or SYMBOL ref which will be split
in a different way depending on the code model in use.
Cost it through the generic infrastructure. */
- int cost_symbol_ref = rtx_cost (x, MEM, 1, speed);
+ int cost_symbol_ref = rtx_cost (x, Pmode, MEM, 1, speed);
/* Divide through by the cost of one instruction to
bring it to the same units as the address costs. */
cost_symbol_ref /= COSTS_N_INSNS (1);
/* TBZ/TBNZ/CBZ/CBNZ. */
if (GET_CODE (inner) == ZERO_EXTRACT)
/* TBZ/TBNZ. */
- *cost += rtx_cost (XEXP (inner, 0), ZERO_EXTRACT,
- 0, speed);
- else
- /* CBZ/CBNZ. */
- *cost += rtx_cost (inner, cmpcode, 0, speed);
+ *cost += rtx_cost (XEXP (inner, 0), VOIDmode,
+ ZERO_EXTRACT, 0, speed);
+ else
+ /* CBZ/CBNZ. */
+ *cost += rtx_cost (inner, VOIDmode, cmpcode, 0, speed);
return true;
}
|| (GET_CODE (op1) == PLUS && XEXP (op1, 1) == const1_rtx))
op1 = XEXP (op1, 0);
- *cost += rtx_cost (op1, IF_THEN_ELSE, 1, speed);
- *cost += rtx_cost (op2, IF_THEN_ELSE, 2, speed);
+ *cost += rtx_cost (op1, VOIDmode, IF_THEN_ELSE, 1, speed);
+ *cost += rtx_cost (op2, VOIDmode, IF_THEN_ELSE, 2, speed);
return true;
}
/* Calculate the cost of calculating X, storing it in *COST. Result
is true if the total cost of the operation has now been calculated. */
static bool
-aarch64_rtx_costs (rtx x, int code, int outer ATTRIBUTE_UNUSED,
+aarch64_rtx_costs (rtx x, machine_mode mode, int outer ATTRIBUTE_UNUSED,
int param ATTRIBUTE_UNUSED, int *cost, bool speed)
{
rtx op0, op1, op2;
const struct cpu_cost_table *extra_cost
= aarch64_tune_params.insn_extra_cost;
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
/* By default, assume that everything has equivalent cost to the
cheapest instruction. Any additional costs are applied as a delta
0, speed));
}
- *cost += rtx_cost (op1, SET, 1, speed);
+ *cost += rtx_cost (op1, mode, SET, 1, speed);
return true;
case SUBREG:
if (! REG_P (SUBREG_REG (op0)))
- *cost += rtx_cost (SUBREG_REG (op0), SET, 0, speed);
+ *cost += rtx_cost (SUBREG_REG (op0), VOIDmode, SET, 0, speed);
/* Fall through. */
case REG:
}
else
/* Cost is just the cost of the RHS of the set. */
- *cost += rtx_cost (op1, SET, 1, speed);
+ *cost += rtx_cost (op1, mode, SET, 1, speed);
return true;
case ZERO_EXTRACT:
/* BFM. */
if (speed)
*cost += extra_cost->alu.bfi;
- *cost += rtx_cost (op1, (enum rtx_code) code, 1, speed);
+ *cost += rtx_cost (op1, VOIDmode, (enum rtx_code) code, 1, speed);
}
return true;
return false;
}
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT)
- {
+ if (GET_MODE_CLASS (mode) == MODE_INT)
+ {
if (GET_RTX_CLASS (GET_CODE (op0)) == RTX_COMPARE
|| GET_RTX_CLASS (GET_CODE (op0)) == RTX_COMM_COMPARE)
{
/* CSETM. */
- *cost += rtx_cost (XEXP (op0, 0), NEG, 0, speed);
+ *cost += rtx_cost (XEXP (op0, 0), VOIDmode, NEG, 0, speed);
return true;
}
/* Cost this as SUB wzr, X. */
- op0 = CONST0_RTX (GET_MODE (x));
+ op0 = CONST0_RTX (mode);
op1 = XEXP (x, 0);
goto cost_minus;
}
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
{
/* Support (neg(fma...)) as a single instruction only if
sign of zeros is unimportant. This matches the decision
if (GET_CODE (op0) == FMA && !HONOR_SIGNED_ZEROS (GET_MODE (op0)))
{
/* FNMADD. */
- *cost = rtx_cost (op0, NEG, 0, speed);
+ *cost = rtx_cost (op0, mode, NEG, 0, speed);
return true;
}
if (speed)
&& GET_CODE (op0) == AND)
{
x = op0;
+ mode = GET_MODE (op0);
goto cost_logic;
}
needs encoding in the cost tables. */
/* CC_ZESWPmode supports zero extend for free. */
- if (GET_MODE (x) == CC_ZESWPmode && GET_CODE (op0) == ZERO_EXTEND)
+ if (mode == CC_ZESWPmode && GET_CODE (op0) == ZERO_EXTEND)
op0 = XEXP (op0, 0);
+ mode = GET_MODE (op0);
/* ANDS. */
if (GET_CODE (op0) == AND)
{
if (speed)
*cost += extra_cost->alu.arith;
- *cost += rtx_cost (op0, COMPARE, 0, speed);
- *cost += rtx_cost (XEXP (op1, 0), NEG, 1, speed);
+ *cost += rtx_cost (op0, mode, COMPARE, 0, speed);
+ *cost += rtx_cost (XEXP (op1, 0), mode, NEG, 1, speed);
return true;
}
if (CONST_DOUBLE_P (op1) && aarch64_float_const_zero_rtx_p (op1))
{
- *cost += rtx_cost (op0, COMPARE, 0, speed);
+ *cost += rtx_cost (op0, VOIDmode, COMPARE, 0, speed);
/* FCMP supports constant 0.0 for no extra cost. */
return true;
}
op1 = XEXP (x, 1);
cost_minus:
- *cost += rtx_cost (op0, MINUS, 0, speed);
+ *cost += rtx_cost (op0, mode, MINUS, 0, speed);
/* Detect valid immediates. */
if ((GET_MODE_CLASS (mode) == MODE_INT
if (speed)
*cost += extra_cost->alu.extend_arith;
- *cost += rtx_cost (XEXP (XEXP (op1, 0), 0),
- (enum rtx_code) GET_CODE (op1),
- 0, speed);
+ *cost += rtx_cost (XEXP (XEXP (op1, 0), 0), VOIDmode,
+ (enum rtx_code) GET_CODE (op1), 0, speed);
return true;
}
return true;
}
- *cost += rtx_cost (new_op1, MINUS, 1, speed);
+ *cost += rtx_cost (new_op1, VOIDmode, MINUS, 1, speed);
if (speed)
{
|| GET_RTX_CLASS (GET_CODE (op0)) == RTX_COMM_COMPARE)
{
/* CSINC. */
- *cost += rtx_cost (XEXP (op0, 0), PLUS, 0, speed);
- *cost += rtx_cost (op1, PLUS, 1, speed);
+ *cost += rtx_cost (XEXP (op0, 0), mode, PLUS, 0, speed);
+ *cost += rtx_cost (op1, mode, PLUS, 1, speed);
return true;
}
&& CONST_INT_P (op1)
&& aarch64_uimm12_shift (INTVAL (op1)))
{
- *cost += rtx_cost (op0, PLUS, 0, speed);
+ *cost += rtx_cost (op0, mode, PLUS, 0, speed);
if (speed)
/* ADD (immediate). */
return true;
}
- *cost += rtx_cost (op1, PLUS, 1, speed);
+ *cost += rtx_cost (op1, mode, PLUS, 1, speed);
/* Look for ADD (extended register). */
if (aarch64_rtx_arith_op_extract_p (op0, mode))
if (speed)
*cost += extra_cost->alu.extend_arith;
- *cost += rtx_cost (XEXP (XEXP (op0, 0), 0),
- (enum rtx_code) GET_CODE (op0),
- 0, speed);
+ *cost += rtx_cost (XEXP (XEXP (op0, 0), 0), VOIDmode,
+ (enum rtx_code) GET_CODE (op0), 0, speed);
return true;
}
return true;
}
- *cost += rtx_cost (new_op0, PLUS, 0, speed);
+ *cost += rtx_cost (new_op0, VOIDmode, PLUS, 0, speed);
if (speed)
{
if (aarch64_extr_rtx_p (x, &op0, &op1))
{
- *cost += rtx_cost (op0, IOR, 0, speed)
- + rtx_cost (op1, IOR, 1, speed);
+ *cost += rtx_cost (op0, mode, IOR, 0, speed);
+ *cost += rtx_cost (op1, mode, IOR, 1, speed);
if (speed)
*cost += extra_cost->alu.shift;
INTVAL (op1)) != 0)
{
/* This is a UBFM/SBFM. */
- *cost += rtx_cost (XEXP (op0, 0), ZERO_EXTRACT, 0, speed);
+ *cost += rtx_cost (XEXP (op0, 0), mode, ZERO_EXTRACT, 0, speed);
if (speed)
*cost += extra_cost->alu.bfx;
return true;
}
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT)
+ if (GET_MODE_CLASS (mode) == MODE_INT)
{
/* We possibly get the immediate for free, this is not
modelled. */
if (CONST_INT_P (op1)
- && aarch64_bitmask_imm (INTVAL (op1), GET_MODE (x)))
+ && aarch64_bitmask_imm (INTVAL (op1), mode))
{
- *cost += rtx_cost (op0, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (op0, mode, (enum rtx_code) code, 0, speed);
if (speed)
*cost += extra_cost->alu.logical;
}
/* In both cases we want to cost both operands. */
- *cost += rtx_cost (new_op0, (enum rtx_code) code, 0, speed)
- + rtx_cost (op1, (enum rtx_code) code, 1, speed);
+ *cost += rtx_cost (new_op0, mode, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (op1, mode, (enum rtx_code) code, 1, speed);
return true;
}
/* MVN-shifted-reg. */
if (op0 != x)
{
- *cost += rtx_cost (op0, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (op0, mode, (enum rtx_code) code, 0, speed);
if (speed)
*cost += extra_cost->alu.log_shift;
rtx newop1 = XEXP (op0, 1);
rtx op0_stripped = aarch64_strip_shift (newop0);
- *cost += rtx_cost (newop1, (enum rtx_code) code, 1, speed)
- + rtx_cost (op0_stripped, XOR, 0, speed);
+ *cost += rtx_cost (newop1, mode, (enum rtx_code) code, 1, speed);
+ *cost += rtx_cost (op0_stripped, mode, XOR, 0, speed);
if (speed)
{
&& GET_MODE (op0) == SImode
&& outer == SET)
{
- int op_cost = rtx_cost (XEXP (x, 0), ZERO_EXTEND, 0, speed);
+ int op_cost = rtx_cost (op0, VOIDmode, ZERO_EXTEND, 0, speed);
if (!op_cost && speed)
/* MOV. */
return true;
}
- else if (MEM_P (XEXP (x, 0)))
+ else if (MEM_P (op0))
{
/* All loads can zero extend to any size for free. */
- *cost = rtx_cost (XEXP (x, 0), ZERO_EXTEND, param, speed);
+ *cost = rtx_cost (op0, VOIDmode, ZERO_EXTEND, param, speed);
return true;
}
|| GET_CODE (op0) == SIGN_EXTEND)
op0 = XEXP (op0, 0);
- *cost += rtx_cost (op0, ASHIFT, 0, speed);
+ *cost += rtx_cost (op0, VOIDmode, ASHIFT, 0, speed);
return true;
}
else
*cost += extra_cost->alu.shift;
}
- *cost += rtx_cost (op0, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (op0, mode, (enum rtx_code) code, 0, speed);
return true;
}
else
/* We can trust that the immediates used will be correct (there
are no by-register forms), so we need only cost op0. */
- *cost += rtx_cost (XEXP (x, 0), (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, (enum rtx_code) code, 0, speed);
return true;
case MULT:
{
if (VECTOR_MODE_P (mode))
*cost += extra_cost->vect.alu;
- else if (GET_MODE_CLASS (GET_MODE (x)) == MODE_INT)
- *cost += (extra_cost->mult[GET_MODE (x) == DImode].add
- + extra_cost->mult[GET_MODE (x) == DImode].idiv);
- else if (GET_MODE (x) == DFmode)
+ else if (GET_MODE_CLASS (mode) == MODE_INT)
+ *cost += (extra_cost->mult[mode == DImode].add
+ + extra_cost->mult[mode == DImode].idiv);
+ else if (mode == DFmode)
*cost += (extra_cost->fp[1].mult
+ extra_cost->fp[1].div);
- else if (GET_MODE (x) == SFmode)
+ else if (mode == SFmode)
*cost += (extra_cost->fp[0].mult
+ extra_cost->fp[0].div);
}
/* If the remaining parameters are not registers,
get the cost to put them into registers. */
- *cost += rtx_cost (op0, FMA, 0, speed);
- *cost += rtx_cost (op1, FMA, 1, speed);
- *cost += rtx_cost (op2, FMA, 2, speed);
+ *cost += rtx_cost (op0, mode, FMA, 0, speed);
+ *cost += rtx_cost (op1, mode, FMA, 1, speed);
+ *cost += rtx_cost (op2, mode, FMA, 2, speed);
return true;
case FLOAT:
else
*cost += extra_cost->fp[GET_MODE (x) == DFmode].toint;
}
- *cost += rtx_cost (x, (enum rtx_code) code, 0, speed);
+ *cost += rtx_cost (x, VOIDmode, (enum rtx_code) code, 0, speed);
return true;
case ABS:
/* FABD, which is analogous to FADD. */
if (GET_CODE (op0) == MINUS)
{
- *cost += rtx_cost (XEXP (op0, 0), MINUS, 0, speed);
- + rtx_cost (XEXP (op0, 1), MINUS, 1, speed);
+ *cost += rtx_cost (XEXP (op0, 0), mode, MINUS, 0, speed);
+ *cost += rtx_cost (XEXP (op0, 1), mode, MINUS, 1, speed);
if (speed)
*cost += extra_cost->fp[mode == DFmode].addsub;
/* UMULH/SMULH. */
if (speed)
*cost += extra_cost->mult[mode == DImode].extend;
- *cost += rtx_cost (XEXP (XEXP (XEXP (XEXP (x, 0), 0), 0), 0),
- MULT, 0, speed);
- *cost += rtx_cost (XEXP (XEXP (XEXP (XEXP (x, 0), 0), 1), 0),
- MULT, 1, speed);
+ *cost += rtx_cost (XEXP (XEXP (XEXP (XEXP (x, 0), 0), 0), 0),
+ mode, MULT, 0, speed);
+ *cost += rtx_cost (XEXP (XEXP (XEXP (XEXP (x, 0), 0), 1), 0),
+ mode, MULT, 1, speed);
return true;
}
calculated for X. This cost is stored in *COST. Returns true
if the total cost of X was calculated. */
static bool
-aarch64_rtx_costs_wrapper (rtx x, int code, int outer,
+aarch64_rtx_costs_wrapper (rtx x, machine_mode mode, int outer,
int param, int *cost, bool speed)
{
- bool result = aarch64_rtx_costs (x, code, outer, param, cost, speed);
+ bool result = aarch64_rtx_costs (x, mode, outer, param, cost, speed);
if (dump_file && (dump_flags & TDF_DETAILS))
{
scanned. In either case, *TOTAL contains the cost result. */
static bool
-alpha_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
+alpha_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno, int *total,
bool speed)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
bool float_mode_p = FLOAT_MODE_P (mode);
const struct alpha_rtx_cost_data *cost_data;
else if (GET_CODE (XEXP (x, 0)) == MULT
&& const48_operand (XEXP (XEXP (x, 0), 1), VOIDmode))
{
- *total = (rtx_cost (XEXP (XEXP (x, 0), 0),
+ *total = (rtx_cost (XEXP (XEXP (x, 0), 0), mode,
(enum rtx_code) outer_code, opno, speed)
- + rtx_cost (XEXP (x, 1),
+ + rtx_cost (XEXP (x, 1), mode,
(enum rtx_code) outer_code, opno, speed)
+ COSTS_N_INSNS (1));
return true;
scanned. In either case, *TOTAL contains the cost result. */
static bool
-arc_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+arc_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
+ int code = GET_CODE (x);
+
switch (code)
{
/* Small integers are as cheap as registers. */
if (CONSTANT_P (XEXP (x, 0)))
{
*total += (COSTS_N_INSNS (2)
- + rtx_cost (XEXP (x, 1), (enum rtx_code) code, 0, speed));
+ + rtx_cost (XEXP (x, 1), mode, (enum rtx_code) code,
+ 0, speed));
return true;
}
*total = COSTS_N_INSNS (1);
if (GET_CODE (XEXP (x, 0)) == MULT
&& _2_4_8_operand (XEXP (XEXP (x, 0), 1), VOIDmode))
{
- *total += (rtx_cost (XEXP (x, 1), PLUS, 0, speed)
- + rtx_cost (XEXP (XEXP (x, 0), 0), PLUS, 1, speed));
+ *total += (rtx_cost (XEXP (x, 1), mode, PLUS, 0, speed)
+ + rtx_cost (XEXP (XEXP (x, 0), 0), mode, PLUS, 1, speed));
return true;
}
return false;
if (GET_CODE (XEXP (x, 1)) == MULT
&& _2_4_8_operand (XEXP (XEXP (x, 1), 1), VOIDmode))
{
- *total += (rtx_cost (XEXP (x, 0), PLUS, 0, speed)
- + rtx_cost (XEXP (XEXP (x, 1), 0), PLUS, 1, speed));
+ *total += (rtx_cost (XEXP (x, 0), mode, PLUS, 0, speed)
+ + rtx_cost (XEXP (XEXP (x, 1), 0), mode, PLUS, 1, speed));
return true;
}
return false;
/* btst / bbit0 / bbit1:
Small integers and registers are free; everything else can
be put in a register. */
- *total = (rtx_cost (XEXP (op0, 0), SET, 1, speed)
- + rtx_cost (XEXP (op0, 2), SET, 1, speed));
+ mode = GET_MODE (XEXP (op0, 0));
+ *total = (rtx_cost (XEXP (op0, 0), mode, SET, 1, speed)
+ + rtx_cost (XEXP (op0, 2), mode, SET, 1, speed));
return true;
}
if (GET_CODE (op0) == AND && op1 == const0_rtx
&& satisfies_constraint_C1p (XEXP (op0, 1)))
{
/* bmsk.f */
- *total = rtx_cost (XEXP (op0, 0), SET, 1, speed);
+ *total = rtx_cost (XEXP (op0, 0), VOIDmode, SET, 1, speed);
return true;
}
/* add.f */
/* op0 might be constant, the inside of op1 is rather
unlikely to be so. So swapping the operands might lower
the cost. */
- *total = (rtx_cost (op0, PLUS, 1, speed)
- + rtx_cost (XEXP (op1, 0), PLUS, 0, speed));
+ mode = GET_MODE (op0);
+ *total = (rtx_cost (op0, mode, PLUS, 1, speed)
+ + rtx_cost (XEXP (op1, 0), mode, PLUS, 0, speed));
}
return false;
}
be put in a register. */
rtx op0 = XEXP (x, 0);
- *total = (rtx_cost (XEXP (op0, 0), SET, 1, speed)
- + rtx_cost (XEXP (op0, 2), SET, 1, speed));
+ mode = GET_MODE (XEXP (op0, 0));
+ *total = (rtx_cost (XEXP (op0, 0), mode, SET, 1, speed)
+ + rtx_cost (XEXP (op0, 2), mode, SET, 1, speed));
return true;
}
/* Fall through. */
/* scc_insn expands into two insns. */
case GTU: case GEU: case LEU:
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
*total += COSTS_N_INSNS (1);
return false;
case LTU: /* might use adc. */
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
*total += COSTS_N_INSNS (1) - 1;
return false;
default:
static bool arm_fastmul_rtx_costs (rtx, enum rtx_code, enum rtx_code, int *, bool);
static bool arm_xscale_rtx_costs (rtx, enum rtx_code, enum rtx_code, int *, bool);
static bool arm_9e_rtx_costs (rtx, enum rtx_code, enum rtx_code, int *, bool);
-static bool arm_rtx_costs (rtx, int, int, int, int *, bool);
+static bool arm_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int arm_address_cost (rtx, machine_mode, addr_space_t, bool);
static int arm_register_move_cost (machine_mode, reg_class_t, reg_class_t);
static int arm_memory_move_cost (machine_mode, reg_class_t, bool);
if (REG_P (XEXP (x, 1)))
*total = COSTS_N_INSNS (1); /* Need to subtract from 32 */
else if (!CONST_INT_P (XEXP (x, 1)))
- *total = rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total = rtx_cost (XEXP (x, 1), mode, code, 1, speed);
/* Fall through */
case ROTATERT:
/* Fall through */
case ASHIFT: case LSHIFTRT: case ASHIFTRT:
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
if (mode == DImode)
{
*total += COSTS_N_INSNS (3);
if (CONST_INT_P (XEXP (x, 0))
&& const_ok_for_arm (INTVAL (XEXP (x, 0))))
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
}
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_arm (INTVAL (XEXP (x, 1))))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
if (CONST_DOUBLE_P (XEXP (x, 0))
&& arm_const_double_rtx (XEXP (x, 0)))
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
}
if (CONST_DOUBLE_P (XEXP (x, 1))
&& arm_const_double_rtx (XEXP (x, 1)))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
if (CONST_INT_P (XEXP (x, 0))
&& const_ok_for_arm (INTVAL (XEXP (x, 0))))
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
}
|| subcode == LSHIFTRT
|| subcode == ROTATE || subcode == ROTATERT)
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
- *total += rtx_cost (XEXP (XEXP (x, 1), 0), subcode, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 1), 0), mode, subcode, 0, speed);
return true;
}
if (GET_CODE (XEXP (x, 0)) == MULT
&& power_of_two_operand (XEXP (XEXP (x, 0), 1), SImode))
{
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), code, 0, speed);
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, code, 0, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
}
if (subcode == MULT
&& power_of_two_operand (XEXP (XEXP (x, 1), 1), SImode))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
- *total += rtx_cost (XEXP (XEXP (x, 1), 0), subcode, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 1), 0), mode, subcode, 0, speed);
return true;
}
if (GET_RTX_CLASS (GET_CODE (XEXP (x, 1))) == RTX_COMPARE
|| GET_RTX_CLASS (GET_CODE (XEXP (x, 1))) == RTX_COMM_COMPARE)
{
- *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, speed);
if (REG_P (XEXP (XEXP (x, 1), 0))
&& REGNO (XEXP (XEXP (x, 1), 0)) != CC_REGNUM)
*total += COSTS_N_INSNS (1);
|| GET_CODE (XEXP (x, 0)) == SIGN_EXTEND))
{
*total = COSTS_N_INSNS (1);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), GET_CODE (XEXP (x, 0)),
- 0, speed);
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode,
+ GET_CODE (XEXP (x, 0)), 0, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
}
if (CONST_DOUBLE_P (XEXP (x, 1))
&& arm_const_double_rtx (XEXP (x, 1)))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
if (GET_RTX_CLASS (GET_CODE (XEXP (x, 0))) == RTX_COMPARE
|| GET_RTX_CLASS (GET_CODE (XEXP (x, 0))) == RTX_COMM_COMPARE)
{
- *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 1), mode, code,
+ 1, speed);
if (REG_P (XEXP (XEXP (x, 0), 0))
&& REGNO (XEXP (XEXP (x, 0), 0)) != CC_REGNUM)
*total += COSTS_N_INSNS (1);
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_op (INTVAL (XEXP (x, 1)), code))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_op (INTVAL (XEXP (x, 1)), code))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
subcode = GET_CODE (XEXP (x, 0));
|| subcode == LSHIFTRT
|| subcode == ROTATE || subcode == ROTATERT)
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), subcode, 0, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, subcode, 0, speed);
return true;
}
if (subcode == MULT
&& power_of_two_operand (XEXP (XEXP (x, 0), 1), SImode))
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), subcode, 0, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, subcode, 0, speed);
return true;
}
&& (GET_CODE (XEXP (XEXP (XEXP (x, 0), 0), 0)) == ZERO_EXTEND
|| GET_CODE (XEXP (XEXP (XEXP (x, 0), 0), 0)) == SIGN_EXTEND))
{
- *total = rtx_cost (XEXP (XEXP (x, 0), 0), LSHIFTRT, 0, speed);
+ *total = rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode, LSHIFTRT,
+ 0, speed);
return true;
}
*total = COSTS_N_INSNS (2); /* Plus the cost of the MULT */
|| (subcode == MULT
&& power_of_two_operand (XEXP (XEXP (x, 0), 1), SImode)))
{
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), subcode, 0, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, subcode,
+ 0, speed);
/* Register shifts cost an extra cycle. */
if (!CONST_INT_P (XEXP (XEXP (x, 0), 1)))
*total += COSTS_N_INSNS (1) + rtx_cost (XEXP (XEXP (x, 0), 1),
- subcode, 1, speed);
+ mode, subcode,
+ 1, speed);
return true;
}
}
&& REG_P (XEXP (operand, 0))
&& REGNO (XEXP (operand, 0)) == CC_REGNUM))
*total += COSTS_N_INSNS (1);
- *total += (rtx_cost (XEXP (x, 1), code, 1, speed)
- + rtx_cost (XEXP (x, 2), code, 2, speed));
+ *total += rtx_cost (XEXP (x, 1), VOIDmode, code, 1, speed);
+ *total += rtx_cost (XEXP (x, 2), VOIDmode, code, 2, speed);
return true;
case NE:
if (mode == SImode && XEXP (x, 1) == const0_rtx)
{
- *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, speed);
return true;
}
goto scc_insn;
if ((!REG_P (XEXP (x, 0)) || REGNO (XEXP (x, 0)) != CC_REGNUM)
&& mode == SImode && XEXP (x, 1) == const0_rtx)
{
- *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, speed);
return true;
}
goto scc_insn;
if ((!REG_P (XEXP (x, 0)) || REGNO (XEXP (x, 0)) != CC_REGNUM)
&& mode == SImode && XEXP (x, 1) == const0_rtx)
{
- *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, speed);
return true;
}
goto scc_insn;
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_op (INTVAL (XEXP (x, 1)), code))
{
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed);
return true;
}
|| subcode == LSHIFTRT
|| subcode == ROTATE || subcode == ROTATERT)
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), subcode, 0, speed);
+ mode = GET_MODE (XEXP (x, 0));
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, subcode, 0, speed);
return true;
}
if (subcode == MULT
&& power_of_two_operand (XEXP (XEXP (x, 0), 1), SImode))
{
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), subcode, 0, speed);
+ mode = GET_MODE (XEXP (x, 0));
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, subcode, 0, speed);
return true;
}
case UMAX:
case SMIN:
case SMAX:
- *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), mode, code, 0, speed);
if (!CONST_INT_P (XEXP (x, 1))
|| !const_ok_for_arm (INTVAL (XEXP (x, 1))))
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
return true;
case ABS:
case ZERO_EXTRACT:
case SIGN_EXTRACT:
- *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), code, 0, speed);
+ mode = GET_MODE (XEXP (x, 0));
+ *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
case CONST_INT:
case LO_SUM:
*total = COSTS_N_INSNS (1);
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
case CONST_DOUBLE:
if (TARGET_NEON && MEM_P (SET_DEST (x))
&& GET_CODE (SET_SRC (x)) == VEC_SELECT)
{
- *total = rtx_cost (SET_DEST (x), code, 0, speed);
+ mode = GET_MODE (SET_DEST (x));
+ *total = rtx_cost (SET_DEST (x), mode, code, 0, speed);
if (!neon_vector_mem_operand (SET_DEST (x), 2, true))
*total += COSTS_N_INSNS (1);
return true;
&& MEM_P (XEXP (XEXP (SET_SRC (x), 0), 0)))
{
rtx mem = XEXP (XEXP (SET_SRC (x), 0), 0);
- *total = rtx_cost (mem, code, 0, speed);
+ mode = GET_MODE (SET_DEST (x));
+ *total = rtx_cost (mem, mode, code, 0, speed);
if (!neon_vector_mem_operand (mem, 2, true))
*total += COSTS_N_INSNS (1);
return true;
case ROTATE:
if (mode == SImode && REG_P (XEXP (x, 1)))
{
- *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), code, 0, false);
+ *total = COSTS_N_INSNS (2) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, false);
return true;
}
/* Fall through */
case ASHIFTRT:
if (mode == DImode && CONST_INT_P (XEXP (x, 1)))
{
- *total = COSTS_N_INSNS (3) + rtx_cost (XEXP (x, 0), code, 0, false);
+ *total = COSTS_N_INSNS (3) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, false);
return true;
}
else if (mode == SImode)
{
- *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), code, 0, false);
+ *total = COSTS_N_INSNS (1) + rtx_cost (XEXP (x, 0), mode, code,
+ 0, false);
/* Slightly disparage register shifts, but not by much. */
if (!CONST_INT_P (XEXP (x, 1)))
- *total += 1 + rtx_cost (XEXP (x, 1), code, 1, false);
+ *total += 1 + rtx_cost (XEXP (x, 1), mode, code, 1, false);
return true;
}
&& power_of_two_operand (XEXP (XEXP (x, 0), 1), SImode))
{
*total = COSTS_N_INSNS (TARGET_THUMB2 ? 2 : 1);
- *total += rtx_cost (XEXP (XEXP (x, 0), 0), code, 0, false);
- *total += rtx_cost (XEXP (x, 1), code, 1, false);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode, code, 0, false);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, false);
return true;
}
*cost += (ARM_NUM_REGS (GET_MODE (x)) * extra_cost->ldst.store
+ extra_cost->ldst.store_unaligned);
- *cost += rtx_cost (XVECEXP (x, 0, 0), UNSPEC, 0, speed_p);
+ *cost += rtx_cost (XVECEXP (x, 0, 0), VOIDmode, UNSPEC, 0, speed_p);
#ifdef NOT_YET
*cost += arm_address_cost (XEXP (XVECEXP (x, 0, 0), 0), GET_MODE (x),
ADDR_SPACE_GENERIC, speed_p);
if (shift_reg) \
{ \
if (speed_p) \
- *cost += extra_cost->alu.arith_shift_reg; \
- *cost += rtx_cost (shift_reg, ASHIFT, 1, speed_p); \
+ *cost += extra_cost->alu.arith_shift_reg; \
+ *cost += rtx_cost (shift_reg, GET_MODE (shift_reg), \
+ ASHIFT, 1, speed_p); \
} \
else if (speed_p) \
- *cost += extra_cost->alu.arith_shift; \
+ *cost += extra_cost->alu.arith_shift; \
\
- *cost += (rtx_cost (shift_op, ASHIFT, 0, speed_p) \
+ *cost += (rtx_cost (shift_op, GET_MODE (shift_op), \
+ ASHIFT, 0, speed_p) \
+ rtx_cost (XEXP (x, 1 - IDX), \
- OP, 1, speed_p)); \
+ GET_MODE (shift_op), \
+ OP, 1, speed_p)); \
return true; \
} \
} \
{
/* Handle CONST_INT here, since the value doesn't have a mode
and we would otherwise be unable to work out the true cost. */
- *cost = rtx_cost (SET_DEST (x), SET, 0, speed_p);
+ *cost = rtx_cost (SET_DEST (x), GET_MODE (SET_DEST (x)), SET,
+ 0, speed_p);
outer_code = SET;
/* Slightly lower the cost of setting a core reg to a constant.
This helps break up chains and allows for better scheduling. */
if (mode == SImode && REG_P (XEXP (x, 1)))
{
*cost = (COSTS_N_INSNS (2)
- + rtx_cost (XEXP (x, 0), code, 0, speed_p));
+ + rtx_cost (XEXP (x, 0), mode, code, 0, speed_p));
if (speed_p)
*cost += extra_cost->alu.shift_reg;
return true;
if (mode == DImode && CONST_INT_P (XEXP (x, 1)))
{
*cost = (COSTS_N_INSNS (3)
- + rtx_cost (XEXP (x, 0), code, 0, speed_p));
+ + rtx_cost (XEXP (x, 0), mode, code, 0, speed_p));
if (speed_p)
*cost += 2 * extra_cost->alu.shift;
return true;
else if (mode == SImode)
{
*cost = (COSTS_N_INSNS (1)
- + rtx_cost (XEXP (x, 0), code, 0, speed_p));
+ + rtx_cost (XEXP (x, 0), mode, code, 0, speed_p));
/* Slightly disparage register shifts at -Os, but not by much. */
if (!CONST_INT_P (XEXP (x, 1)))
*cost += (speed_p ? extra_cost->alu.shift_reg : 1
- + rtx_cost (XEXP (x, 1), code, 1, speed_p));
+ + rtx_cost (XEXP (x, 1), mode, code, 1, speed_p));
return true;
}
else if (GET_MODE_CLASS (mode) == MODE_INT
if (code == ASHIFT)
{
*cost = (COSTS_N_INSNS (1)
- + rtx_cost (XEXP (x, 0), code, 0, speed_p));
+ + rtx_cost (XEXP (x, 0), mode, code, 0, speed_p));
/* Slightly disparage register shifts at -Os, but not by
much. */
if (!CONST_INT_P (XEXP (x, 1)))
*cost += (speed_p ? extra_cost->alu.shift_reg : 1
- + rtx_cost (XEXP (x, 1), code, 1, speed_p));
+ + rtx_cost (XEXP (x, 1), mode, code, 1, speed_p));
}
else if (code == LSHIFTRT || code == ASHIFTRT)
{
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->alu.bfx;
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
}
else
{
*cost = COSTS_N_INSNS (2);
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
if (speed_p)
{
if (CONST_INT_P (XEXP (x, 1)))
else /* Rotates. */
{
*cost = COSTS_N_INSNS (3 + !CONST_INT_P (XEXP (x, 1)));
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
if (speed_p)
{
if (CONST_INT_P (XEXP (x, 1)))
if (GET_CODE (mul_op0) == NEG)
mul_op0 = XEXP (mul_op0, 0);
- *cost += (rtx_cost (mul_op0, code, 0, speed_p)
- + rtx_cost (mul_op1, code, 0, speed_p)
- + rtx_cost (sub_op, code, 0, speed_p));
+ *cost += (rtx_cost (mul_op0, mode, code, 0, speed_p)
+ + rtx_cost (mul_op1, mode, code, 0, speed_p)
+ + rtx_cost (sub_op, mode, code, 0, speed_p));
return true;
}
{
if (speed_p)
*cost += extra_cost->alu.arith_shift_reg;
- *cost += rtx_cost (shift_by_reg, code, 0, speed_p);
+ *cost += rtx_cost (shift_by_reg, mode, code, 0, speed_p);
}
else if (speed_p)
*cost += extra_cost->alu.arith_shift;
- *cost += (rtx_cost (shift_op, code, 0, speed_p)
- + rtx_cost (non_shift_op, code, 0, speed_p));
+ *cost += rtx_cost (shift_op, mode, code, 0, speed_p);
+ *cost += rtx_cost (non_shift_op, mode, code, 0, speed_p);
return true;
}
/* MLS. */
if (speed_p)
*cost += extra_cost->mult[0].add;
- *cost += (rtx_cost (XEXP (x, 0), MINUS, 0, speed_p)
- + rtx_cost (XEXP (XEXP (x, 1), 0), MULT, 0, speed_p)
- + rtx_cost (XEXP (XEXP (x, 1), 1), MULT, 1, speed_p));
+ *cost += rtx_cost (XEXP (x, 0), mode, MINUS, 0, speed_p);
+ *cost += rtx_cost (XEXP (XEXP (x, 1), 0), mode, MULT, 0, speed_p);
+ *cost += rtx_cost (XEXP (XEXP (x, 1), 1), mode, MULT, 1, speed_p);
return true;
}
*cost = COSTS_N_INSNS (insns);
if (speed_p)
*cost += insns * extra_cost->alu.arith;
- *cost += rtx_cost (XEXP (x, 1), code, 1, speed_p);
+ *cost += rtx_cost (XEXP (x, 1), mode, code, 1, speed_p);
return true;
}
else if (speed_p)
if (CONST_INT_P (XEXP (x, 0)))
{
- *cost += rtx_cost (XEXP (x, 1), code, 1, speed_p);
+ *cost += rtx_cost (XEXP (x, 1), mode, code, 1, speed_p);
return true;
}
*cost += 2 * extra_cost->alu.arith;
if (GET_CODE (op1) == ZERO_EXTEND)
- *cost += rtx_cost (XEXP (op1, 0), ZERO_EXTEND, 0, speed_p);
+ *cost += rtx_cost (XEXP (op1, 0), VOIDmode, ZERO_EXTEND,
+ 0, speed_p);
else
- *cost += rtx_cost (op1, MINUS, 1, speed_p);
- *cost += rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND,
+ *cost += rtx_cost (op1, mode, MINUS, 1, speed_p);
+ *cost += rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode, ZERO_EXTEND,
0, speed_p);
return true;
}
{
if (speed_p)
*cost += extra_cost->alu.arith + extra_cost->alu.arith_shift;
- *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), SIGN_EXTEND,
+ *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode, SIGN_EXTEND,
0, speed_p)
- + rtx_cost (XEXP (x, 1), MINUS, 1, speed_p));
+ + rtx_cost (XEXP (x, 1), mode, MINUS, 1, speed_p));
return true;
}
else if (GET_CODE (XEXP (x, 1)) == ZERO_EXTEND
+ (GET_CODE (XEXP (x, 1)) == ZERO_EXTEND
? extra_cost->alu.arith
: extra_cost->alu.arith_shift));
- *cost += (rtx_cost (XEXP (x, 0), MINUS, 0, speed_p)
- + rtx_cost (XEXP (XEXP (x, 1), 0),
+ *cost += (rtx_cost (XEXP (x, 0), mode, MINUS, 0, speed_p)
+ + rtx_cost (XEXP (XEXP (x, 1), 0), VOIDmode,
GET_CODE (XEXP (x, 1)), 0, speed_p));
return true;
}
mul_op1 = XEXP (XEXP (x, 0), 1);
add_op = XEXP (x, 1);
- *cost += (rtx_cost (mul_op0, code, 0, speed_p)
- + rtx_cost (mul_op1, code, 0, speed_p)
- + rtx_cost (add_op, code, 0, speed_p));
+ *cost += (rtx_cost (mul_op0, mode, code, 0, speed_p)
+ + rtx_cost (mul_op1, mode, code, 0, speed_p)
+ + rtx_cost (add_op, mode, code, 0, speed_p));
return true;
}
*cost += insns * extra_cost->alu.arith;
/* Slightly penalize a narrow operation as the result may
need widening. */
- *cost += 1 + rtx_cost (XEXP (x, 0), PLUS, 0, speed_p);
+ *cost += 1 + rtx_cost (XEXP (x, 0), mode, PLUS, 0, speed_p);
return true;
}
/* UXTA[BH] or SXTA[BH]. */
if (speed_p)
*cost += extra_cost->alu.extend_arith;
- *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND, 0,
- speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 0, speed_p));
+ *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode, ZERO_EXTEND,
+ 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 0, speed_p));
return true;
}
{
if (speed_p)
*cost += extra_cost->alu.arith_shift_reg;
- *cost += rtx_cost (shift_reg, ASHIFT, 1, speed_p);
+ *cost += rtx_cost (shift_reg, mode, ASHIFT, 1, speed_p);
}
else if (speed_p)
*cost += extra_cost->alu.arith_shift;
- *cost += (rtx_cost (shift_op, ASHIFT, 0, speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p));
+ *cost += (rtx_cost (shift_op, mode, ASHIFT, 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 1, speed_p));
return true;
}
if (GET_CODE (XEXP (x, 0)) == MULT)
/* SMLA[BT][BT]. */
if (speed_p)
*cost += extra_cost->mult[0].extend_add;
- *cost += (rtx_cost (XEXP (XEXP (mul_op, 0), 0),
+ *cost += (rtx_cost (XEXP (XEXP (mul_op, 0), 0), mode,
SIGN_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (XEXP (mul_op, 1), 0),
+ + rtx_cost (XEXP (XEXP (mul_op, 1), 0), mode,
SIGN_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p));
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 1, speed_p));
return true;
}
if (speed_p)
*cost += extra_cost->mult[0].add;
- *cost += (rtx_cost (XEXP (mul_op, 0), MULT, 0, speed_p)
- + rtx_cost (XEXP (mul_op, 1), MULT, 1, speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p));
+ *cost += (rtx_cost (XEXP (mul_op, 0), mode, MULT, 0, speed_p)
+ + rtx_cost (XEXP (mul_op, 1), mode, MULT, 1, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 1, speed_p));
return true;
}
if (CONST_INT_P (XEXP (x, 1)))
*cost = COSTS_N_INSNS (insns);
if (speed_p)
*cost += insns * extra_cost->alu.arith;
- *cost += rtx_cost (XEXP (x, 0), PLUS, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, PLUS, 0, speed_p);
return true;
}
else if (speed_p)
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->mult[1].extend_add;
- *cost += (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0),
+ *cost += (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), mode,
ZERO_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (XEXP (XEXP (x, 0), 1), 0),
+ + rtx_cost (XEXP (XEXP (XEXP (x, 0), 1), 0), mode,
ZERO_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p));
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 1, speed_p));
return true;
}
? extra_cost->alu.arith
: extra_cost->alu.arith_shift));
- *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), ZERO_EXTEND, 0,
- speed_p)
- + rtx_cost (XEXP (x, 1), PLUS, 1, speed_p));
+ *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode, ZERO_EXTEND,
+ 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, PLUS, 1, speed_p));
return true;
}
{
if (speed_p)
*cost += extra_cost->alu.log_shift_reg;
- *cost += rtx_cost (shift_reg, ASHIFT, 1, speed_p);
+ *cost += rtx_cost (shift_reg, mode, ASHIFT, 1, speed_p);
}
else if (speed_p)
*cost += extra_cost->alu.log_shift;
- *cost += (rtx_cost (shift_op, ASHIFT, 0, speed_p)
- + rtx_cost (XEXP (x, 1), code, 1, speed_p));
+ *cost += (rtx_cost (shift_op, mode, ASHIFT, 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, code, 1, speed_p));
return true;
}
*cost = COSTS_N_INSNS (insns);
if (speed_p)
*cost += insns * extra_cost->alu.logical;
- *cost += rtx_cost (op0, code, 0, speed_p);
+ *cost += rtx_cost (op0, mode, code, 0, speed_p);
return true;
}
if (speed_p)
*cost += extra_cost->alu.logical;
- *cost += (rtx_cost (op0, code, 0, speed_p)
- + rtx_cost (XEXP (x, 1), code, 1, speed_p));
+ *cost += (rtx_cost (op0, mode, code, 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, code, 1, speed_p));
return true;
}
if (speed_p)
*cost += 2 * extra_cost->alu.logical;
- *cost += (rtx_cost (XEXP (op0, 0), ZERO_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (x, 1), code, 0, speed_p));
+ *cost += (rtx_cost (XEXP (op0, 0), VOIDmode, ZERO_EXTEND,
+ 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, code, 0, speed_p));
return true;
}
else if (GET_CODE (op0) == SIGN_EXTEND)
if (speed_p)
*cost += extra_cost->alu.logical + extra_cost->alu.log_shift;
- *cost += (rtx_cost (XEXP (op0, 0), SIGN_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (x, 1), code, 0, speed_p));
+ *cost += (rtx_cost (XEXP (op0, 0), VOIDmode, SIGN_EXTEND,
+ 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, code, 0, speed_p));
return true;
}
if (speed_p)
*cost += extra_cost->fp[mode != SFmode].mult;
- *cost += (rtx_cost (op0, MULT, 0, speed_p)
- + rtx_cost (XEXP (x, 1), MULT, 1, speed_p));
+ *cost += (rtx_cost (op0, mode, MULT, 0, speed_p)
+ + rtx_cost (XEXP (x, 1), mode, MULT, 1, speed_p));
return true;
}
else if (GET_MODE_CLASS (mode) == MODE_FLOAT)
/* SMUL[TB][TB]. */
if (speed_p)
*cost += extra_cost->mult[0].extend;
- *cost += (rtx_cost (XEXP (x, 0), SIGN_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (x, 1), SIGN_EXTEND, 0, speed_p));
+ *cost += rtx_cost (XEXP (x, 0), mode, SIGN_EXTEND, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 1), mode, SIGN_EXTEND, 1, speed_p);
return true;
}
if (speed_p)
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->mult[1].extend;
- *cost += (rtx_cost (XEXP (XEXP (x, 0), 0),
+ *cost += (rtx_cost (XEXP (XEXP (x, 0), 0), VOIDmode,
ZERO_EXTEND, 0, speed_p)
- + rtx_cost (XEXP (XEXP (x, 1), 0),
+ + rtx_cost (XEXP (XEXP (x, 1), 0), VOIDmode,
ZERO_EXTEND, 0, speed_p));
return true;
}
if (speed_p)
*cost += (extra_cost->alu.log_shift
+ extra_cost->alu.arith_shift);
- *cost += rtx_cost (XEXP (XEXP (x, 0), 0), ABS, 0, speed_p);
+ *cost += rtx_cost (XEXP (XEXP (x, 0), 0), mode, ABS, 0, speed_p);
return true;
}
&& REGNO (XEXP (XEXP (x, 0), 0)) == CC_REGNUM
&& XEXP (XEXP (x, 0), 1) == const0_rtx))
{
+ mode = GET_MODE (XEXP (XEXP (x, 0), 0));
*cost += (COSTS_N_INSNS (1)
- + rtx_cost (XEXP (XEXP (x, 0), 0), COMPARE, 0,
- speed_p)
- + rtx_cost (XEXP (XEXP (x, 0), 1), COMPARE, 1,
- speed_p));
+ + rtx_cost (XEXP (XEXP (x, 0), 0), mode, COMPARE,
+ 0, speed_p)
+ + rtx_cost (XEXP (XEXP (x, 0), 1), mode, COMPARE,
+ 1, speed_p));
if (speed_p)
*cost += extra_cost->alu.arith;
}
{
if (speed_p)
*cost += extra_cost->alu.log_shift_reg;
- *cost += rtx_cost (shift_reg, ASHIFT, 1, speed_p);
+ *cost += rtx_cost (shift_reg, mode, ASHIFT, 1, speed_p);
}
else if (speed_p)
*cost += extra_cost->alu.log_shift;
- *cost += rtx_cost (shift_op, ASHIFT, 0, speed_p);
+ *cost += rtx_cost (shift_op, mode, ASHIFT, 0, speed_p);
return true;
}
*cost = COSTS_N_INSNS (4);
return true;
}
- int op1cost = rtx_cost (XEXP (x, 1), SET, 1, speed_p);
- int op2cost = rtx_cost (XEXP (x, 2), SET, 1, speed_p);
+ int op1cost = rtx_cost (XEXP (x, 1), mode, SET, 1, speed_p);
+ int op2cost = rtx_cost (XEXP (x, 2), mode, SET, 1, speed_p);
- *cost = rtx_cost (XEXP (x, 0), IF_THEN_ELSE, 0, speed_p);
+ *cost = rtx_cost (XEXP (x, 0), mode, IF_THEN_ELSE, 0, speed_p);
/* Assume that if one arm of the if_then_else is a register,
that it will be tied with the result and eliminate the
conditional insn. */
if (XEXP (x, 1) == CONST0_RTX (op0mode))
{
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), op0mode, code, 0, speed_p);
return true;
}
|| (GET_CODE (XEXP (x, 0)) == SUBREG
&& REG_P (SUBREG_REG (XEXP (x, 0))))))
{
- *cost = rtx_cost (XEXP (x, 0), COMPARE, 0, speed_p);
+ *cost = rtx_cost (XEXP (x, 0), op0mode, COMPARE, 0, speed_p);
/* Multiply operations that set the flags are often
significantly more expensive. */
*cost = COSTS_N_INSNS (1);
if (shift_reg != NULL)
{
- *cost += rtx_cost (shift_reg, ASHIFT, 1, speed_p);
+ *cost += rtx_cost (shift_reg, op0mode, ASHIFT,
+ 1, speed_p);
if (speed_p)
*cost += extra_cost->alu.arith_shift_reg;
}
else if (speed_p)
*cost += extra_cost->alu.arith_shift;
- *cost += (rtx_cost (shift_op, ASHIFT, 0, speed_p)
- + rtx_cost (XEXP (x, 1), COMPARE, 1, speed_p));
+ *cost += rtx_cost (shift_op, op0mode, ASHIFT, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 1), op0mode, COMPARE, 1, speed_p);
return true;
}
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_op (INTVAL (XEXP (x, 1)), COMPARE))
{
- *cost += rtx_cost (XEXP (x, 0), COMPARE, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), op0mode, COMPARE, 0, speed_p);
return true;
}
return false;
*cost = COSTS_N_INSNS (3);
break;
}
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
return true;
}
else
if (CONST_INT_P (XEXP (x, 1))
&& const_ok_for_op (INTVAL (XEXP (x, 1)), COMPARE))
{
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
return true;
}
if ((arm_arch4 || GET_MODE (XEXP (x, 0)) == SImode)
&& MEM_P (XEXP (x, 0)))
{
- *cost = rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost = rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (mode == DImode)
*cost += COSTS_N_INSNS (1);
{
/* We have SXTB/SXTH. */
*cost = COSTS_N_INSNS (1);
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (speed_p)
*cost += extra_cost->alu.extend;
}
{
/* Needs two shifts. */
*cost = COSTS_N_INSNS (2);
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (speed_p)
*cost += 2 * extra_cost->alu.shift;
}
|| GET_MODE (XEXP (x, 0)) == QImode)
&& MEM_P (XEXP (x, 0)))
{
- *cost = rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost = rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (mode == DImode)
*cost += COSTS_N_INSNS (1); /* No speed penalty. */
{
/* We have UXTB/UXTH. */
*cost = COSTS_N_INSNS (1);
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (speed_p)
*cost += extra_cost->alu.extend;
}
shift may merge with a subsequent insn as a shifter
op. */
*cost = COSTS_N_INSNS (2);
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
if (speed_p)
*cost += 2 * extra_cost->alu.shift;
}
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->alu.log_shift;
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
return true;
}
/* Fall through. */
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->mult[1].extend;
- *cost += (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), ZERO_EXTEND, 0,
- speed_p)
- + rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 1), ZERO_EXTEND,
- 0, speed_p));
+ *cost += (rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), VOIDmode,
+ ZERO_EXTEND, 0, speed_p)
+ + rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 1), VOIDmode,
+ ZERO_EXTEND, 0, speed_p));
return true;
}
*cost = LIBCALL_COST (1);
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->alu.bfx;
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
return true;
}
/* Without UBFX/SBFX, need to resort to shift operations. */
*cost = COSTS_N_INSNS (2);
if (speed_p)
*cost += 2 * extra_cost->alu.shift;
- *cost += rtx_cost (XEXP (x, 0), ASHIFT, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, ASHIFT, 0, speed_p);
return true;
case FLOAT_EXTEND:
if (speed_p)
*cost += extra_cost->fp[0].widen;
}
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
return true;
}
*cost = COSTS_N_INSNS (1);
if (speed_p)
*cost += extra_cost->fp[mode == DFmode].narrow;
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), VOIDmode, code, 0, speed_p);
return true;
/* Vector modes? */
}
if (GET_CODE (op2) == NEG)
op2 = XEXP (op2, 0);
- *cost += rtx_cost (op0, FMA, 0, speed_p);
- *cost += rtx_cost (op1, FMA, 1, speed_p);
- *cost += rtx_cost (op2, FMA, 2, speed_p);
+ *cost += rtx_cost (op0, mode, FMA, 0, speed_p);
+ *cost += rtx_cost (op1, mode, FMA, 1, speed_p);
+ *cost += rtx_cost (op2, mode, FMA, 2, speed_p);
if (speed_p)
*cost += extra_cost->fp[mode ==DFmode].fma;
if (GET_MODE_CLASS (mode) == MODE_INT)
{
*cost = COSTS_N_INSNS (1);
+ mode = GET_MODE (XEXP (x, 0));
if (speed_p)
- *cost += extra_cost->fp[GET_MODE (XEXP (x, 0)) == DFmode].toint;
+ *cost += extra_cost->fp[mode == DFmode].toint;
/* Strip of the 'cost' of rounding towards zero. */
if (GET_CODE (XEXP (x, 0)) == FIX)
- *cost += rtx_cost (XEXP (XEXP (x, 0), 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (XEXP (x, 0), 0), mode, code,
+ 0, speed_p);
else
- *cost += rtx_cost (XEXP (x, 0), code, 0, speed_p);
+ *cost += rtx_cost (XEXP (x, 0), mode, code, 0, speed_p);
/* ??? Increase the cost to deal with transferring from
FP -> CORE registers? */
return true;
/* RTX costs when optimizing for size. */
static bool
-arm_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+arm_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
bool result;
+ int code = GET_CODE (x);
if (TARGET_OLD_RTX_COSTS
|| (!current_tune->insn_extra_cost && !TARGET_NEW_GENERIC_COSTS))
}
*total = COSTS_N_INSNS (cost);
- *total += rtx_cost (XEXP (x, 0), code, 0, speed);
+ *total += rtx_cost (XEXP (x, 0), mode, code, 0, speed);
return true;
}
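As the `arm_rtx_costs` hunk above shows, the target hook no longer receives the rtx code as a parameter; it receives the mode instead and recovers the code from the expression itself via `GET_CODE`. A minimal sketch of that calling convention, using invented toy types rather than GCC's:

```c
#include <assert.h>

enum toy_mode { TOY_VOIDmode, TOY_SImode, TOY_DImode };
enum toy_code { TOY_REG, TOY_MULT };

struct toy_rtx { enum toy_code code; enum toy_mode mode; };

/* New-style hook shape: the code is not a parameter any more, so the
   implementation fetches it from the rtx, as arm_rtx_costs now does
   with "int code = GET_CODE (x);".  Cost numbers are illustrative.  */
static int toy_rtx_costs (const struct toy_rtx *x, enum toy_mode mode)
{
  enum toy_code code = x->code;   /* recovered, not passed in */
  return (code == TOY_MULT ? 4 : 1) * (mode == TOY_DImode ? 2 : 1);
}
```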
/* Prototypes for hook implementors if needed before their implementation. */
-static bool avr_rtx_costs (rtx, int, int, int, int*, bool);
+static bool avr_rtx_costs (rtx, machine_mode, int, int, int*, bool);
/* Allocate registers from r25 to r8 for parameters for function calls. */
if (set)
fprintf (asm_out_file, "/* DEBUG: cost = %d. */\n",
- set_src_cost (SET_SRC (set), optimize_insn_for_speed_p ()));
+ set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)),
+ optimize_insn_for_speed_p ()));
else
fprintf (asm_out_file, "/* DEBUG: pattern-cost = %d. */\n",
- rtx_cost (PATTERN (insn), INSN, 0,
+ rtx_cost (PATTERN (insn), VOIDmode, INSN, 0,
optimize_insn_for_speed_p()));
}
}
}
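The avr debug hunk above illustrates the canonical new idiom for costing a SET: `set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed)`, i.e. the destination supplies the mode because the source may be a modeless constant. A toy sketch of why that matters (types and costs are assumptions, not GCC code):

```c
#include <assert.h>

enum toy_mode { TOY_VOIDmode, TOY_QImode, TOY_SImode };

struct toy_set {
  enum toy_mode dest_mode;   /* a register always knows its mode */
  int src_is_const;          /* a CONST_INT source has no mode */
  long src_value;
};

/* Cost the source in the mode of the destination, mirroring
   set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed).  */
static int toy_set_src_cost (const struct toy_set *s)
{
  enum toy_mode mode = s->dest_mode;
  if (s->src_is_const)
    /* A wide constant takes more insns to materialize.  */
    return mode == TOY_SImode ? 4 : 1;
  return 1;
}
```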
total = 0;
- avr_rtx_costs (x, code, outer, opno, &total, speed);
+ avr_rtx_costs (x, mode, outer, opno, &total, speed);
return total;
}
In either case, *TOTAL contains the cost result. */
static bool
-avr_rtx_costs_1 (rtx x, int codearg, int outer_code ATTRIBUTE_UNUSED,
+avr_rtx_costs_1 (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
- enum rtx_code code = (enum rtx_code) codearg;
- machine_mode mode = GET_MODE (x);
+ enum rtx_code code = GET_CODE (x);
HOST_WIDE_INT val;
switch (code)
case ZERO_EXTEND:
*total = COSTS_N_INSNS (GET_MODE_SIZE (mode)
- GET_MODE_SIZE (GET_MODE (XEXP (x, 0))));
- *total += avr_operand_rtx_cost (XEXP (x, 0), mode, code, 0, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 0), GET_MODE (XEXP (x, 0)),
+ code, 0, speed);
return true;
case SIGN_EXTEND:
*total = COSTS_N_INSNS (GET_MODE_SIZE (mode) + 2
- GET_MODE_SIZE (GET_MODE (XEXP (x, 0))));
- *total += avr_operand_rtx_cost (XEXP (x, 0), mode, code, 0, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 0), GET_MODE (XEXP (x, 0)),
+ code, 0, speed);
return true;
case PLUS:
case QImode:
*total = COSTS_N_INSNS (1);
if (GET_CODE (XEXP (x, 1)) != CONST_INT)
- *total += avr_operand_rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 1), QImode, code,
+ 1, speed);
break;
case HImode:
*total = COSTS_N_INSNS (2);
if (GET_CODE (XEXP (x, 1)) != CONST_INT)
- *total += avr_operand_rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 1), HImode, code,
+ 1, speed);
else if (INTVAL (XEXP (x, 1)) != 0)
*total += COSTS_N_INSNS (1);
break;
case SImode:
*total = COSTS_N_INSNS (4);
if (GET_CODE (XEXP (x, 1)) != CONST_INT)
- *total += avr_operand_rtx_cost (XEXP (x, 1), mode, code, 1, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 1), SImode, code,
+ 1, speed);
else if (INTVAL (XEXP (x, 1)) != 0)
*total += COSTS_N_INSNS (3);
break;
default:
return false;
}
- *total += avr_operand_rtx_cost (XEXP (x, 0), mode, code, 0, speed);
+ *total += avr_operand_rtx_cost (XEXP (x, 0), GET_MODE (XEXP (x, 0)),
+ code, 0, speed);
return true;
case TRUNCATE:
/* Implement `TARGET_RTX_COSTS'. */
static bool
-avr_rtx_costs (rtx x, int codearg, int outer_code,
+avr_rtx_costs (rtx x, machine_mode mode, int outer_code,
int opno, int *total, bool speed)
{
- bool done = avr_rtx_costs_1 (x, codearg, outer_code,
+ bool done = avr_rtx_costs_1 (x, mode, outer_code,
opno, total, speed);
if (avr_log.rtx_costs)
}
static bool
-bfin_rtx_costs (rtx x, int code_i, int outer_code_i, int opno, int *total,
- bool speed)
+bfin_rtx_costs (rtx x, machine_mode mode, int outer_code_i, int opno,
+ int *total, bool speed)
{
- enum rtx_code code = (enum rtx_code) code_i;
+ enum rtx_code code = GET_CODE (x);
enum rtx_code outer_code = (enum rtx_code) outer_code_i;
int cost2 = COSTS_N_INSNS (1);
rtx op0, op1;
case PLUS:
op0 = XEXP (x, 0);
op1 = XEXP (x, 1);
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
{
if (GET_CODE (op0) == MULT
&& GET_CODE (XEXP (op0, 1)) == CONST_INT)
if (val == 2 || val == 4)
{
*total = cost2;
- *total += rtx_cost (XEXP (op0, 0), outer_code, opno, speed);
- *total += rtx_cost (op1, outer_code, opno, speed);
+ *total += rtx_cost (XEXP (op0, 0), mode, outer_code,
+ opno, speed);
+ *total += rtx_cost (op1, mode, outer_code, opno, speed);
return true;
}
}
*total = cost2;
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += set_src_cost (op0, speed);
+ *total += set_src_cost (op0, mode, speed);
#if 0 /* We'd like to do this for accuracy, but it biases the loop optimizer
towards creating too many induction variables. */
if (!reg_or_7bit_operand (op1, SImode))
- *total += set_src_cost (op1, speed);
+ *total += set_src_cost (op1, mode, speed);
#endif
}
- else if (GET_MODE (x) == DImode)
+ else if (mode == DImode)
{
*total = 6 * cost2;
if (GET_CODE (op1) != CONST_INT
|| !satisfies_constraint_Ks7 (op1))
- *total += rtx_cost (op1, PLUS, 1, speed);
+ *total += rtx_cost (op1, mode, PLUS, 1, speed);
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += rtx_cost (op0, PLUS, 0, speed);
+ *total += rtx_cost (op0, mode, PLUS, 0, speed);
}
return true;
case MINUS:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
*total = 6 * cost2;
else
*total = cost2;
case ASHIFT:
case ASHIFTRT:
case LSHIFTRT:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
*total = 6 * cost2;
else
*total = cost2;
op1 = XEXP (x, 1);
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += rtx_cost (op0, code, 0, speed);
+ *total += rtx_cost (op0, mode, code, 0, speed);
return true;
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += rtx_cost (op0, code, 0, speed);
+ *total += rtx_cost (op0, mode, code, 0, speed);
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
{
*total = 2 * cost2;
return true;
}
*total = cost2;
- if (GET_MODE (x) != SImode)
+ if (mode != SImode)
return true;
if (code == AND)
{
if (! rhs_andsi3_operand (XEXP (x, 1), SImode))
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
}
else
{
if (! regorlog2_operand (XEXP (x, 1), SImode))
- *total += rtx_cost (XEXP (x, 1), code, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, code, 1, speed);
}
return true;
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += rtx_cost (op0, MULT, 0, speed);
+ *total += rtx_cost (op0, mode, MULT, 0, speed);
if (GET_CODE (op1) != REG
&& (GET_CODE (op1) != SUBREG || GET_CODE (SUBREG_REG (op1)) != REG))
- *total += rtx_cost (op1, MULT, 1, speed);
+ *total += rtx_cost (op1, mode, MULT, 1, speed);
}
return true;
scanned. In either case, *TOTAL contains the cost result. */
static bool
-c6x_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
+c6x_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno, int *total,
bool speed)
{
int cost2 = COSTS_N_INSNS (1);
rtx op0, op1;
+ int code = GET_CODE (x);
switch (code)
{
case TRUNCATE:
/* Recognize a mult_highpart operation. */
- if ((GET_MODE (x) == HImode || GET_MODE (x) == SImode)
+ if ((mode == HImode || mode == SImode)
&& GET_CODE (XEXP (x, 0)) == LSHIFTRT
- && GET_MODE (XEXP (x, 0)) == GET_MODE_2XWIDER_MODE (GET_MODE (x))
+ && GET_MODE (XEXP (x, 0)) == GET_MODE_2XWIDER_MODE (mode)
&& GET_CODE (XEXP (XEXP (x, 0), 0)) == MULT
&& GET_CODE (XEXP (XEXP (x, 0), 1)) == CONST_INT
- && INTVAL (XEXP (XEXP (x, 0), 1)) == GET_MODE_BITSIZE (GET_MODE (x)))
+ && INTVAL (XEXP (XEXP (x, 0), 1)) == GET_MODE_BITSIZE (mode))
{
rtx mul = XEXP (XEXP (x, 0), 0);
rtx op0 = XEXP (mul, 0);
if ((code0 == code1
&& (code0 == SIGN_EXTEND || code0 == ZERO_EXTEND))
- || (GET_MODE (x) == HImode
+ || (mode == HImode
&& code0 == ZERO_EXTEND && code1 == SIGN_EXTEND))
{
- if (GET_MODE (x) == HImode)
+ if (mode == HImode)
*total = COSTS_N_INSNS (2);
else
*total = COSTS_N_INSNS (12);
- *total += rtx_cost (XEXP (op0, 0), code0, 0, speed);
- *total += rtx_cost (XEXP (op1, 0), code1, 0, speed);
+ mode = GET_MODE (XEXP (op0, 0));
+ *total += rtx_cost (XEXP (op0, 0), mode, code0, 0, speed);
+ *total += rtx_cost (XEXP (op1, 0), mode, code1, 0, speed);
return true;
}
}
case ASHIFT:
case ASHIFTRT:
case LSHIFTRT:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
*total = COSTS_N_INSNS (CONSTANT_P (XEXP (x, 1)) ? 4 : 15);
else
*total = COSTS_N_INSNS (1);
*total = COSTS_N_INSNS (1);
op0 = code == PLUS ? XEXP (x, 0) : XEXP (x, 1);
op1 = code == PLUS ? XEXP (x, 1) : XEXP (x, 0);
- if (GET_MODE_SIZE (GET_MODE (x)) <= UNITS_PER_WORD
- && INTEGRAL_MODE_P (GET_MODE (x))
+ if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD
+ && INTEGRAL_MODE_P (mode)
&& GET_CODE (op0) == MULT
&& GET_CODE (XEXP (op0, 1)) == CONST_INT
&& (INTVAL (XEXP (op0, 1)) == 2
|| INTVAL (XEXP (op0, 1)) == 4
|| (code == PLUS && INTVAL (XEXP (op0, 1)) == 8)))
{
- *total += rtx_cost (XEXP (op0, 0), ASHIFT, 0, speed);
- *total += rtx_cost (op1, (enum rtx_code) code, 1, speed);
+ *total += rtx_cost (XEXP (op0, 0), mode, ASHIFT, 0, speed);
+ *total += rtx_cost (op1, mode, (enum rtx_code) code, 1, speed);
return true;
}
return false;
case MULT:
op0 = XEXP (x, 0);
op1 = XEXP (x, 1);
- if (GET_MODE (x) == DFmode)
+ if (mode == DFmode)
{
if (TARGET_FP)
*total = COSTS_N_INSNS (speed ? 10 : 1);
else
*total = COSTS_N_INSNS (speed ? 200 : 4);
}
- else if (GET_MODE (x) == SFmode)
+ else if (mode == SFmode)
{
if (TARGET_FP)
*total = COSTS_N_INSNS (speed ? 4 : 1);
else
*total = COSTS_N_INSNS (speed ? 100 : 4);
}
- else if (GET_MODE (x) == DImode)
+ else if (mode == DImode)
{
if (TARGET_MPY32
&& GET_CODE (op0) == GET_CODE (op1)
/* Maybe improve this later. */
*total = COSTS_N_INSNS (20);
}
- else if (GET_MODE (x) == SImode)
+ else if (mode == SImode)
{
if (((GET_CODE (op0) == ZERO_EXTEND
|| GET_CODE (op0) == SIGN_EXTEND
else
*total = COSTS_N_INSNS (6);
}
- else if (GET_MODE (x) == HImode)
+ else if (mode == HImode)
*total = COSTS_N_INSNS (speed ? 2 : 1);
if (GET_CODE (op0) != REG
&& (GET_CODE (op0) != SUBREG || GET_CODE (SUBREG_REG (op0)) != REG))
- *total += rtx_cost (op0, MULT, 0, speed);
+ *total += rtx_cost (op0, mode, MULT, 0, speed);
if (op1 && GET_CODE (op1) != REG
&& (GET_CODE (op1) != SUBREG || GET_CODE (SUBREG_REG (op1)) != REG))
- *total += rtx_cost (op1, MULT, 1, speed);
+ *total += rtx_cost (op1, mode, MULT, 1, speed);
return true;
case UDIV:
&& XEXP (op0, 1) == const0_rtx
&& rtx_equal_p (XEXP (x, 1), XEXP (op0, 0)))
{
- *total = rtx_cost (XEXP (x, 1), (enum rtx_code) outer_code,
+ *total = rtx_cost (XEXP (x, 1), VOIDmode, (enum rtx_code) outer_code,
opno, speed);
return false;
}
static int cris_register_move_cost (machine_mode, reg_class_t, reg_class_t);
static int cris_memory_move_cost (machine_mode, reg_class_t, bool);
-static bool cris_rtx_costs (rtx, int, int, int, int *, bool);
+static bool cris_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int cris_address_cost (rtx, machine_mode, addr_space_t, bool);
static bool cris_pass_by_reference (cumulative_args_t, machine_mode,
const_tree, bool);
scanned. In either case, *TOTAL contains the cost result. */
static bool
-cris_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
- bool speed)
+cris_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno,
+ int *total, bool speed)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
return true;
case CONST_DOUBLE:
- if (x != CONST0_RTX (GET_MODE (x) == VOIDmode ? DImode : GET_MODE (x)))
+ if (x != CONST0_RTX (mode == VOIDmode ? DImode : mode))
*total = 12;
else
/* Make 0.0 cheap, else test-insns will not be used. */
&& !satisfies_constraint_I (XEXP (x, 1)))
{
*total
- = (rtx_cost (XEXP (x, 0), (enum rtx_code) outer_code,
+ = (rtx_cost (XEXP (x, 0), mode, (enum rtx_code) outer_code,
opno, speed) + 2
- + 2 * GET_MODE_NUNITS (GET_MODE (XEXP (x, 0))));
+ + 2 * GET_MODE_NUNITS (mode));
return true;
}
return false;
/* fall through */
case ZERO_EXTEND: case SIGN_EXTEND:
- *total = rtx_cost (XEXP (x, 0), (enum rtx_code) outer_code, opno, speed);
+ *total = rtx_cost (XEXP (x, 0), VOIDmode, (enum rtx_code) outer_code,
+ opno, speed);
return true;
default:
scanned. In either case, *TOTAL contains the cost result. */
static bool
-epiphany_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+epiphany_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
/* Small integers in the right context are as cheap as registers. */
return true;
case COMPARE:
- switch (GET_MODE (x))
+ switch (mode)
{
/* There are a number of single-insn combiner patterns that use
the flag side effects of arithmetic. */
tree, int *, int);
static rtx frv_expand_builtin_saveregs (void);
static void frv_expand_builtin_va_start (tree, rtx);
-static bool frv_rtx_costs (rtx, int, int, int, int*,
- bool);
+static bool frv_rtx_costs (rtx, machine_mode, int, int,
+ int*, bool);
static int frv_register_move_cost (machine_mode,
reg_class_t, reg_class_t);
static int frv_memory_move_cost (machine_mode,
\f
static bool
frv_rtx_costs (rtx x,
- int code ATTRIBUTE_UNUSED,
- int outer_code ATTRIBUTE_UNUSED,
+ machine_mode mode,
+ int outer_code,
int opno ATTRIBUTE_UNUSED,
int *total,
bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
if (outer_code == MEM)
{
/* Don't differentiate between memory addresses. All the ones
case NOT:
case NEG:
case COMPARE:
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
*total = COSTS_N_INSNS (1);
- else if (GET_MODE (x) == DImode)
+ else if (mode == DImode)
*total = COSTS_N_INSNS (2);
else
*total = COSTS_N_INSNS (3);
return true;
case MULT:
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
*total = COSTS_N_INSNS (2);
else
*total = COSTS_N_INSNS (6); /* guess */
/* Worker function for TARGET_RTX_COSTS. */
static bool
-h8300_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+h8300_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
+ int code = GET_CODE (x);
+
if (TARGET_H8300SX && outer_code == MEM)
{
/* Estimate the number of execution states needed to calculate
scanned. In either case, *TOTAL contains the cost result. */
static bool
-ix86_rtx_costs (rtx x, int code_i, int outer_code_i, int opno, int *total,
- bool speed)
+ix86_rtx_costs (rtx x, machine_mode mode, int outer_code_i, int opno,
+ int *total, bool speed)
{
rtx mask;
- enum rtx_code code = (enum rtx_code) code_i;
+ enum rtx_code code = GET_CODE (x);
enum rtx_code outer_code = (enum rtx_code) outer_code_i;
- machine_mode mode = GET_MODE (x);
const struct processor_costs *cost = speed ? ix86_cost : &ix86_size_cost;
switch (code)
if (CONSTANT_P (XEXP (x, 1)))
{
*total = (cost->fabs
- + rtx_cost (XEXP (x, 0), code, 0, speed)
+ + rtx_cost (XEXP (x, 0), mode, code, 0, speed)
+ (speed ? 2 : COSTS_N_BYTES (16)));
return true;
}
/* ??? SSE scalar/vector cost should be used here. */
/* ??? Bald assumption that fma has the same cost as fmul. */
*total = cost->fmul;
- *total += rtx_cost (XEXP (x, 1), FMA, 1, speed);
+ *total += rtx_cost (XEXP (x, 1), mode, FMA, 1, speed);
/* Negate in op0 or op2 is free: FMS, FNMA, FNMS. */
sub = XEXP (x, 0);
if (GET_CODE (sub) == NEG)
sub = XEXP (sub, 0);
- *total += rtx_cost (sub, FMA, 0, speed);
+ *total += rtx_cost (sub, mode, FMA, 0, speed);
sub = XEXP (x, 2);
if (GET_CODE (sub) == NEG)
sub = XEXP (sub, 0);
- *total += rtx_cost (sub, FMA, 2, speed);
+ *total += rtx_cost (sub, mode, FMA, 2, speed);
return true;
}
*total = (cost->mult_init[MODE_INDEX (mode)]
+ nbits * cost->mult_bit
- + rtx_cost (op0, outer_code, opno, speed)
- + rtx_cost (op1, outer_code, opno, speed));
+ + rtx_cost (op0, mode, outer_code, opno, speed)
+ + rtx_cost (op1, mode, outer_code, opno, speed));
return true;
}
if (val == 2 || val == 4 || val == 8)
{
*total = cost->lea;
- *total += rtx_cost (XEXP (XEXP (x, 0), 1),
+ *total += rtx_cost (XEXP (XEXP (x, 0), 1), mode,
+ outer_code, opno, speed);
+ *total += rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0), mode,
outer_code, opno, speed);
- *total += rtx_cost (XEXP (XEXP (XEXP (x, 0), 0), 0),
+ *total += rtx_cost (XEXP (x, 1), mode,
outer_code, opno, speed);
- *total += rtx_cost (XEXP (x, 1), outer_code, opno, speed);
return true;
}
}
if (val == 2 || val == 4 || val == 8)
{
*total = cost->lea;
- *total += rtx_cost (XEXP (XEXP (x, 0), 0),
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode,
+ outer_code, opno, speed);
+ *total += rtx_cost (XEXP (x, 1), mode,
outer_code, opno, speed);
- *total += rtx_cost (XEXP (x, 1), outer_code, opno, speed);
return true;
}
}
else if (GET_CODE (XEXP (x, 0)) == PLUS)
{
*total = cost->lea;
- *total += rtx_cost (XEXP (XEXP (x, 0), 0),
+ *total += rtx_cost (XEXP (XEXP (x, 0), 0), mode,
+ outer_code, opno, speed);
+ *total += rtx_cost (XEXP (XEXP (x, 0), 1), mode,
outer_code, opno, speed);
- *total += rtx_cost (XEXP (XEXP (x, 0), 1),
+ *total += rtx_cost (XEXP (x, 1), mode,
outer_code, opno, speed);
- *total += rtx_cost (XEXP (x, 1), outer_code, opno, speed);
return true;
}
}
&& GET_MODE_SIZE (mode) > UNITS_PER_WORD)
{
*total = (cost->add * 2
- + (rtx_cost (XEXP (x, 0), outer_code, opno, speed)
+ + (rtx_cost (XEXP (x, 0), mode, outer_code, opno, speed)
<< (GET_MODE (XEXP (x, 0)) != DImode))
- + (rtx_cost (XEXP (x, 1), outer_code, opno, speed)
+ + (rtx_cost (XEXP (x, 1), mode, outer_code, opno, speed)
<< (GET_MODE (XEXP (x, 1)) != DImode)));
return true;
}
{
/* This kind of construct is implemented using test[bwl].
Treat it as if we had an AND. */
+ mode = GET_MODE (XEXP (XEXP (x, 0), 0));
*total = (cost->add
- + rtx_cost (XEXP (XEXP (x, 0), 0), outer_code, opno, speed)
- + rtx_cost (const1_rtx, outer_code, opno, speed));
+ + rtx_cost (XEXP (XEXP (x, 0), 0), mode, outer_code,
+ opno, speed)
+ + rtx_cost (const1_rtx, mode, outer_code, opno, speed));
return true;
}
/* This is a masked instruction; assume the same cost
as the nonmasked variant. */
if (TARGET_AVX512F && register_operand (mask, GET_MODE (mask)))
- *total = rtx_cost (XEXP (x, 0), outer_code, opno, speed);
+ *total = rtx_cost (XEXP (x, 0), mode, outer_code, opno, speed);
else
*total = cost->fabs;
return true;
reg_class_t);
static int ia64_memory_move_cost (machine_mode mode, reg_class_t,
bool);
-static bool ia64_rtx_costs (rtx, int, int, int, int *, bool);
+static bool ia64_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int ia64_unspec_may_trap_p (const_rtx, unsigned);
static void fix_range (const char *);
static struct machine_function * ia64_init_machine_status (void);
/* ??? This is incomplete. */
static bool
-ia64_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+ia64_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
which normally involves copies. Plus there's the latency
of the multiply itself, and the latency of the instructions to
transfer integer regs to FP regs. */
- if (FLOAT_MODE_P (GET_MODE (x)))
+ if (FLOAT_MODE_P (mode))
*total = COSTS_N_INSNS (4);
- else if (GET_MODE_SIZE (GET_MODE (x)) > 2)
+ else if (GET_MODE_SIZE (mode) > 2)
*total = COSTS_N_INSNS (10);
else
*total = COSTS_N_INSNS (2);
case PLUS:
case MINUS:
- if (FLOAT_MODE_P (GET_MODE (x)))
+ if (FLOAT_MODE_P (mode))
{
*total = COSTS_N_INSNS (4);
return true;
static void iq2000_setup_incoming_varargs (cumulative_args_t,
machine_mode, tree, int *,
int);
-static bool iq2000_rtx_costs (rtx, int, int, int, int *, bool);
+static bool iq2000_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int iq2000_address_cost (rtx, machine_mode, addr_space_t,
bool);
static section *iq2000_select_section (tree, int, unsigned HOST_WIDE_INT);
static bool
-iq2000_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+iq2000_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int * total,
bool speed ATTRIBUTE_UNUSED)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
switch (code)
{
static void lm32_setup_incoming_varargs (cumulative_args_t cum,
machine_mode mode, tree type,
int *pretend_size, int no_rtl);
-static bool lm32_rtx_costs (rtx x, int code, int outer_code, int opno,
+static bool lm32_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno,
int *total, bool speed);
static bool lm32_can_eliminate (const int, const int);
static bool
scanned. In either case, *TOTAL contains the cost result. */
static bool
-lm32_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+lm32_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
bool small_mode;
const int arithmetic_latency = 1;
#undef TARGET_RTX_COSTS
#define TARGET_RTX_COSTS m32c_rtx_costs
static bool
-m32c_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+m32c_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
switch (code)
{
case REG:
default:
/* Reasonable default. */
- if (TARGET_A16 && GET_MODE(x) == SImode)
+ if (TARGET_A16 && mode == SImode)
*total += COSTS_N_INSNS (2);
break;
}
static void m32r_setup_incoming_varargs (cumulative_args_t, machine_mode,
tree, int *, int);
static void init_idents (void);
-static bool m32r_rtx_costs (rtx, int, int, int, int *, bool speed);
+static bool m32r_rtx_costs (rtx, machine_mode, int, int, int *, bool speed);
static int m32r_memory_move_cost (machine_mode, reg_class_t, bool);
static bool m32r_pass_by_reference (cumulative_args_t, machine_mode,
const_tree, bool);
}
static bool
-m32r_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+m32r_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED,
+ int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
/* Small integers are as cheap as registers. 4 byte values can be
static bool m68k_ok_for_sibcall_p (tree, tree);
static bool m68k_tls_symbol_p (rtx);
static rtx m68k_legitimize_address (rtx, rtx, machine_mode);
-static bool m68k_rtx_costs (rtx, int, int, int, int *, bool);
+static bool m68k_rtx_costs (rtx, machine_mode, int, int, int *, bool);
#if M68K_HONOR_TARGET_STRICT_ALIGNMENT
static bool m68k_return_in_memory (const_tree, const_tree);
#endif
}
static bool
-m68k_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+m68k_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
case PLUS:
/* An lea costs about three times as much as a simple add. */
- if (GET_MODE (x) == SImode
+ if (mode == SImode
&& GET_CODE (XEXP (x, 1)) == REG
&& GET_CODE (XEXP (x, 0)) == MULT
&& GET_CODE (XEXP (XEXP (x, 0), 0)) == REG
case MULT:
if ((GET_CODE (XEXP (x, 0)) == ZERO_EXTEND
|| GET_CODE (XEXP (x, 0)) == SIGN_EXTEND)
- && GET_MODE (x) == SImode)
+ && mode == SImode)
*total = COSTS_N_INSNS (MULW_COST);
- else if (GET_MODE (x) == QImode || GET_MODE (x) == HImode)
+ else if (mode == QImode || mode == HImode)
*total = COSTS_N_INSNS (MULW_COST);
else
*total = COSTS_N_INSNS (MULL_COST);
case UDIV:
case MOD:
case UMOD:
- if (GET_MODE (x) == QImode || GET_MODE (x) == HImode)
+ if (mode == QImode || mode == HImode)
*total = COSTS_N_INSNS (DIVW_COST); /* div.w */
else if (TARGET_CF_HWDIV)
*total = COSTS_N_INSNS (18);
static int mcore_const_costs (rtx, RTX_CODE);
static int mcore_and_cost (rtx);
static int mcore_ior_cost (rtx);
-static bool mcore_rtx_costs (rtx, int, int, int,
+static bool mcore_rtx_costs (rtx, machine_mode, int, int,
int *, bool);
static void mcore_external_libcall (rtx);
static bool mcore_return_in_memory (const_tree, const_tree);
}
static bool
-mcore_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+mcore_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int * total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
static int mep_sched_reorder (FILE *, int, rtx_insn **, int *, int);
static rtx_insn *mep_make_bundle (rtx, rtx_insn *);
static void mep_bundle_insns (rtx_insn *);
-static bool mep_rtx_cost (rtx, int, int, int, int *, bool);
+static bool mep_rtx_cost (rtx, machine_mode, int, int, int *, bool);
static int mep_address_cost (rtx, machine_mode, addr_space_t, bool);
static void mep_setup_incoming_varargs (cumulative_args_t, machine_mode,
tree, int *, int);
}
static bool
-mep_rtx_cost (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+mep_rtx_cost (rtx x, machine_mode mode ATTRIBUTE_UNUSED,
+ int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool ATTRIBUTE_UNUSED speed_t)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
}
static bool
-microblaze_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+microblaze_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
switch (code)
{
return mips_classify_address (&addr, x, mode, strict_p);
}
-/* Return true if X is a legitimate $sp-based address for mode MDOE. */
+/* Return true if X is a legitimate $sp-based address for mode MODE. */
bool
mips_stack_address_p (rtx x, machine_mode mode)
else
cost = single_cost;
return (cost
- + set_src_cost (XEXP (x, 0), speed)
- + rtx_cost (XEXP (x, 1), GET_CODE (x), 1, speed));
+ + set_src_cost (XEXP (x, 0), GET_MODE (x), speed)
+ + rtx_cost (XEXP (x, 1), GET_MODE (x), GET_CODE (x), 1, speed));
}
/* Return the cost of floating-point multiplications of mode MODE. */
/* Implement TARGET_RTX_COSTS. */
static bool
-mips_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+mips_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
bool float_mode_p = FLOAT_MODE_P (mode);
int cost;
rtx addr;
for a word or doubleword operation, so we cannot rely on
the result of mips_build_integer. */
else if (!TARGET_MIPS16
- && (outer_code == SET || mode == VOIDmode))
+ && (outer_code == SET || GET_MODE (x) == VOIDmode))
cost = 1;
*total = COSTS_N_INSNS (cost);
return true;
&& UINTVAL (XEXP (x, 1)) == 0xffffffff)
{
*total = (mips_zero_extend_cost (mode, XEXP (x, 0))
- + set_src_cost (XEXP (x, 0), speed));
+ + set_src_cost (XEXP (x, 0), mode, speed));
return true;
}
if (ISA_HAS_CINS && CONST_INT_P (XEXP (x, 1)))
&& CONST_INT_P (XEXP (op, 1))
&& mask_low_and_shift_p (mode, XEXP (x, 1), XEXP (op, 1), 32))
{
- *total = COSTS_N_INSNS (1) + set_src_cost (XEXP (op, 0), speed);
+ *total = COSTS_N_INSNS (1);
+ *total += set_src_cost (XEXP (op, 0), mode, speed);
return true;
}
}
{
cost = GET_MODE_SIZE (mode) > UNITS_PER_WORD ? 2 : 1;
*total = (COSTS_N_INSNS (cost)
- + set_src_cost (XEXP (XEXP (x, 0), 0), speed)
- + set_src_cost (XEXP (XEXP (x, 1), 0), speed));
+ + set_src_cost (XEXP (XEXP (x, 0), 0), mode, speed)
+ + set_src_cost (XEXP (XEXP (x, 1), 0), mode, speed));
return true;
}
case LO_SUM:
/* Low-part immediates need an extended MIPS16 instruction. */
*total = (COSTS_N_INSNS (TARGET_MIPS16 ? 2 : 1)
- + set_src_cost (XEXP (x, 0), speed));
+ + set_src_cost (XEXP (x, 0), mode, speed));
return true;
case LT:
if (GET_CODE (op0) == MULT && GET_CODE (XEXP (op0, 0)) == NEG)
{
*total = (mips_fp_mult_cost (mode)
- + set_src_cost (XEXP (XEXP (op0, 0), 0), speed)
- + set_src_cost (XEXP (op0, 1), speed)
- + set_src_cost (op1, speed));
+ + set_src_cost (XEXP (XEXP (op0, 0), 0), mode, speed)
+ + set_src_cost (XEXP (op0, 1), mode, speed)
+ + set_src_cost (op1, mode, speed));
return true;
}
if (GET_CODE (op1) == MULT)
{
*total = (mips_fp_mult_cost (mode)
- + set_src_cost (op0, speed)
- + set_src_cost (XEXP (op1, 0), speed)
- + set_src_cost (XEXP (op1, 1), speed));
+ + set_src_cost (op0, mode, speed)
+ + set_src_cost (XEXP (op1, 0), mode, speed)
+ + set_src_cost (XEXP (op1, 1), mode, speed));
return true;
}
}
if (const_immlsa_operand (op2, mode))
{
*total = (COSTS_N_INSNS (1)
- + set_src_cost (XEXP (XEXP (x, 0), 0), speed)
- + set_src_cost (XEXP (x, 1), speed));
+ + set_src_cost (XEXP (XEXP (x, 0), 0), mode, speed)
+ + set_src_cost (XEXP (x, 1), mode, speed));
return true;
}
}
&& GET_CODE (XEXP (op, 0)) == MULT)
{
*total = (mips_fp_mult_cost (mode)
- + set_src_cost (XEXP (XEXP (op, 0), 0), speed)
- + set_src_cost (XEXP (XEXP (op, 0), 1), speed)
- + set_src_cost (XEXP (op, 1), speed));
+ + set_src_cost (XEXP (XEXP (op, 0), 0), mode, speed)
+ + set_src_cost (XEXP (XEXP (op, 0), 1), mode, speed)
+ + set_src_cost (XEXP (op, 1), mode, speed));
return true;
}
}
if (outer_code == SQRT || GET_CODE (XEXP (x, 1)) == SQRT)
/* An rsqrt<mode>a or rsqrt<mode>b pattern. Count the
division as being free. */
- *total = set_src_cost (XEXP (x, 1), speed);
+ *total = set_src_cost (XEXP (x, 1), mode, speed);
else
*total = (mips_fp_div_cost (mode)
- + set_src_cost (XEXP (x, 1), speed));
+ + set_src_cost (XEXP (x, 1), mode, speed));
return true;
}
/* Fall through. */
&& CONST_INT_P (XEXP (x, 1))
&& exact_log2 (INTVAL (XEXP (x, 1))) >= 0)
{
- *total = COSTS_N_INSNS (2) + set_src_cost (XEXP (x, 0), speed);
+ *total = COSTS_N_INSNS (2);
+ *total += set_src_cost (XEXP (x, 0), mode, speed);
return true;
}
*total = COSTS_N_INSNS (mips_idiv_insns ());
&& GET_MODE (XEXP (x, 0)) == QImode
&& GET_CODE (XEXP (XEXP (x, 0), 0)) == PLUS)
{
- *total = set_src_cost (XEXP (XEXP (x, 0), 0), speed);
+ *total = set_src_cost (XEXP (XEXP (x, 0), 0), VOIDmode, speed);
return true;
}
*total = mips_zero_extend_cost (mode, XEXP (x, 0));
if (ISA_HAS_R6DMUL
&& GET_CODE (op) == ZERO_EXTEND
&& GET_MODE (op) == DImode)
- *total += rtx_cost (op, MULT, i, speed);
+ *total += rtx_cost (op, DImode, MULT, i, speed);
else
- *total += rtx_cost (XEXP (op, 0), GET_CODE (op), 0, speed);
+ *total += rtx_cost (XEXP (op, 0), VOIDmode, GET_CODE (op),
+ 0, speed);
}
return true;
(cumulative_args_t, machine_mode, tree, int *, int);
static void mmix_file_start (void);
static void mmix_file_end (void);
-static bool mmix_rtx_costs (rtx, int, int, int, int *, bool);
+static bool mmix_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int mmix_register_move_cost (machine_mode,
reg_class_t, reg_class_t);
static rtx mmix_struct_value_rtx (tree, int);
static bool
mmix_rtx_costs (rtx x ATTRIBUTE_UNUSED,
- int code ATTRIBUTE_UNUSED,
+ machine_mode mode ATTRIBUTE_UNUSED,
int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED,
int *total ATTRIBUTE_UNUSED,
return speed ? 2 : 6;
default:
- return rtx_cost (x, MEM, 0, speed);
+ return rtx_cost (x, Pmode, MEM, 0, speed);
}
}
to represent cycles. Size-relative costs are in bytes. */
static bool
-mn10300_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *ptotal, bool speed)
+mn10300_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *ptotal, bool speed)
{
/* This value is used for SYMBOL_REF etc where we want to pretend
we have a full 32-bit constant. */
HOST_WIDE_INT i = 0x12345678;
int total;
+ int code = GET_CODE (x);
switch (code)
{
i = INTVAL (XEXP (x, 1));
if (i == 1 || i == 4)
{
- total = 1 + rtx_cost (XEXP (x, 0), PLUS, 0, speed);
+ total = 1 + rtx_cost (XEXP (x, 0), mode, PLUS, 0, speed);
goto alldone;
}
}
break;
case MEM:
- total = mn10300_address_cost (XEXP (x, 0), GET_MODE (x),
+ total = mn10300_address_cost (XEXP (x, 0), mode,
MEM_ADDR_SPACE (x), speed);
if (speed)
total = COSTS_N_INSNS (2 + total);
#undef TARGET_RTX_COSTS
#define TARGET_RTX_COSTS msp430_rtx_costs
-static bool msp430_rtx_costs (rtx x ATTRIBUTE_UNUSED,
- int code,
- int outer_code ATTRIBUTE_UNUSED,
- int opno ATTRIBUTE_UNUSED,
- int * total,
- bool speed ATTRIBUTE_UNUSED)
+static bool msp430_rtx_costs (rtx x ATTRIBUTE_UNUSED,
+ machine_mode mode,
+ int outer_code ATTRIBUTE_UNUSED,
+ int opno ATTRIBUTE_UNUSED,
+ int * total,
+ bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case SIGN_EXTEND:
- if (GET_MODE (x) == SImode && outer_code == SET)
+ if (mode == SImode && outer_code == SET)
{
*total = COSTS_N_INSNS (4);
return true;
bool
nds32_rtx_costs_impl (rtx x,
- int code,
+ machine_mode mode ATTRIBUTE_UNUSED,
int outer_code,
int opno ATTRIBUTE_UNUSED,
int *total,
bool speed)
{
+ int code = GET_CODE (x);
+
/* According to 'speed', goto suitable cost model section. */
if (speed)
goto performance_cost;
/* Auxiliary functions for cost calculation. */
-extern bool nds32_rtx_costs_impl (rtx, int, int, int, int *, bool);
+extern bool nds32_rtx_costs_impl (rtx, machine_mode, int, int, int *, bool);
extern int nds32_address_cost_impl (rtx, machine_mode, addr_space_t, bool);
/* ------------------------------------------------------------------------ */
Refer to gcc/rtlanal.c for more information. */
static bool
nds32_rtx_costs (rtx x,
- int code,
+ machine_mode mode,
int outer_code,
int opno,
int *total,
bool speed)
{
- return nds32_rtx_costs_impl (x, code, outer_code, opno, total, speed);
+ return nds32_rtx_costs_impl (x, mode, outer_code, opno, total, speed);
}
static int
cost has been computed, and false if subexpressions should be
scanned. In either case, *TOTAL contains the cost result. */
static bool
-nios2_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+nios2_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED,
+ int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
static int hppa_register_move_cost (machine_mode mode, reg_class_t,
reg_class_t);
static int hppa_address_cost (rtx, machine_mode mode, addr_space_t, bool);
-static bool hppa_rtx_costs (rtx, int, int, int, int *, bool);
+static bool hppa_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static inline rtx force_mode (machine_mode, rtx);
static void pa_reorg (void);
static void pa_combine_instructions (void);
scanned. In either case, *TOTAL contains the cost result. */
static bool
-hppa_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+hppa_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
int factor;
+ int code = GET_CODE (x);
switch (code)
{
return true;
case MULT:
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
{
*total = COSTS_N_INSNS (3);
return true;
}
/* A mode size N times larger than SImode needs O(N*N) more insns. */
- factor = GET_MODE_SIZE (GET_MODE (x)) / 4;
+ factor = GET_MODE_SIZE (mode) / 4;
if (factor == 0)
factor = 1;
return true;
case DIV:
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
{
*total = COSTS_N_INSNS (14);
return true;
case MOD:
case UMOD:
/* A mode size N times larger than SImode needs O(N*N) more insns. */
- factor = GET_MODE_SIZE (GET_MODE (x)) / 4;
+ factor = GET_MODE_SIZE (mode) / 4;
if (factor == 0)
factor = 1;
case PLUS: /* this includes shNadd insns */
case MINUS:
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
{
*total = COSTS_N_INSNS (3);
return true;
/* A size N times larger than UNITS_PER_WORD needs N times as
many insns, taking N times as long. */
- factor = GET_MODE_SIZE (GET_MODE (x)) / UNITS_PER_WORD;
+ factor = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
if (factor == 0)
factor = 1;
*total = factor * COSTS_N_INSNS (1);
static const char *singlemove_string (rtx *);
static bool pdp11_assemble_integer (rtx, unsigned int, int);
-static bool pdp11_rtx_costs (rtx, int, int, int, int *, bool);
+static bool pdp11_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static bool pdp11_return_in_memory (const_tree, const_tree);
static rtx pdp11_function_value (const_tree, const_tree, bool);
static rtx pdp11_libcall_value (machine_mode, const_rtx);
}
static bool
-pdp11_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+pdp11_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
return false;
case SIGN_EXTEND:
- if (GET_MODE (x) == HImode)
+ if (mode == HImode)
*total = COSTS_N_INSNS (1);
- else if (GET_MODE (x) == SImode)
+ else if (mode == SImode)
*total = COSTS_N_INSNS (6);
else
*total = COSTS_N_INSNS (2);
case ASHIFTRT:
if (optimize_size)
*total = COSTS_N_INSNS (1);
- else if (GET_MODE (x) == QImode)
+ else if (mode == QImode)
{
if (GET_CODE (XEXP (x, 1)) != CONST_INT)
*total = COSTS_N_INSNS (8); /* worst case */
else
*total = COSTS_N_INSNS (INTVAL (XEXP (x, 1)));
}
- else if (GET_MODE (x) == HImode)
+ else if (mode == HImode)
{
if (GET_CODE (XEXP (x, 1)) == CONST_INT)
{
else
*total = COSTS_N_INSNS (10); /* worst case */
}
- else if (GET_MODE (x) == SImode)
+ else if (mode == SImode)
{
if (GET_CODE (XEXP (x, 1)) == CONST_INT)
*total = COSTS_N_INSNS (2.5 + 0.5 * INTVAL (XEXP (x, 1)));
#define TARGET_RTX_COSTS rl78_rtx_costs
static bool
-rl78_rtx_costs (rtx x,
- int code,
- int outer_code ATTRIBUTE_UNUSED,
- int opno ATTRIBUTE_UNUSED,
- int * total,
- bool speed ATTRIBUTE_UNUSED)
+rl78_rtx_costs (rtx x,
+ machine_mode mode,
+ int outer_code ATTRIBUTE_UNUSED,
+ int opno ATTRIBUTE_UNUSED,
+ int * total,
+ bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
if (code == IF_THEN_ELSE)
{
*total = COSTS_N_INSNS (10);
return true;
}
- if (GET_MODE (x) == SImode)
+ if (mode == SImode)
{
switch (code)
{
static tree rs6000_builtin_vectorized_libmass (tree, tree, tree);
static void rs6000_emit_set_long_const (rtx, HOST_WIDE_INT);
static int rs6000_memory_move_cost (machine_mode, reg_class_t, bool);
-static bool rs6000_debug_rtx_costs (rtx, int, int, int, int *, bool);
+static bool rs6000_debug_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int rs6000_debug_address_cost (rtx, machine_mode, addr_space_t,
bool);
static int rs6000_debug_adjust_cost (rtx_insn *, rtx, rtx_insn *, int);
scanned. In either case, *TOTAL contains the cost result. */
static bool
-rs6000_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
- int *total, bool speed)
+rs6000_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
switch (code)
{
/* Debug form of r6000_rtx_costs that is selected if -mdebug=cost. */
static bool
-rs6000_debug_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
- bool speed)
+rs6000_debug_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno, int *total, bool speed)
{
- bool ret = rs6000_rtx_costs (x, code, outer_code, opno, total, speed);
+ bool ret = rs6000_rtx_costs (x, mode, outer_code, opno, total, speed);
fprintf (stderr,
- "\nrs6000_rtx_costs, return = %s, code = %s, outer_code = %s, "
+ "\nrs6000_rtx_costs, return = %s, mode = %s, outer_code = %s, "
"opno = %d, total = %d, speed = %s, x:\n",
ret ? "complete" : "scan inner",
- GET_RTX_NAME (code),
+ GET_MODE_NAME (mode),
GET_RTX_NAME (outer_code),
opno,
*total,
/* Compute a (partial) cost for rtx X. Return true if the complete
cost has been computed, and false if subexpressions should be
scanned. In either case, *TOTAL contains the cost result.
- CODE contains GET_CODE (x), OUTER_CODE contains the code
- of the superexpression of x. */
+ OUTER_CODE contains the code of the superexpression of x. */
static bool
-s390_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+s390_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
switch (code)
{
case CONST:
return false;
case MULT:
- switch (GET_MODE (x))
+ switch (mode)
{
case SImode:
{
return false;
case FMA:
- switch (GET_MODE (x))
+ switch (mode)
{
case DFmode:
*total = s390_cost->madbr;
/* Negate in the third argument is free: FMSUB. */
if (GET_CODE (XEXP (x, 2)) == NEG)
{
- *total += (rtx_cost (XEXP (x, 0), FMA, 0, speed)
- + rtx_cost (XEXP (x, 1), FMA, 1, speed)
- + rtx_cost (XEXP (XEXP (x, 2), 0), FMA, 2, speed));
+ *total += (rtx_cost (XEXP (x, 0), mode, FMA, 0, speed)
+ + rtx_cost (XEXP (x, 1), mode, FMA, 1, speed)
+ + rtx_cost (XEXP (XEXP (x, 2), 0), mode, FMA, 2, speed));
return true;
}
return false;
case UDIV:
case UMOD:
- if (GET_MODE (x) == TImode) /* 128 bit division */
+ if (mode == TImode) /* 128 bit division */
*total = s390_cost->dlgr;
- else if (GET_MODE (x) == DImode)
+ else if (mode == DImode)
{
rtx right = XEXP (x, 1);
if (GET_CODE (right) == ZERO_EXTEND) /* 64 by 32 bit division */
else /* 64 by 64 bit division */
*total = s390_cost->dlgr;
}
- else if (GET_MODE (x) == SImode) /* 32 bit division */
+ else if (mode == SImode) /* 32 bit division */
*total = s390_cost->dlr;
return false;
case DIV:
case MOD:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
{
rtx right = XEXP (x, 1);
if (GET_CODE (right) == ZERO_EXTEND) /* 64 by 32 bit division */
else /* 64 by 64 bit division */
*total = s390_cost->dsgr;
}
- else if (GET_MODE (x) == SImode) /* 32 bit division */
+ else if (mode == SImode) /* 32 bit division */
*total = s390_cost->dlr;
- else if (GET_MODE (x) == SFmode)
+ else if (mode == SFmode)
{
*total = s390_cost->debr;
}
- else if (GET_MODE (x) == DFmode)
+ else if (mode == DFmode)
{
*total = s390_cost->ddbr;
}
- else if (GET_MODE (x) == TFmode)
+ else if (mode == TFmode)
{
*total = s390_cost->dxbr;
}
return false;
case SQRT:
- if (GET_MODE (x) == SFmode)
+ if (mode == SFmode)
*total = s390_cost->sqebr;
- else if (GET_MODE (x) == DFmode)
+ else if (mode == DFmode)
*total = s390_cost->sqdbr;
else /* TFmode */
*total = s390_cost->sqxbr;
static int multcosts (rtx);
static bool unspec_caller_rtx_p (rtx);
static bool sh_cannot_copy_insn_p (rtx_insn *);
-static bool sh_rtx_costs (rtx, int, int, int, int *, bool);
+static bool sh_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int sh_address_cost (rtx, machine_mode, addr_space_t, bool);
static int sh_pr_n_sets (void);
static rtx sh_allocate_initial_value (rtx);
|| satisfies_constraint_J16 (XEXP (x, 1)))
return 1;
else
- return 1 + rtx_cost (XEXP (x, 1), AND, 1, !optimize_size);
+ return 1 + rtx_cost (XEXP (x, 1), GET_MODE (x), AND, 1, !optimize_size);
}
/* These constants are single cycle extu.[bw] instructions. */
cost has been computed, and false if subexpressions should be
scanned. In either case, *TOTAL contains the cost result. */
static bool
-sh_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+sh_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
/* The lower-subreg pass decides whether to split multi-word regs
static rtx sparc_tls_got (void);
static int sparc_register_move_cost (machine_mode,
reg_class_t, reg_class_t);
-static bool sparc_rtx_costs (rtx, int, int, int, int *, bool);
+static bool sparc_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static rtx sparc_function_value (const_tree, const_tree, bool);
static rtx sparc_libcall_value (machine_mode, const_rtx);
static bool sparc_function_value_regno_p (const unsigned int);
??? the latencies and then CSE will just use that. */
static bool
-sparc_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+sparc_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
bool float_mode_p = FLOAT_MODE_P (mode);
switch (code)
return true;
case CONST_DOUBLE:
- if (GET_MODE (x) == VOIDmode
+ if (mode == VOIDmode
&& ((CONST_DOUBLE_HIGH (x) == 0
&& CONST_DOUBLE_LOW (x) < 0x1000)
|| (CONST_DOUBLE_HIGH (x) == -1
sub = XEXP (x, 0);
if (GET_CODE (sub) == NEG)
sub = XEXP (sub, 0);
- *total += rtx_cost (sub, FMA, 0, speed);
+ *total += rtx_cost (sub, mode, FMA, 0, speed);
sub = XEXP (x, 2);
if (GET_CODE (sub) == NEG)
sub = XEXP (sub, 0);
- *total += rtx_cost (sub, FMA, 2, speed);
+ *total += rtx_cost (sub, mode, FMA, 2, speed);
return true;
}
case IOR:
/* Handle the NAND vector patterns. */
- if (sparc_vector_mode_supported_p (GET_MODE (x))
+ if (sparc_vector_mode_supported_p (mode)
&& GET_CODE (XEXP (x, 0)) == NOT
&& GET_CODE (XEXP (x, 1)) == NOT)
{
}
static bool
-spu_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+spu_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
int cost = COSTS_N_INSNS (2);
/* Folding to a CONST_VECTOR will use extra space but there might
static void xstormy16_init_builtins (void);
static rtx xstormy16_expand_builtin (tree, rtx, rtx, machine_mode, int);
-static bool xstormy16_rtx_costs (rtx, int, int, int, int *, bool);
static int xstormy16_address_cost (rtx, machine_mode, addr_space_t, bool);
static bool xstormy16_return_in_memory (const_tree, const_tree);
scanned. In either case, *TOTAL contains the cost result. */
static bool
-xstormy16_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+xstormy16_rtx_costs (rtx x, machine_mode mode ATTRIBUTE_UNUSED,
+ int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
/* Implement TARGET_RTX_COSTS. */
static bool
-tilegx_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
- bool speed)
+tilegx_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno,
+ int *total, bool speed)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
if (GET_CODE (XEXP (x, 0)) == MULT
&& cint_248_operand (XEXP (XEXP (x, 0), 1), VOIDmode))
{
- *total = (rtx_cost (XEXP (XEXP (x, 0), 0),
+ *total = (rtx_cost (XEXP (XEXP (x, 0), 0), mode,
(enum rtx_code) outer_code, opno, speed)
- + rtx_cost (XEXP (x, 1),
+ + rtx_cost (XEXP (x, 1), mode,
(enum rtx_code) outer_code, opno, speed)
+ COSTS_N_INSNS (1));
return true;
/* Implement TARGET_RTX_COSTS. */
static bool
-tilepro_rtx_costs (rtx x, int code, int outer_code, int opno, int *total,
- bool speed)
+tilepro_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno,
+ int *total, bool speed)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
if (GET_CODE (XEXP (x, 0)) == MULT
&& cint_248_operand (XEXP (XEXP (x, 0), 1), VOIDmode))
{
- *total = (rtx_cost (XEXP (XEXP (x, 0), 0),
+ *total = (rtx_cost (XEXP (XEXP (x, 0), 0), mode,
(enum rtx_code) outer_code, opno, speed)
- + rtx_cost (XEXP (x, 1),
+ + rtx_cost (XEXP (x, 1), mode,
(enum rtx_code) outer_code, opno, speed)
+ COSTS_N_INSNS (1));
return true;
}
static bool
-v850_rtx_costs (rtx x,
- int codearg,
- int outer_code ATTRIBUTE_UNUSED,
- int opno ATTRIBUTE_UNUSED,
- int * total, bool speed)
+v850_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
+ int opno ATTRIBUTE_UNUSED, int *total, bool speed)
{
- enum rtx_code code = (enum rtx_code) codearg;
+ enum rtx_code code = GET_CODE (x);
switch (code)
{
case MULT:
if (TARGET_V850E
- && ( GET_MODE (x) == SImode
- || GET_MODE (x) == HImode
- || GET_MODE (x) == QImode))
+ && (mode == SImode || mode == HImode || mode == QImode))
{
if (GET_CODE (XEXP (x, 1)) == REG)
*total = 4;
HOST_WIDE_INT, tree);
static int vax_address_cost_1 (rtx);
static int vax_address_cost (rtx, machine_mode, addr_space_t, bool);
-static bool vax_rtx_costs (rtx, int, int, int, int *, bool);
+static bool vax_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static rtx vax_function_arg (cumulative_args_t, machine_mode,
const_tree, bool);
static void vax_function_arg_advance (cumulative_args_t, machine_mode,
costs on a per cpu basis. */
static bool
-vax_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+vax_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
- machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
int i = 0; /* may be modified in switch */
const char *fmt = GET_RTX_FORMAT (code); /* may be modified in switch */
return true;
case CONST_DOUBLE:
- if (GET_MODE_CLASS (GET_MODE (x)) == MODE_FLOAT)
+ if (GET_MODE_CLASS (mode) == MODE_FLOAT)
*total = vax_float_literal (x) ? 5 : 8;
else
*total = ((CONST_DOUBLE_HIGH (x) == 0
{
case CONST_INT:
if ((unsigned HOST_WIDE_INT)INTVAL (op) > 63
- && GET_MODE (x) != QImode)
+ && mode != QImode)
*total += 1; /* 2 on VAX 2 */
break;
case CONST:
static int visium_memory_move_cost (enum machine_mode, reg_class_t, bool);
-static bool visium_rtx_costs (rtx, int, int, int, int *, bool);
+static bool visium_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static void visium_option_override (void);
/* Return the relative costs of expression X. */
static bool
-visium_rtx_costs (rtx x, int code, int outer_code ATTRIBUTE_UNUSED,
+visium_rtx_costs (rtx x, machine_mode mode, int outer_code ATTRIBUTE_UNUSED,
int opno ATTRIBUTE_UNUSED, int *total,
bool speed ATTRIBUTE_UNUSED)
{
- enum machine_mode mode = GET_MODE (x);
+ int code = GET_CODE (x);
switch (code)
{
int) ATTRIBUTE_UNUSED;
static section *xtensa_select_rtx_section (machine_mode, rtx,
unsigned HOST_WIDE_INT);
-static bool xtensa_rtx_costs (rtx, int, int, int, int *, bool);
+static bool xtensa_rtx_costs (rtx, machine_mode, int, int, int *, bool);
static int xtensa_register_move_cost (machine_mode, reg_class_t,
reg_class_t);
static int xtensa_memory_move_cost (machine_mode, reg_class_t, bool);
scanned. In either case, *TOTAL contains the cost result. */
static bool
-xtensa_rtx_costs (rtx x, int code, int outer_code, int opno ATTRIBUTE_UNUSED,
+xtensa_rtx_costs (rtx x, machine_mode mode, int outer_code,
+ int opno ATTRIBUTE_UNUSED,
int *total, bool speed ATTRIBUTE_UNUSED)
{
+ int code = GET_CODE (x);
+
switch (code)
{
case CONST_INT:
case MEM:
{
int num_words =
- (GET_MODE_SIZE (GET_MODE (x)) > UNITS_PER_WORD) ? 2 : 1;
- if (memory_address_p (GET_MODE (x), XEXP ((x), 0)))
+ (GET_MODE_SIZE (mode) > UNITS_PER_WORD) ? 2 : 1;
+ if (memory_address_p (mode, XEXP ((x), 0)))
*total = COSTS_N_INSNS (num_words);
else
*total = COSTS_N_INSNS (2*num_words);
return true;
case NOT:
- *total = COSTS_N_INSNS ((GET_MODE (x) == DImode) ? 3 : 2);
+ *total = COSTS_N_INSNS (mode == DImode ? 3 : 2);
return true;
case AND:
case IOR:
case XOR:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
*total = COSTS_N_INSNS (2);
else
*total = COSTS_N_INSNS (1);
case ASHIFT:
case ASHIFTRT:
case LSHIFTRT:
- if (GET_MODE (x) == DImode)
+ if (mode == DImode)
*total = COSTS_N_INSNS (50);
else
*total = COSTS_N_INSNS (1);
case ABS:
{
- machine_mode xmode = GET_MODE (x);
- if (xmode == SFmode)
+ if (mode == SFmode)
*total = COSTS_N_INSNS (TARGET_HARD_FLOAT ? 1 : 50);
- else if (xmode == DFmode)
+ else if (mode == DFmode)
*total = COSTS_N_INSNS (50);
else
*total = COSTS_N_INSNS (4);
case PLUS:
case MINUS:
{
- machine_mode xmode = GET_MODE (x);
- if (xmode == SFmode)
+ if (mode == SFmode)
*total = COSTS_N_INSNS (TARGET_HARD_FLOAT ? 1 : 50);
- else if (xmode == DFmode || xmode == DImode)
+ else if (mode == DFmode || mode == DImode)
*total = COSTS_N_INSNS (50);
else
*total = COSTS_N_INSNS (1);
}
case NEG:
- *total = COSTS_N_INSNS ((GET_MODE (x) == DImode) ? 4 : 2);
+ *total = COSTS_N_INSNS (mode == DImode ? 4 : 2);
return true;
case MULT:
{
- machine_mode xmode = GET_MODE (x);
- if (xmode == SFmode)
+ if (mode == SFmode)
*total = COSTS_N_INSNS (TARGET_HARD_FLOAT ? 4 : 50);
- else if (xmode == DFmode)
+ else if (mode == DFmode)
*total = COSTS_N_INSNS (50);
- else if (xmode == DImode)
+ else if (mode == DImode)
*total = COSTS_N_INSNS (TARGET_MUL32_HIGH ? 10 : 50);
else if (TARGET_MUL32)
*total = COSTS_N_INSNS (4);
case DIV:
case MOD:
{
- machine_mode xmode = GET_MODE (x);
- if (xmode == SFmode)
+ if (mode == SFmode)
{
*total = COSTS_N_INSNS (TARGET_HARD_FLOAT_DIV ? 8 : 50);
return true;
}
- else if (xmode == DFmode)
+ else if (mode == DFmode)
{
*total = COSTS_N_INSNS (50);
return true;
case UDIV:
case UMOD:
{
- machine_mode xmode = GET_MODE (x);
- if (xmode == DImode)
+ if (mode == DImode)
*total = COSTS_N_INSNS (50);
else if (TARGET_DIV32)
*total = COSTS_N_INSNS (32);
}
case SQRT:
- if (GET_MODE (x) == SFmode)
+ if (mode == SFmode)
*total = COSTS_N_INSNS (TARGET_HARD_FLOAT_SQRT ? 8 : 50);
else
*total = COSTS_N_INSNS (50);
bool speed = optimize_bb_for_speed_p (BLOCK_FOR_INSN (insn));
- int old_cost = set_rtx_cost (set, speed);
+ int old_cost = set ? set_rtx_cost (set, speed) : 0;
- if ((note != 0
- && REG_NOTE_KIND (note) == REG_EQUAL
- && (GET_CODE (XEXP (note, 0)) == CONST
- || CONSTANT_P (XEXP (note, 0))))
- || (set && CONSTANT_P (SET_SRC (set))))
+ if (!set
+ || CONSTANT_P (SET_SRC (set))
+ || (note != 0
+ && REG_NOTE_KIND (note) == REG_EQUAL
+ && (GET_CODE (XEXP (note, 0)) == CONST
+ || CONSTANT_P (XEXP (note, 0)))))
check_rtx_costs = false;
/* Usually we substitute easy stuff, so we won't copy everything.
if (check_rtx_costs
&& CONSTANT_P (to)
- && (set_rtx_cost (set, speed) > old_cost))
+ && set_rtx_cost (set, speed) > old_cost)
{
cancel_changes (0);
return false;
|| (HARD_REGISTER_NUM_P (N) \
&& FIXED_REGNO_P (N) && REGNO_REG_CLASS (N) != NO_REGS))
-#define COST(X) (REG_P (X) ? 0 : notreg_cost (X, SET, 1))
-#define COST_IN(X, OUTER, OPNO) (REG_P (X) ? 0 : notreg_cost (X, OUTER, OPNO))
+#define COST(X, MODE) \
+ (REG_P (X) ? 0 : notreg_cost (X, MODE, SET, 1))
+#define COST_IN(X, MODE, OUTER, OPNO) \
+ (REG_P (X) ? 0 : notreg_cost (X, MODE, OUTER, OPNO))
/* Get the number of times this register has been updated in this
basic block. */
static sbitmap cse_visited_basic_blocks;
static bool fixed_base_plus_p (rtx x);
-static int notreg_cost (rtx, enum rtx_code, int);
+static int notreg_cost (rtx, machine_mode, enum rtx_code, int);
static int preferable (int, int, int, int);
static void new_basic_block (void);
static void make_new_qty (unsigned int, machine_mode);
from COST macro to keep it simple. */
static int
-notreg_cost (rtx x, enum rtx_code outer, int opno)
+notreg_cost (rtx x, machine_mode mode, enum rtx_code outer, int opno)
{
return ((GET_CODE (x) == SUBREG
&& REG_P (SUBREG_REG (x))
- && GET_MODE_CLASS (GET_MODE (x)) == MODE_INT
+ && GET_MODE_CLASS (mode) == MODE_INT
&& GET_MODE_CLASS (GET_MODE (SUBREG_REG (x))) == MODE_INT
- && (GET_MODE_SIZE (GET_MODE (x))
- < GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
+ && GET_MODE_SIZE (mode) < GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
&& subreg_lowpart_p (x)
- && TRULY_NOOP_TRUNCATION_MODES_P (GET_MODE (x),
- GET_MODE (SUBREG_REG (x))))
+ && TRULY_NOOP_TRUNCATION_MODES_P (mode, GET_MODE (SUBREG_REG (x))))
? 0
- : rtx_cost (x, outer, opno, optimize_this_for_speed_p) * 2);
+ : rtx_cost (x, mode, outer, opno, optimize_this_for_speed_p) * 2);
}
\f
don't prefer pseudos over hard regs so that we derive constants in
argument registers from other argument registers rather than from the
original pseudo that was used to synthesize the constant. */
- insert_with_costs (exp, elt, hash, mode, COST (reg), 1);
+ insert_with_costs (exp, elt, hash, mode, COST (reg, mode), 1);
}
/* The constant CST is equivalent to the register REG. Create
insert (rtx x, struct table_elt *classp, unsigned int hash,
machine_mode mode)
{
- return
- insert_with_costs (x, classp, hash, mode, COST (x), approx_reg_cost (x));
+ return insert_with_costs (x, classp, hash, mode,
+ COST (x, mode), approx_reg_cost (x));
}
\f
argument. */
if (const_arg != 0
&& const_arg != folded_arg
- && COST_IN (const_arg, code, i) <= COST_IN (folded_arg, code, i)
+ && (COST_IN (const_arg, mode_arg, code, i)
+ <= COST_IN (folded_arg, mode_arg, code, i))
/* It's not safe to substitute the operand of a conversion
operator with a constant, as the conversion's identity
if (p != NULL)
{
cheapest_simplification = x;
- cheapest_cost = COST (x);
+ cheapest_cost = COST (x, mode);
for (p = p->first_same_value; p != NULL; p = p->next_same_value)
{
if (simp_result == NULL)
continue;
- cost = COST (simp_result);
+ cost = COST (simp_result, mode);
if (cost < cheapest_cost)
{
cheapest_cost = cost;
src_cost = src_regcost = -1;
else
{
- src_cost = COST (src);
+ src_cost = COST (src, mode);
src_regcost = approx_reg_cost (src);
}
}
src_eqv_cost = src_eqv_regcost = -1;
else
{
- src_eqv_cost = COST (src_eqv_here);
+ src_eqv_cost = COST (src_eqv_here, mode);
src_eqv_regcost = approx_reg_cost (src_eqv_here);
}
}
src_folded_cost = src_folded_regcost = -1;
else
{
- src_folded_cost = COST (src_folded);
+ src_folded_cost = COST (src_folded, mode);
src_folded_regcost = approx_reg_cost (src_folded);
}
}
src_related_cost = src_related_regcost = -1;
else
{
- src_related_cost = COST (src_related);
+ src_related_cost = COST (src_related, mode);
src_related_regcost = approx_reg_cost (src_related);
/* If a const-anchor is used to synthesize a constant that
/* If we had a constant that is cheaper than what we are now
setting SRC to, use that constant. We ignored it when we
thought we could make this into a no-op. */
- if (src_const && COST (src_const) < COST (src)
+ if (src_const && COST (src_const, mode) < COST (src, mode)
&& validate_change (insn, &SET_SRC (sets[i].rtl),
src_const, 0))
src = src_const;
@code{BRANCH_COST} is greater than or equal to the value 2.
@end defmac
-@deftypefn {Target Hook} bool TARGET_RTX_COSTS (rtx @var{x}, int @var{code}, int @var{outer_code}, int @var{opno}, int *@var{total}, bool @var{speed})
+@deftypefn {Target Hook} bool TARGET_RTX_COSTS (rtx @var{x}, machine_mode @var{mode}, int @var{outer_code}, int @var{opno}, int *@var{total}, bool @var{speed})
This target hook describes the relative costs of RTL expressions.
The cost may depend on the precise form of the expression, which is
either (a) @samp{XEXP (@var{y}, @var{opno}) == @var{x}} or
(b) @samp{XVEC (@var{y}, @var{opno})} contains @var{x}.
-@var{code} is @var{x}'s expression code---redundant, since it can be
-obtained with @code{GET_CODE (@var{x})}.
+@var{mode} is @var{x}'s machine mode, or for cases like @code{const_int} that
+do not have a mode, the mode in which @var{x} is used.
In implementing this hook, you can use the construct
@code{COSTS_N_INSNS (@var{n})} to specify a cost equal to @var{n} fast
XEXP (XEXP (shift_test, 0), 1) = GEN_INT (bitnum);
speed_p = optimize_insn_for_speed_p ();
- return (rtx_cost (and_test, IF_THEN_ELSE, 0, speed_p)
- <= rtx_cost (shift_test, IF_THEN_ELSE, 0, speed_p));
+ return (rtx_cost (and_test, mode, IF_THEN_ELSE, 0, speed_p)
+ <= rtx_cost (shift_test, mode, IF_THEN_ELSE, 0, speed_p));
}
/* Subroutine of do_jump, dealing with exploded comparisons of the type
byte = subreg_lowpart_offset (read_mode, new_mode);
ret = simplify_subreg (read_mode, ret, new_mode, byte);
if (ret && CONSTANT_P (ret)
- && set_src_cost (ret, speed) <= COSTS_N_INSNS (1))
+ && (set_src_cost (ret, read_mode, speed)
+ <= COSTS_N_INSNS (1)))
return ret;
}
}
which = (to_size < from_size ? all->trunc : all->zext);
PUT_MODE (all->reg, from_mode);
- set_convert_cost (to_mode, from_mode, speed, set_src_cost (which, speed));
+ set_convert_cost (to_mode, from_mode, speed,
+ set_src_cost (which, to_mode, speed));
}
static void
PUT_MODE (all->zext, mode);
PUT_MODE (all->trunc, mode);
- set_add_cost (speed, mode, set_src_cost (all->plus, speed));
- set_neg_cost (speed, mode, set_src_cost (all->neg, speed));
- set_mul_cost (speed, mode, set_src_cost (all->mult, speed));
- set_sdiv_cost (speed, mode, set_src_cost (all->sdiv, speed));
- set_udiv_cost (speed, mode, set_src_cost (all->udiv, speed));
+ set_add_cost (speed, mode, set_src_cost (all->plus, mode, speed));
+ set_neg_cost (speed, mode, set_src_cost (all->neg, mode, speed));
+ set_mul_cost (speed, mode, set_src_cost (all->mult, mode, speed));
+ set_sdiv_cost (speed, mode, set_src_cost (all->sdiv, mode, speed));
+ set_udiv_cost (speed, mode, set_src_cost (all->udiv, mode, speed));
- set_sdiv_pow2_cheap (speed, mode, (set_src_cost (all->sdiv_32, speed)
+ set_sdiv_pow2_cheap (speed, mode, (set_src_cost (all->sdiv_32, mode, speed)
<= 2 * add_cost (speed, mode)));
- set_smod_pow2_cheap (speed, mode, (set_src_cost (all->smod_32, speed)
+ set_smod_pow2_cheap (speed, mode, (set_src_cost (all->smod_32, mode, speed)
<= 4 * add_cost (speed, mode)));
set_shift_cost (speed, mode, 0, 0);
XEXP (all->shift, 1) = all->cint[m];
XEXP (all->shift_mult, 1) = all->pow2[m];
- set_shift_cost (speed, mode, m, set_src_cost (all->shift, speed));
- set_shiftadd_cost (speed, mode, m, set_src_cost (all->shift_add, speed));
- set_shiftsub0_cost (speed, mode, m, set_src_cost (all->shift_sub0, speed));
- set_shiftsub1_cost (speed, mode, m, set_src_cost (all->shift_sub1, speed));
+ set_shift_cost (speed, mode, m, set_src_cost (all->shift, mode, speed));
+ set_shiftadd_cost (speed, mode, m, set_src_cost (all->shift_add, mode,
+ speed));
+ set_shiftsub0_cost (speed, mode, m, set_src_cost (all->shift_sub0, mode,
+ speed));
+ set_shiftsub1_cost (speed, mode, m, set_src_cost (all->shift_sub1, mode,
+ speed));
}
if (SCALAR_INT_MODE_P (mode))
XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize);
set_mul_widen_cost (speed, wider_mode,
- set_src_cost (all->wide_mult, speed));
+ set_src_cost (all->wide_mult, wider_mode, speed));
set_mul_highpart_cost (speed, mode,
- set_src_cost (all->wide_trunc, speed));
+ set_src_cost (all->wide_trunc, mode, speed));
}
}
}
for (speed = 0; speed < 2; speed++)
{
crtl->maybe_hot_insn_p = speed;
- set_zero_cost (speed, set_src_cost (const0_rtx, speed));
+ set_zero_cost (speed, set_src_cost (const0_rtx, mode, speed));
for (mode = MIN_MODE_INT; mode <= MAX_MODE_INT;
mode = (machine_mode)(mode + 1))
Exclude cost of op0 from max_cost to match the cost
calculation of the synth_mult. */
coeff = -(unsigned HOST_WIDE_INT) coeff;
- max_cost = (set_src_cost (gen_rtx_MULT (mode, fake_reg, op1), speed)
+ max_cost = (set_src_cost (gen_rtx_MULT (mode, fake_reg, op1),
+ mode, speed)
- neg_cost (speed, mode));
if (max_cost <= 0)
goto skip_synth;
/* Exclude cost of op0 from max_cost to match the cost
calculation of the synth_mult. */
- max_cost = set_src_cost (gen_rtx_MULT (mode, fake_reg, op1), speed);
+ max_cost = set_src_cost (gen_rtx_MULT (mode, fake_reg, op1), mode, speed);
if (choose_mult_variant (mode, coeff, &algorithm, &variant, max_cost))
return expand_mult_const (mode, op0, coeff, target,
&algorithm, variant);
enum mult_variant variant;
rtx fake_reg = gen_raw_REG (mode, LAST_VIRTUAL_REGISTER + 1);
- max_cost = set_src_cost (gen_rtx_MULT (mode, fake_reg, fake_reg), speed);
+ max_cost = set_src_cost (gen_rtx_MULT (mode, fake_reg, fake_reg),
+ mode, speed);
if (choose_mult_variant (mode, coeff, &algorithm, &variant, max_cost))
return algorithm.cost.cost;
else
temp = gen_rtx_LSHIFTRT (mode, result, shift);
if (optab_handler (lshr_optab, mode) == CODE_FOR_nothing
- || (set_src_cost (temp, optimize_insn_for_speed_p ())
+ || (set_src_cost (temp, mode, optimize_insn_for_speed_p ())
> COSTS_N_INSNS (2)))
{
temp = expand_binop (mode, xor_optab, op0, signmask,
/* For the reverse comparison, use either an addition or a XOR. */
if (want_add
- && rtx_cost (GEN_INT (normalizep), PLUS, 1,
+ && rtx_cost (GEN_INT (normalizep), mode, PLUS, 1,
optimize_insn_for_speed_p ()) == 0)
{
tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
target, 0, OPTAB_WIDEN);
}
else if (!want_add
- && rtx_cost (trueval, XOR, 1,
+ && rtx_cost (trueval, mode, XOR, 1,
optimize_insn_for_speed_p ()) == 0)
{
tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
/* Again, for the reverse comparison, use either an addition or a XOR. */
if (want_add
- && rtx_cost (GEN_INT (normalizep), PLUS, 1,
+ && rtx_cost (GEN_INT (normalizep), mode, PLUS, 1,
optimize_insn_for_speed_p ()) == 0)
{
tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
target, 0, OPTAB_WIDEN);
}
else if (!want_add
- && rtx_cost (trueval, XOR, 1,
+ && rtx_cost (trueval, mode, XOR, 1,
optimize_insn_for_speed_p ()) == 0)
{
tem = emit_store_flag_1 (subtarget, rcode, op0, op1, mode, 0,
REAL_VALUE_FROM_CONST_DOUBLE (r, y);
if (targetm.legitimate_constant_p (dstmode, y))
- oldcost = set_src_cost (y, speed);
+ oldcost = set_src_cost (y, orig_srcmode, speed);
else
- oldcost = set_src_cost (force_const_mem (dstmode, y), speed);
+ oldcost = set_src_cost (force_const_mem (dstmode, y), dstmode, speed);
for (srcmode = GET_CLASS_NARROWEST_MODE (GET_MODE_CLASS (orig_srcmode));
srcmode != orig_srcmode;
continue;
/* This is valid, but may not be cheaper than the original. */
newcost = set_src_cost (gen_rtx_FLOAT_EXTEND (dstmode, trunc_y),
- speed);
+ dstmode, speed);
if (oldcost < newcost)
continue;
}
trunc_y = force_const_mem (srcmode, trunc_y);
/* This is valid, but may not be cheaper than the original. */
newcost = set_src_cost (gen_rtx_FLOAT_EXTEND (dstmode, trunc_y),
- speed);
+ dstmode, speed);
if (oldcost < newcost)
continue;
trunc_y = validize_mem (trunc_y);
eliminating the most insns without additional costs, and it
is the same that cse.c used to do. */
if (gain == 0)
- gain = set_src_cost (new_rtx, speed) - set_src_cost (old_rtx, speed);
+ gain = (set_src_cost (new_rtx, VOIDmode, speed)
+ - set_src_cost (old_rtx, VOIDmode, speed));
return (gain > 0);
}
multiple sets. If so, assume the cost of the new instruction is
not greater than the old one. */
if (set)
- old_cost = set_src_cost (SET_SRC (set), speed);
+ old_cost = set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed);
if (dump_file)
{
fprintf (dump_file, "\nIn insn %d, replacing\n ", INSN_UID (insn));
else if (DF_REF_TYPE (use) == DF_REF_REG_USE
&& set
- && set_src_cost (SET_SRC (set), speed) > old_cost)
+ && (set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed)
+ > old_cost))
{
if (dump_file)
fprintf (dump_file, "Changes to insn %d not profitable\n",
static void hash_scan_set (rtx, rtx_insn *, struct gcse_hash_table_d *);
static void hash_scan_clobber (rtx, rtx_insn *, struct gcse_hash_table_d *);
static void hash_scan_call (rtx, rtx_insn *, struct gcse_hash_table_d *);
-static int want_to_gcse_p (rtx, int *);
static int oprs_unchanged_p (const_rtx, const rtx_insn *, int);
static int oprs_anticipatable_p (const_rtx, const rtx_insn *);
static int oprs_available_p (const_rtx, const rtx_insn *);
GCSE. */
static int
-want_to_gcse_p (rtx x, int *max_distance_ptr)
+want_to_gcse_p (rtx x, machine_mode mode, int *max_distance_ptr)
{
#ifdef STACK_REGS
/* On register stack architectures, don't GCSE constants from the
gcc_assert (!optimize_function_for_speed_p (cfun)
&& optimize_function_for_size_p (cfun));
- cost = set_src_cost (x, 0);
+ cost = set_src_cost (x, mode, 0);
if (cost < COSTS_N_INSNS (GCSE_UNRESTRICTED_COST))
{
if (note != 0
&& REG_NOTE_KIND (note) == REG_EQUAL
&& !REG_P (src)
- && want_to_gcse_p (XEXP (note, 0), NULL))
+ && want_to_gcse_p (XEXP (note, 0), GET_MODE (dest), NULL))
src = XEXP (note, 0), set = gen_rtx_SET (dest, src);
/* Only record sets of pseudo-regs in the hash table. */
can't do the same thing at the rtl level. */
&& !can_throw_internal (insn)
/* Is SET_SRC something we want to gcse? */
- && want_to_gcse_p (src, &max_distance)
+ && want_to_gcse_p (src, GET_MODE (dest), &max_distance)
/* Don't CSE a nop. */
&& ! set_noop_p (set)
/* Don't GCSE if it has attached REG_EQUIV note.
the REG stored in that memory. This makes it possible to remove
redundant loads from due to stores to the same location. */
else if (flag_gcse_las && REG_P (src) && MEM_P (dest))
- {
- unsigned int regno = REGNO (src);
- int max_distance = 0;
-
- /* Only record sets of pseudo-regs in the hash table. */
- if (regno >= FIRST_PSEUDO_REGISTER
- /* Don't GCSE something if we can't do a reg/reg copy. */
- && can_copy_p (GET_MODE (src))
- /* GCSE commonly inserts instruction after the insn. We can't
- do that easily for EH edges so disable GCSE on these for now. */
- && !can_throw_internal (insn)
- /* Is SET_DEST something we want to gcse? */
- && want_to_gcse_p (dest, &max_distance)
- /* Don't CSE a nop. */
- && ! set_noop_p (set)
- /* Don't GCSE if it has attached REG_EQUIV note.
- At this point this only function parameters should have
- REG_EQUIV notes and if the argument slot is used somewhere
- explicitly, it means address of parameter has been taken,
- so we should not extend the lifetime of the pseudo. */
- && ((note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) == 0
- || ! MEM_P (XEXP (note, 0))))
- {
- /* Stores are never anticipatable. */
- int antic_p = 0;
- /* An expression is not available if its operands are
- subsequently modified, including this insn. It's also not
- available if this is a branch, because we can't insert
- a set after the branch. */
- int avail_p = oprs_available_p (dest, insn)
- && ! JUMP_P (insn);
-
- /* Record the memory expression (DEST) in the hash table. */
- insert_expr_in_table (dest, GET_MODE (dest), insn,
- antic_p, avail_p, max_distance, table);
- }
- }
+ {
+ unsigned int regno = REGNO (src);
+ int max_distance = 0;
+
+ /* Only record sets of pseudo-regs in the hash table. */
+ if (regno >= FIRST_PSEUDO_REGISTER
+ /* Don't GCSE something if we can't do a reg/reg copy. */
+ && can_copy_p (GET_MODE (src))
+ /* GCSE commonly inserts instructions after the insn. We can't
+ do that easily for EH edges so disable GCSE on these for now. */
+ && !can_throw_internal (insn)
+ /* Is SET_DEST something we want to gcse? */
+ && want_to_gcse_p (dest, GET_MODE (dest), &max_distance)
+ /* Don't CSE a nop. */
+ && ! set_noop_p (set)
+ /* Don't GCSE if it has attached REG_EQUIV note.
+ At this point only function parameters should have
+ REG_EQUIV notes and if the argument slot is used somewhere
+ explicitly, it means address of parameter has been taken,
+ so we should not extend the lifetime of the pseudo. */
+ && ((note = find_reg_note (insn, REG_EQUIV, NULL_RTX)) == 0
+ || ! MEM_P (XEXP (note, 0))))
+ {
+ /* Stores are never anticipatable. */
+ int antic_p = 0;
+ /* An expression is not available if its operands are
+ subsequently modified, including this insn. It's also not
+ available if this is a branch, because we can't insert
+ a set after the branch. */
+ int avail_p = oprs_available_p (dest, insn) && ! JUMP_P (insn);
+
+ /* Record the memory expression (DEST) in the hash table. */
+ insert_expr_in_table (dest, GET_MODE (dest), insn,
+ antic_p, avail_p, max_distance, table);
+ }
+ }
}
static void
}
bool
-hook_bool_rtx_int_int_int_intp_bool_false (rtx a ATTRIBUTE_UNUSED,
- int b ATTRIBUTE_UNUSED,
- int c ATTRIBUTE_UNUSED,
- int d ATTRIBUTE_UNUSED,
- int *e ATTRIBUTE_UNUSED,
- bool speed_p ATTRIBUTE_UNUSED)
+hook_bool_rtx_mode_int_int_intp_bool_false (rtx a ATTRIBUTE_UNUSED,
+ machine_mode b ATTRIBUTE_UNUSED,
+ int c ATTRIBUTE_UNUSED,
+ int d ATTRIBUTE_UNUSED,
+ int *e ATTRIBUTE_UNUSED,
+ bool speed_p ATTRIBUTE_UNUSED)
{
return false;
}
extern bool hook_bool_rtx_false (rtx);
extern bool hook_bool_rtx_insn_int_false (rtx_insn *, int);
extern bool hook_bool_uintp_uintp_false (unsigned int *, unsigned int *);
-extern bool hook_bool_rtx_int_int_int_intp_bool_false (rtx, int, int, int,
- int *, bool);
+extern bool hook_bool_rtx_mode_int_int_intp_bool_false (rtx, machine_mode,
+ int, int, int *, bool);
extern bool hook_bool_tree_tree_false (tree, tree);
extern bool hook_bool_tree_tree_true (tree, tree);
extern bool hook_bool_tree_bool_false (tree, bool);
&& (if_info->insn_b == NULL_RTX
|| BLOCK_FOR_INSN (if_info->insn_b) == if_info->test_bb));
if (!(t_unconditional
- || (set_src_cost (t, optimize_bb_for_speed_p (if_info->test_bb))
+ || (set_src_cost (t, mode, optimize_bb_for_speed_p (if_info->test_bb))
< COSTS_N_INSNS (2))))
return FALSE;
max_cost
= COSTS_N_INSNS (PARAM_VALUE (PARAM_MAX_ITERATIONS_COMPUTATION_COST));
- if (set_src_cost (desc->niter_expr, optimize_loop_for_speed_p (loop))
+ if (set_src_cost (desc->niter_expr, mode, optimize_loop_for_speed_p (loop))
> max_cost)
{
if (dump_file)
}
else
{
- inv->cost = set_src_cost (SET_SRC (set), speed);
+ inv->cost = set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)),
+ speed);
inv->cheap_address = false;
}
PUT_MODE (rtxes->shift, mode);
PUT_MODE (rtxes->source, mode);
XEXP (rtxes->shift, 1) = GEN_INT (op1);
- return set_src_cost (rtxes->shift, speed_p);
+ return set_src_cost (rtxes->shift, mode, speed_p);
}
/* For each X in the range [0, BITS_PER_WORD), set SPLITTING[X]
/* The only case here to check to see if moving the upper part with a
zero is cheaper than doing the zext itself. */
PUT_MODE (rtxes->source, word_mode);
- zext_cost = set_src_cost (rtxes->zext, speed_p);
+ zext_cost = set_src_cost (rtxes->zext, twice_word_mode, speed_p);
if (LOG_COSTS)
fprintf (stderr, "%s %s: original cost %d, split cost %d + %d\n",
if (mode != VOIDmode
&& optimize
&& CONSTANT_P (x)
- && (rtx_cost (x, optab_to_code (binoptab), opn, speed)
- > set_src_cost (x, speed)))
+ && (rtx_cost (x, mode, optab_to_code (binoptab), opn, speed)
+ > set_src_cost (x, mode, speed)))
{
if (CONST_INT_P (x))
{
/* If we are optimizing, force expensive constants into a register. */
if (CONSTANT_P (x) && optimize
- && (rtx_cost (x, COMPARE, 0, optimize_insn_for_speed_p ())
+ && (rtx_cost (x, mode, COMPARE, 0, optimize_insn_for_speed_p ())
> COSTS_N_INSNS (1)))
x = force_reg (mode, x);
if (CONSTANT_P (y) && optimize
- && (rtx_cost (y, COMPARE, 1, optimize_insn_for_speed_p ())
+ && (rtx_cost (y, mode, COMPARE, 1, optimize_insn_for_speed_p ())
> COSTS_N_INSNS (1)))
y = force_reg (mode, y);
{
rtx reg = gen_raw_REG (word_mode, 10000);
int cost = set_src_cost (gen_rtx_ASHIFT (word_mode, const1_rtx, reg),
- speed_p);
+ word_mode, speed_p);
cheap[speed_p] = cost < COSTS_N_INSNS (3);
init[speed_p] = true;
}
old_cost = register_move_cost (GET_MODE (src),
REGNO_REG_CLASS (REGNO (src)), dclass);
else
- old_cost = set_src_cost (src, speed);
+ old_cost = set_src_cost (src, GET_MODE (SET_DEST (set)), speed);
for (l = val->locs; l; l = l->next)
{
this_rtx = immed_wide_int_const (result, word_mode);
}
#endif
- this_cost = set_src_cost (this_rtx, speed);
+ this_cost = set_src_cost (this_rtx, GET_MODE (SET_DEST (set)), speed);
}
else if (REG_P (this_rtx))
{
if (extend_op != UNKNOWN)
{
this_rtx = gen_rtx_fmt_e (extend_op, word_mode, this_rtx);
- this_cost = set_src_cost (this_rtx, speed);
+ this_cost = set_src_cost (this_rtx, word_mode, speed);
}
else
#endif
&& TEST_BIT (preferred, j)
&& reg_fits_class_p (testreg, rclass, 0, mode)
&& (!CONST_INT_P (recog_data.operand[i])
- || (set_src_cost (recog_data.operand[i],
+ || (set_src_cost (recog_data.operand[i], mode,
optimize_bb_for_speed_p
(BLOCK_FOR_INSN (insn)))
- > set_src_cost (testreg,
+ > set_src_cost (testreg, mode,
optimize_bb_for_speed_p
(BLOCK_FOR_INSN (insn))))))
{
&& CONSTANT_P (XEXP (SET_SRC (new_set), 1)))
{
rtx new_src;
- int old_cost = set_src_cost (SET_SRC (new_set), speed);
+ machine_mode mode = GET_MODE (SET_DEST (new_set));
+ int old_cost = set_src_cost (SET_SRC (new_set), mode, speed);
gcc_assert (rtx_equal_p (XEXP (SET_SRC (new_set), 0), reg));
new_src = simplify_replace_rtx (SET_SRC (new_set), reg, src);
- if (set_src_cost (new_src, speed) <= old_cost
+ if (set_src_cost (new_src, mode, speed) <= old_cost
&& validate_change (use_insn, &SET_SRC (new_set),
new_src, 0))
return true;
get_full_set_rtx_cost (set, &oldcst);
SET_SRC (set) = tem;
- get_full_set_src_cost (tem, &newcst);
+ get_full_set_src_cost (tem, GET_MODE (reg), &newcst);
SET_SRC (set) = old_src;
costs_add_n_insns (&oldcst, 1);
{
rtx t = eliminate_regs_1 (SET_SRC (set), VOIDmode, insn,
false, true);
- int cost = set_src_cost (t, optimize_bb_for_speed_p (bb));
+ machine_mode mode = GET_MODE (SET_DEST (set));
+ int cost = set_src_cost (t, mode,
+ optimize_bb_for_speed_p (bb));
int freq = REG_FREQ_FROM_BB (bb);
reg_equiv_init_cost[regno] = cost * freq;
{
rtx t = reg_equiv_invariant (REGNO (x));
rtx new_rtx = eliminate_regs_1 (t, Pmode, insn, true, true);
- int cost = set_src_cost (new_rtx, optimize_bb_for_speed_p (elim_bb));
+ int cost = set_src_cost (new_rtx, Pmode,
+ optimize_bb_for_speed_p (elim_bb));
int freq = REG_FREQ_FROM_BB (elim_bb);
if (cost != 0)
}
extern void init_rtlanal (void);
-extern int rtx_cost (rtx, enum rtx_code, int, bool);
+extern int rtx_cost (rtx, machine_mode, enum rtx_code, int, bool);
extern int address_cost (rtx, machine_mode, addr_space_t, bool);
-extern void get_full_rtx_cost (rtx, enum rtx_code, int,
+extern void get_full_rtx_cost (rtx, machine_mode, enum rtx_code, int,
struct full_rtx_costs *);
extern unsigned int subreg_lsb (const_rtx);
extern unsigned int subreg_lsb_1 (machine_mode, machine_mode,
extern HOST_WIDE_INT get_index_scale (const struct address_info *);
extern enum rtx_code get_index_code (const struct address_info *);
-#ifndef GENERATOR_FILE
-/* Return the cost of SET X. SPEED_P is true if optimizing for speed
- rather than size. */
-
-static inline int
-set_rtx_cost (rtx x, bool speed_p)
-{
- return rtx_cost (x, INSN, 4, speed_p);
-}
-
-/* Like set_rtx_cost, but return both the speed and size costs in C. */
-
-static inline void
-get_full_set_rtx_cost (rtx x, struct full_rtx_costs *c)
-{
- get_full_rtx_cost (x, INSN, 4, c);
-}
-
-/* Return the cost of moving X into a register, relative to the cost
- of a register move. SPEED_P is true if optimizing for speed rather
- than size. */
-
-static inline int
-set_src_cost (rtx x, bool speed_p)
-{
- return rtx_cost (x, SET, 1, speed_p);
-}
-
-/* Like set_src_cost, but return both the speed and size costs in C. */
-
-static inline void
-get_full_set_src_cost (rtx x, struct full_rtx_costs *c)
-{
- get_full_rtx_cost (x, SET, 1, c);
-}
-#endif
-
/* 1 if RTX is a subreg containing a reg that is already known to be
sign- or zero-extended from the mode of the subreg to the mode of
the reg. SUBREG_PROMOTED_UNSIGNED_P gives the signedness of the
/* Generally useful functions. */
+#ifndef GENERATOR_FILE
+/* Return the cost of SET X. SPEED_P is true if optimizing for speed
+ rather than size. */
+
+static inline int
+set_rtx_cost (rtx x, bool speed_p)
+{
+ return rtx_cost (x, VOIDmode, INSN, 4, speed_p);
+}
+
+/* Like set_rtx_cost, but return both the speed and size costs in C. */
+
+static inline void
+get_full_set_rtx_cost (rtx x, struct full_rtx_costs *c)
+{
+ get_full_rtx_cost (x, VOIDmode, INSN, 4, c);
+}
+
+/* Return the cost of moving X into a register, relative to the cost
+ of a register move. SPEED_P is true if optimizing for speed rather
+ than size. */
+
+static inline int
+set_src_cost (rtx x, machine_mode mode, bool speed_p)
+{
+ return rtx_cost (x, mode, SET, 1, speed_p);
+}
+
+/* Like set_src_cost, but return both the speed and size costs in C. */
+
+static inline void
+get_full_set_src_cost (rtx x, machine_mode mode, struct full_rtx_costs *c)
+{
+ get_full_rtx_cost (x, mode, SET, 1, c);
+}
+#endif
+
/* In explow.c */
extern HOST_WIDE_INT trunc_int_for_mode (HOST_WIDE_INT, machine_mode);
extern rtx plus_constant (machine_mode, rtx, HOST_WIDE_INT, bool = false);
be returned. */
int
-rtx_cost (rtx x, enum rtx_code outer_code, int opno, bool speed)
+rtx_cost (rtx x, machine_mode mode, enum rtx_code outer_code,
+ int opno, bool speed)
{
int i, j;
enum rtx_code code;
if (x == 0)
return 0;
+ if (GET_MODE (x) != VOIDmode)
+ mode = GET_MODE (x);
+
/* A size N times larger than UNITS_PER_WORD likely needs N times as
many insns, taking N times as long. */
- factor = GET_MODE_SIZE (GET_MODE (x)) / UNITS_PER_WORD;
+ factor = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
if (factor == 0)
factor = 1;
case SET:
/* A SET doesn't have a mode, so let's look at the SET_DEST to get
the mode for the factor. */
- factor = GET_MODE_SIZE (GET_MODE (SET_DEST (x))) / UNITS_PER_WORD;
+ mode = GET_MODE (SET_DEST (x));
+ factor = GET_MODE_SIZE (mode) / UNITS_PER_WORD;
if (factor == 0)
factor = 1;
/* Pass through. */
total = 0;
/* If we can't tie these modes, make this expensive. The larger
the mode, the more expensive it is. */
- if (! MODES_TIEABLE_P (GET_MODE (x), GET_MODE (SUBREG_REG (x))))
+ if (! MODES_TIEABLE_P (mode, GET_MODE (SUBREG_REG (x))))
return COSTS_N_INSNS (2 + factor);
break;
default:
- if (targetm.rtx_costs (x, code, outer_code, opno, &total, speed))
+ if (targetm.rtx_costs (x, mode, outer_code, opno, &total, speed))
return total;
break;
}
fmt = GET_RTX_FORMAT (code);
for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
if (fmt[i] == 'e')
- total += rtx_cost (XEXP (x, i), code, i, speed);
+ total += rtx_cost (XEXP (x, i), mode, code, i, speed);
else if (fmt[i] == 'E')
for (j = 0; j < XVECLEN (x, i); j++)
- total += rtx_cost (XVECEXP (x, i, j), code, i, speed);
+ total += rtx_cost (XVECEXP (x, i, j), mode, code, i, speed);
return total;
}
costs for X, which is operand OPNO in an expression with code OUTER. */
void
-get_full_rtx_cost (rtx x, enum rtx_code outer, int opno,
+get_full_rtx_cost (rtx x, machine_mode mode, enum rtx_code outer, int opno,
struct full_rtx_costs *c)
{
- c->speed = rtx_cost (x, outer, opno, true);
- c->size = rtx_cost (x, outer, opno, false);
+ c->speed = rtx_cost (x, mode, outer, opno, true);
+ c->size = rtx_cost (x, mode, outer, opno, false);
}
\f
int
default_address_cost (rtx x, machine_mode, addr_space_t, bool speed)
{
- return rtx_cost (x, MEM, 0, speed);
+ return rtx_cost (x, Pmode, MEM, 0, speed);
}
\f
else
return 0;
- cost = set_src_cost (SET_SRC (set), speed);
+ cost = set_src_cost (SET_SRC (set), GET_MODE (SET_DEST (set)), speed);
return cost > 0 ? cost : COSTS_N_INSNS (1);
}
coeff = immed_wide_int_const (coeff0 + coeff1, mode);
tem = simplify_gen_binary (MULT, mode, lhs, coeff);
- return set_src_cost (tem, speed) <= set_src_cost (orig, speed)
- ? tem : 0;
+ return (set_src_cost (tem, mode, speed)
+ <= set_src_cost (orig, mode, speed) ? tem : 0);
}
}
coeff = immed_wide_int_const (coeff0 + negcoeff1, mode);
tem = simplify_gen_binary (MULT, mode, lhs, coeff);
- return set_src_cost (tem, speed) <= set_src_cost (orig, speed)
- ? tem : 0;
+ return (set_src_cost (tem, mode, speed)
+ <= set_src_cost (orig, mode, speed) ? tem : 0);
}
}
/* Compute a (partial) cost for rtx X. Return true if the complete
cost has been computed, and false if subexpressions should be
scanned. In either case, *TOTAL contains the cost result. */
-/* Note that CODE and OUTER_CODE ought to be RTX_CODE, but that's
+/* Note that OUTER_CODE ought to be RTX_CODE, but that's
not necessarily defined at this point. */
DEFHOOK
(rtx_costs,
either (a) @samp{XEXP (@var{y}, @var{opno}) == @var{x}} or\n\
(b) @samp{XVEC (@var{y}, @var{opno})} contains @var{x}.\n\
\n\
-@var{code} is @var{x}'s expression code---redundant, since it can be\n\
-obtained with @code{GET_CODE (@var{x})}.\n\
+@var{mode} is @var{x}'s machine mode, or for cases like @code{const_int} that\n\
+do not have a mode, the mode in which @var{x} is used.\n\
\n\
In implementing this hook, you can use the construct\n\
@code{COSTS_N_INSNS (@var{n})} to specify a cost equal to @var{n} fast\n\
\n\
The hook returns true when all subexpressions of @var{x} have been\n\
processed, and false when @code{rtx_cost} should recurse.",
- bool, (rtx x, int code, int outer_code, int opno, int *total, bool speed),
- hook_bool_rtx_int_int_int_intp_bool_false)
+ bool, (rtx x, machine_mode mode, int outer_code, int opno, int *total, bool speed),
+ hook_bool_rtx_mode_int_int_intp_bool_false)
/* Compute the cost of X, used as an address. Never called with
invalid addresses. */
cost += address_cost (XEXP (rslt, 0), TYPE_MODE (type),
TYPE_ADDR_SPACE (type), speed);
else if (!REG_P (rslt))
- cost += set_src_cost (rslt, speed);
+ cost += set_src_cost (rslt, TYPE_MODE (type), speed);
return cost;
}
GEN_INT (-m)), speed_p);
rtx r = immed_wide_int_const (mask, word_mode);
cost_diff += set_src_cost (gen_rtx_AND (word_mode, reg, r),
- speed_p);
+ word_mode, speed_p);
r = immed_wide_int_const (wi::lshift (mask, m), word_mode);
cost_diff -= set_src_cost (gen_rtx_AND (word_mode, reg, r),
- speed_p);
+ word_mode, speed_p);
if (cost_diff > 0)
{
mask = wi::lshift (mask, m);
for (i = 0; i < count; i++)
{
rtx r = immed_wide_int_const (test[i].mask, word_mode);
- cost_diff += set_src_cost (gen_rtx_AND (word_mode, reg, r), speed_p);
+ cost_diff += set_src_cost (gen_rtx_AND (word_mode, reg, r),
+ word_mode, speed_p);
r = immed_wide_int_const (wi::lshift (test[i].mask, m), word_mode);
- cost_diff -= set_src_cost (gen_rtx_AND (word_mode, reg, r), speed_p);
+ cost_diff -= set_src_cost (gen_rtx_AND (word_mode, reg, r),
+ word_mode, speed_p);
}
if (cost_diff > 0)
{