+2019-06-27 Aaron Sawdey <acsawdey@linux.ibm.com>
+
+ * builtins.c (get_memory_rtx): Fix comment.
+ * optabs.def (movmem_optab): Change to cpymem_optab.
+ * expr.c (emit_block_move_via_cpymem): Change movmem to cpymem.
+ (emit_block_move_hints): Change movmem to cpymem.
+ * defaults.h: Change movmem to cpymem.
+ * targhooks.c (get_move_ratio): Change movmem to cpymem.
+ (default_use_by_pieces_infrastructure_p): Ditto.
+ * config/aarch64/aarch64-protos.h: Change movmem to cpymem.
+ * config/aarch64/aarch64.c (aarch64_expand_movmem): Change movmem
+ to cpymem.
+ * config/aarch64/aarch64.h: Change movmem to cpymem.
+ * config/aarch64/aarch64.md (movmemdi): Change name to cpymemdi.
+ * config/alpha/alpha.h: Change movmem to cpymem in comment.
+ * config/alpha/alpha.md (movmemqi, movmemdi, *movmemdi_1): Change
+ movmem to cpymem.
+ * config/arc/arc-protos.h: Change movmem to cpymem.
+ * config/arc/arc.c (arc_expand_movmem): Change movmem to cpymem.
+ * config/arc/arc.h: Change movmem to cpymem in comment.
+ * config/arc/arc.md (movmemsi): Change movmem to cpymem.
+ * config/arm/arm-protos.h: Change movmem to cpymem in names.
+ * config/arm/arm.c (arm_movmemqi_unaligned, arm_gen_movmemqi,
+ gen_movmem_ldrd_strd, thumb_expand_movmemqi): Change movmem to cpymem.
+ * config/arm/arm.md (movmemqi): Change movmem to cpymem.
+ * config/arm/thumb1.md (movmem12b, movmem8b): Change movmem to cpymem.
+ * config/avr/avr-protos.h: Change movmem to cpymem.
+ * config/avr/avr.c (avr_adjust_insn_length, avr_emit_movmemhi,
+ avr_out_movmem): Change movmem to cpymem.
+ * config/avr/avr.md (movmemhi, movmem_<mode>, movmemx_<mode>):
+ Change movmem to cpymem.
+ * config/bfin/bfin-protos.h: Change movmem to cpymem.
+ * config/bfin/bfin.c (single_move_for_movmem, bfin_expand_movmem):
+ Change movmem to cpymem.
+ * config/bfin/bfin.h: Change movmem to cpymem in comment.
+ * config/bfin/bfin.md (movmemsi): Change name to cpymemsi.
+ * config/c6x/c6x-protos.h: Change movmem to cpymem.
+ * config/c6x/c6x.c (c6x_expand_movmem): Change movmem to cpymem.
+ * config/c6x/c6x.md (movmemsi): Change name to cpymemsi.
+ * config/frv/frv.md (movmemsi): Change name to cpymemsi.
+ * config/ft32/ft32.md (movmemsi): Change name to cpymemsi.
+ * config/h8300/h8300.md (movmemsi): Change name to cpymemsi.
+ * config/i386/i386-expand.c (expand_set_or_movmem_via_loop,
+ expand_set_or_movmem_via_rep, expand_movmem_epilogue,
+ expand_setmem_epilogue_via_loop, expand_set_or_cpymem_prologue,
+ expand_small_cpymem_or_setmem,
+ expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves,
+ expand_set_or_cpymem_constant_prologue,
+ ix86_expand_set_or_cpymem): Change movmem to cpymem.
+ * config/i386/i386-protos.h: Change movmem to cpymem.
+ * config/i386/i386.h: Change movmem to cpymem in comment.
+ * config/i386/i386.md (movmem<mode>): Change name to cpymem<mode>.
+ (setmem<mode>): Change expansion function name.
+ * config/lm32/lm32.md (movmemsi): Change name to cpymemsi.
+ * config/m32c/blkmov.md (movmemhi, movmemhi_bhi_op, movmemhi_bpsi_op,
+ movmemhi_whi_op, movmemhi_wpsi_op): Change movmem to cpymem.
+ * config/m32c/m32c-protos.h: Change movmem to cpymem.
+ * config/m32c/m32c.c (m32c_expand_movmemhi): Change movmem to cpymem.
+ * config/m32r/m32r.c (m32r_expand_block_move): Change movmem to cpymem.
+ * config/m32r/m32r.md (movmemsi, movmemsi_internal): Change movmem
+ to cpymem.
+ * config/mcore/mcore.md (movmemsi): Change name to cpymemsi.
+ * config/microblaze/microblaze.c: Change movmem to cpymem in comment.
+ * config/microblaze/microblaze.md (movmemsi): Change name to cpymemsi.
+ * config/mips/mips.c (mips_use_by_pieces_infrastructure_p):
+ Change movmem to cpymem.
+ * config/mips/mips.h: Change movmem to cpymem.
+ * config/mips/mips.md (movmemsi): Change name to cpymemsi.
+ * config/nds32/nds32-memory-manipulation.c
+ (nds32_expand_movmemsi_loop_unknown_size,
+ nds32_expand_movmemsi_loop_known_size, nds32_expand_movmemsi_loop,
+ nds32_expand_movmemsi_unroll,
+ nds32_expand_movmemsi): Change movmem to cpymem.
+ * config/nds32/nds32-multiple.md (movmemsi): Change name to cpymemsi.
+ * config/nds32/nds32-protos.h: Change movmem to cpymem.
+ * config/pa/pa.c (compute_movmem_length): Change movmem to cpymem.
+ (pa_adjust_insn_length): Change call to compute_movmem_length.
+ * config/pa/pa.md (movmemsi, movmemsi_prereload, movmemsi_postreload,
+ movmemdi, movmemdi_prereload,
+ movmemdi_postreload): Change movmem to cpymem.
+ * config/pdp11/pdp11.md (movmemhi, movmemhi1,
+ movmemhi_nocc, UNSPEC_MOVMEM): Change movmem to cpymem.
+ * config/riscv/riscv.c: Change movmem to cpymem in comment.
+ * config/riscv/riscv.h: Change movmem to cpymem.
+ * config/riscv/riscv.md (movmemsi): Change name to cpymemsi.
+ * config/rs6000/rs6000.md (movmemsi): Change name to cpymemsi.
+ * config/rx/rx.md (UNSPEC_MOVMEM, movmemsi, rx_movmem): Change
+ movmem to cpymem.
+ * config/s390/s390-protos.h: Change movmem to cpymem.
+ * config/s390/s390.c (s390_expand_movmem, s390_expand_setmem,
+ s390_expand_insv): Change movmem to cpymem.
+ * config/s390/s390.md (movmem<mode>, movmem_short, *movmem_short,
+ movmem_long, *movmem_long, *movmem_long_31z): Change movmem to cpymem.
+ * config/sh/sh.md (movmemsi): Change name to cpymemsi.
+ * config/sparc/sparc.h: Change movmem to cpymem in comment.
+ * config/vax/vax-protos.h (vax_output_movmemsi): Remove prototype
+ for nonexistent function.
+ * config/vax/vax.h: Change movmem to cpymem in comment.
+ * config/vax/vax.md (movmemhi, movmemhi1): Change movmem to cpymem.
+ * config/visium/visium.h: Change movmem to cpymem in comment.
+ * config/visium/visium.md (movmemsi): Change name to cpymemsi.
+ * config/xtensa/xtensa.md (movmemsi): Change name to cpymemsi.
+ * doc/md.texi: Change movmem to cpymem and update description to match.
+ * doc/rtl.texi: Change movmem to cpymem.
+ * target.def (use_by_pieces_infrastructure_p): Change movmem to cpymem.
+ * doc/tm.texi: Regenerate.
+
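As an illustrative aside, not part of the patch: the renamed cpymem patterns and helpers are what back the inline expansion of memcpy-style block copies. The standalone C sketch below shows the kind of call they open-code; the struct and sizes are invented for the example.

#include <stdio.h>
#include <string.h>

struct packet { unsigned char payload[32]; };

/* A fixed-size copy like this is a typical candidate for inline
   expansion: small constant sizes are usually handled by pieces,
   while larger ones may go through the target's cpymem pattern or
   end up as a memcpy libcall, depending on MOVE_RATIO and
   alignment.  */
static void
copy_packet (struct packet *dst, const struct packet *src)
{
  __builtin_memcpy (dst, src, sizeof *dst);
}

int
main (void)
{
  struct packet a, b;
  memset (a.payload, 0x5a, sizeof a.payload);
  copy_packet (&b, &a);
  printf ("%02x %02x\n", b.payload[0], b.payload[31]);
  return 0;
}

Whether such a call is expanded inline or becomes a libcall remains a per-target cost decision (MOVE_RATIO, alignment, size); the entry above only renames the hooks involved.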
2019-06-27 Bill Schmidt <wschmidt@linux.ibm.com>
* config/rs6000/rs6000.c (rs6000_option_override_internal): Enable
}
/* Get a MEM rtx for expression EXP which is the address of an operand
- to be used in a string instruction (cmpstrsi, movmemsi, ..). LEN is
+ to be used in a string instruction (cmpstrsi, cpymemsi, ..). LEN is
the maximum length of the block of memory that might be accessed or
NULL if unknown. */
bool aarch64_emit_approx_div (rtx, rtx, rtx);
bool aarch64_emit_approx_sqrt (rtx, rtx, bool);
void aarch64_expand_call (rtx, rtx, bool);
-bool aarch64_expand_movmem (rtx *);
+bool aarch64_expand_cpymem (rtx *);
bool aarch64_float_const_zero_rtx_p (rtx);
bool aarch64_float_const_rtx_p (rtx);
bool aarch64_function_arg_regno_p (unsigned);
bool aarch64_fusion_enabled_p (enum aarch64_fusion_pairs);
-bool aarch64_gen_movmemqi (rtx *);
+bool aarch64_gen_cpymemqi (rtx *);
bool aarch64_gimple_fold_builtin (gimple_stmt_iterator *);
bool aarch64_is_extend_from_extract (scalar_int_mode, rtx, rtx);
bool aarch64_is_long_call_p (rtx);
*dst = aarch64_progress_pointer (*dst);
}
-/* Expand movmem, as if from a __builtin_memcpy. Return true if
+/* Expand cpymem, as if from a __builtin_memcpy. Return true if
we succeed, otherwise return false. */
bool
-aarch64_expand_movmem (rtx *operands)
+aarch64_expand_cpymem (rtx *operands)
{
int n, mode_bits;
rtx dst = operands[0];
/* MOVE_RATIO dictates when we will use the move_by_pieces infrastructure.
move_by_pieces will continually copy the largest safe chunks. So a
7-byte copy is a 4-byte + 2-byte + byte copy. This proves inefficient
- for both size and speed of copy, so we will instead use the "movmem"
+ for both size and speed of copy, so we will instead use the "cpymem"
standard name to implement the copy. This logic does not apply when
targeting -mstrict-align, so keep a sensible default in that case. */
#define MOVE_RATIO(speed) \
;; 0 is dst
;; 1 is src
-;; 2 is size of move in bytes
+;; 2 is size of copy in bytes
;; 3 is alignment
-(define_expand "movmemdi"
+(define_expand "cpymemdi"
[(match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand")
(match_operand:DI 2 "immediate_operand")
(match_operand:DI 3 "immediate_operand")]
"!STRICT_ALIGNMENT"
{
- if (aarch64_expand_movmem (operands))
+ if (aarch64_expand_cpymem (operands))
DONE;
FAIL;
}
#define MOVE_MAX 8
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead.
+ move-instruction pairs, we will do a cpymem or libcall instead.
Without byte/word accesses, we want no more than four instructions;
with, several single byte accesses are better. */
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemqi"
+(define_expand "cpymemqi"
[(parallel [(set (match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand"))
(use (match_operand:DI 2 "immediate_operand"))
FAIL;
})
-(define_expand "movmemdi"
+(define_expand "cpymemdi"
[(parallel [(set (match_operand:BLK 0 "memory_operand")
(match_operand:BLK 1 "memory_operand"))
(use (match_operand:DI 2 "immediate_operand"))
"TARGET_ABI_OPEN_VMS"
"operands[4] = gen_rtx_SYMBOL_REF (Pmode, \"OTS$MOVE\");")
-(define_insn "*movmemdi_1"
+(define_insn "*cpymemdi_1"
[(set (match_operand:BLK 0 "memory_operand" "=m,m")
(match_operand:BLK 1 "memory_operand" "m,m"))
(use (match_operand:DI 2 "nonmemory_operand" "r,i"))
extern const char *arc_output_libcall (const char *);
extern int arc_output_addsi (rtx *operands, bool, bool);
extern int arc_output_commutative_cond_exec (rtx *operands, bool);
-extern bool arc_expand_movmem (rtx *operands);
+extern bool arc_expand_cpymem (rtx *operands);
extern bool prepare_move_operands (rtx *operands, machine_mode mode);
extern void emit_shift (enum rtx_code, rtx, rtx, rtx);
extern void arc_expand_atomic_op (enum rtx_code, rtx, rtx, rtx, rtx, rtx);
return 8;
}
-/* Helper function of arc_expand_movmem. ADDR points to a chunk of memory.
+/* Helper function of arc_expand_cpymem. ADDR points to a chunk of memory.
   Emit code and return a potentially modified address such that offsets
   up to SIZE can be added to yield a legitimate address.
   If REUSE is set, ADDR is a register that may be modified. */
offset ranges. Return true on success. */
bool
-arc_expand_movmem (rtx *operands)
+arc_expand_cpymem (rtx *operands)
{
rtx dst = operands[0];
rtx src = operands[1];
enum by_pieces_operation op,
bool speed_p)
{
- /* Let the movmem expander handle small block moves. */
+ /* Let the cpymem expander handle small block moves. */
if (op == MOVE_BY_PIECES)
return false;
in one reasonably fast instruction. */
#define MOVE_MAX 4
-/* Undo the effects of the movmem pattern presence on STORE_BY_PIECES_P . */
+/* Undo the effects of the cpymem pattern presence on STORE_BY_PIECES_P . */
#define MOVE_RATIO(SPEED) ((SPEED) ? 15 : 3)
/* Define this to be nonzero if shift instructions ignore all but the
(set_attr "type" "loop_end")
(set_attr "length" "4,20")])
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" "")
(match_operand:SI 2 "nonmemory_operand" "")
(match_operand 3 "immediate_operand" "")]
""
- "if (arc_expand_movmem (operands)) DONE; else FAIL;")
+ "if (arc_expand_cpymem (operands)) DONE; else FAIL;")
;; Close http://gcc.gnu.org/bugzilla/show_bug.cgi?id=35803 if this works
;; to the point that we can generate cmove instructions.
extern bool operands_ok_ldrd_strd (rtx, rtx, rtx, HOST_WIDE_INT, bool, bool);
extern bool gen_operands_ldrd_strd (rtx *, bool, bool, bool);
extern bool valid_operands_ldrd_strd (rtx *, bool);
-extern int arm_gen_movmemqi (rtx *);
-extern bool gen_movmem_ldrd_strd (rtx *);
+extern int arm_gen_cpymemqi (rtx *);
+extern bool gen_cpymem_ldrd_strd (rtx *);
extern machine_mode arm_select_cc_mode (RTX_CODE, rtx, rtx);
extern machine_mode arm_select_dominance_cc_mode (rtx, rtx,
HOST_WIDE_INT);
extern const char *thumb_load_double_from_address (rtx *);
extern const char *thumb_output_move_mem_multiple (int, rtx *);
extern const char *thumb_call_via_reg (rtx);
-extern void thumb_expand_movmemqi (rtx *);
+extern void thumb_expand_cpymemqi (rtx *);
extern rtx arm_return_addr (int, rtx);
extern void thumb_reload_out_hi (rtx *);
extern void thumb_set_return_address (rtx, rtx);
core type, optimize_size setting, etc. */
static int
-arm_movmemqi_unaligned (rtx *operands)
+arm_cpymemqi_unaligned (rtx *operands)
{
HOST_WIDE_INT length = INTVAL (operands[2]);
}
int
-arm_gen_movmemqi (rtx *operands)
+arm_gen_cpymemqi (rtx *operands)
{
HOST_WIDE_INT in_words_to_go, out_words_to_go, last_bytes;
HOST_WIDE_INT srcoffset, dstoffset;
return 0;
if (unaligned_access && (INTVAL (operands[3]) & 3) != 0)
- return arm_movmemqi_unaligned (operands);
+ return arm_cpymemqi_unaligned (operands);
if (INTVAL (operands[3]) & 3)
return 0;
return 1;
}
-/* Helper for gen_movmem_ldrd_strd. Increase the address of memory rtx
+/* Helper for gen_cpymem_ldrd_strd. Increase the address of memory rtx
by mode size. */
inline static rtx
next_consecutive_mem (rtx mem)
/* Copy using LDRD/STRD instructions whenever possible.
Returns true upon success. */
bool
-gen_movmem_ldrd_strd (rtx *operands)
+gen_cpymem_ldrd_strd (rtx *operands)
{
unsigned HOST_WIDE_INT len;
HOST_WIDE_INT align;
/* If we cannot generate any LDRD/STRD, try to generate LDM/STM. */
if (!(dst_aligned || src_aligned))
- return arm_gen_movmemqi (operands);
+ return arm_gen_cpymemqi (operands);
  /* If either src or dst is unaligned, we'll be accessing it as pairs
of unaligned SImode accesses. Otherwise we can generate DImode
/* Routines for generating rtl. */
void
-thumb_expand_movmemqi (rtx *operands)
+thumb_expand_cpymemqi (rtx *operands)
{
rtx out = copy_to_mode_reg (SImode, XEXP (operands[0], 0));
rtx in = copy_to_mode_reg (SImode, XEXP (operands[1], 0));
while (len >= 12)
{
- emit_insn (gen_movmem12b (out, in, out, in));
+ emit_insn (gen_cpymem12b (out, in, out, in));
len -= 12;
}
if (len >= 8)
{
- emit_insn (gen_movmem8b (out, in, out, in));
+ emit_insn (gen_cpymem8b (out, in, out, in));
len -= 8;
}
;; We could let this apply for blocks of less than this, but it clobbers so
;; many registers that there is then probably a better way.
-(define_expand "movmemqi"
+(define_expand "cpymemqi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "const_int_operand" "")
if (TARGET_LDRD && current_tune->prefer_ldrd_strd
&& !optimize_function_for_size_p (cfun))
{
- if (gen_movmem_ldrd_strd (operands))
+ if (gen_cpymem_ldrd_strd (operands))
DONE;
FAIL;
}
- if (arm_gen_movmemqi (operands))
+ if (arm_gen_cpymemqi (operands))
DONE;
FAIL;
}
|| INTVAL (operands[2]) > 48)
FAIL;
- thumb_expand_movmemqi (operands);
+ thumb_expand_cpymemqi (operands);
DONE;
}
"
;; Thumb block-move insns
-(define_insn "movmem12b"
+(define_insn "cpymem12b"
[(set (mem:SI (match_operand:SI 2 "register_operand" "0"))
(mem:SI (match_operand:SI 3 "register_operand" "1")))
(set (mem:SI (plus:SI (match_dup 2) (const_int 4)))
(set_attr "type" "store_12")]
)
-(define_insn "movmem8b"
+(define_insn "cpymem8b"
[(set (mem:SI (match_operand:SI 2 "register_operand" "0"))
(mem:SI (match_operand:SI 3 "register_operand" "1")))
(set (mem:SI (plus:SI (match_dup 2) (const_int 4)))
extern void avr_expand_prologue (void);
extern void avr_expand_epilogue (bool);
-extern bool avr_emit_movmemhi (rtx*);
+extern bool avr_emit_cpymemhi (rtx*);
extern int avr_epilogue_uses (int regno);
extern void avr_output_addr_vec (rtx_insn*, rtx);
extern const char* avr_out_round (rtx_insn *, rtx*, int* =NULL);
extern const char* avr_out_addto_sp (rtx*, int*);
extern const char* avr_out_xload (rtx_insn *, rtx*, int*);
-extern const char* avr_out_movmem (rtx_insn *, rtx*, int*);
+extern const char* avr_out_cpymem (rtx_insn *, rtx*, int*);
extern const char* avr_out_insert_bits (rtx*, int*);
extern bool avr_popcount_each_byte (rtx, int, int);
extern bool avr_has_nibble_0xf (rtx);
case ADJUST_LEN_MOV16: output_movhi (insn, op, &len); break;
case ADJUST_LEN_MOV24: avr_out_movpsi (insn, op, &len); break;
case ADJUST_LEN_MOV32: output_movsisf (insn, op, &len); break;
- case ADJUST_LEN_MOVMEM: avr_out_movmem (insn, op, &len); break;
+ case ADJUST_LEN_CPYMEM: avr_out_cpymem (insn, op, &len); break;
case ADJUST_LEN_XLOAD: avr_out_xload (insn, op, &len); break;
case ADJUST_LEN_SEXT: avr_out_sign_extend (insn, op, &len); break;
}
-/* Worker function for movmemhi expander.
+/* Worker function for cpymemhi expander.
XOP[0] Destination as MEM:BLK
XOP[1] Source " "
XOP[2] # Bytes to copy
   Return FALSE if the operand combination is not supported. */
bool
-avr_emit_movmemhi (rtx *xop)
+avr_emit_cpymemhi (rtx *xop)
{
HOST_WIDE_INT count;
machine_mode loop_mode;
Do the copy-loop inline. */
rtx (*fun) (rtx, rtx, rtx)
- = QImode == loop_mode ? gen_movmem_qi : gen_movmem_hi;
+ = QImode == loop_mode ? gen_cpymem_qi : gen_cpymem_hi;
insn = fun (xas, loop_reg, loop_reg);
}
else
{
rtx (*fun) (rtx, rtx)
- = QImode == loop_mode ? gen_movmemx_qi : gen_movmemx_hi;
+ = QImode == loop_mode ? gen_cpymemx_qi : gen_cpymemx_hi;
emit_move_insn (gen_rtx_REG (QImode, 23), a_hi8);
}
-/* Print assembler for movmem_qi, movmem_hi insns...
+/* Print assembler for cpymem_qi, cpymem_hi insns...
$0 : Address Space
$1, $2 : Loop register
Z : Source address
*/
const char*
-avr_out_movmem (rtx_insn *insn ATTRIBUTE_UNUSED, rtx *op, int *plen)
+avr_out_cpymem (rtx_insn *insn ATTRIBUTE_UNUSED, rtx *op, int *plen)
{
addr_space_t as = (addr_space_t) INTVAL (op[0]);
machine_mode loop_mode = GET_MODE (op[1]);
(define_c_enum "unspec"
[UNSPEC_STRLEN
- UNSPEC_MOVMEM
+ UNSPEC_CPYMEM
UNSPEC_INDEX_JMP
UNSPEC_FMUL
UNSPEC_FMULS
tsthi, tstpsi, tstsi, compare, compare64, call,
mov8, mov16, mov24, mov32, reload_in16, reload_in24, reload_in32,
ufract, sfract, round,
- xload, movmem,
+ xload, cpymem,
ashlqi, ashrqi, lshrqi,
ashlhi, ashrhi, lshrhi,
ashlsi, ashrsi, lshrsi,
;;=========================================================================
;; move string (like memcpy)
-(define_expand "movmemhi"
+(define_expand "cpymemhi"
[(parallel [(set (match_operand:BLK 0 "memory_operand" "")
(match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:HI 2 "const_int_operand" ""))
(use (match_operand:HI 3 "const_int_operand" ""))])]
""
{
- if (avr_emit_movmemhi (operands))
+ if (avr_emit_cpymemhi (operands))
DONE;
FAIL;
})
-(define_mode_attr MOVMEM_r_d [(QI "r")
+(define_mode_attr CPYMEM_r_d [(QI "r")
(HI "wd")])
;; $0 : Address Space
;; R30 : source address
;; R26 : destination address
-;; "movmem_qi"
-;; "movmem_hi"
-(define_insn "movmem_<mode>"
+;; "cpymem_qi"
+;; "cpymem_hi"
+(define_insn "cpymem_<mode>"
[(set (mem:BLK (reg:HI REG_X))
(mem:BLK (reg:HI REG_Z)))
(unspec [(match_operand:QI 0 "const_int_operand" "n")]
- UNSPEC_MOVMEM)
- (use (match_operand:QIHI 1 "register_operand" "<MOVMEM_r_d>"))
+ UNSPEC_CPYMEM)
+ (use (match_operand:QIHI 1 "register_operand" "<CPYMEM_r_d>"))
(clobber (reg:HI REG_X))
(clobber (reg:HI REG_Z))
(clobber (reg:QI LPM_REGNO))
(clobber (match_operand:QIHI 2 "register_operand" "=1"))]
""
{
- return avr_out_movmem (insn, operands, NULL);
+ return avr_out_cpymem (insn, operands, NULL);
}
- [(set_attr "adjust_len" "movmem")
+ [(set_attr "adjust_len" "cpymem")
(set_attr "cc" "clobber")])
;; R23:Z : 24-bit source address
;; R26 : 16-bit destination address
-;; "movmemx_qi"
-;; "movmemx_hi"
-(define_insn "movmemx_<mode>"
+;; "cpymemx_qi"
+;; "cpymemx_hi"
+(define_insn "cpymemx_<mode>"
[(set (mem:BLK (reg:HI REG_X))
(mem:BLK (lo_sum:PSI (reg:QI 23)
(reg:HI REG_Z))))
(unspec [(match_operand:QI 0 "const_int_operand" "n")]
- UNSPEC_MOVMEM)
+ UNSPEC_CPYMEM)
(use (reg:QIHI 24))
(clobber (reg:HI REG_X))
(clobber (reg:HI REG_Z))
extern void bfin_expand_call (rtx, rtx, rtx, rtx, int);
extern bool bfin_longcall_p (rtx, int);
extern bool bfin_dsp_memref_p (rtx);
-extern bool bfin_expand_movmem (rtx, rtx, rtx, rtx);
+extern bool bfin_expand_cpymem (rtx, rtx, rtx, rtx);
extern enum reg_class secondary_input_reload_class (enum reg_class,
machine_mode,
/* Adjust DST and SRC by OFFSET bytes, and generate one move in mode MODE. */
static void
-single_move_for_movmem (rtx dst, rtx src, machine_mode mode, HOST_WIDE_INT offset)
+single_move_for_cpymem (rtx dst, rtx src, machine_mode mode, HOST_WIDE_INT offset)
{
rtx scratch = gen_reg_rtx (mode);
rtx srcmem, dstmem;
back on a different method. */
bool
-bfin_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
+bfin_expand_cpymem (rtx dst, rtx src, rtx count_exp, rtx align_exp)
{
rtx srcreg, destreg, countreg;
HOST_WIDE_INT align = 0;
{
if ((count & ~3) == 4)
{
- single_move_for_movmem (dst, src, SImode, offset);
+ single_move_for_cpymem (dst, src, SImode, offset);
offset = 4;
}
else if (count & ~3)
}
if (count & 2)
{
- single_move_for_movmem (dst, src, HImode, offset);
+ single_move_for_cpymem (dst, src, HImode, offset);
offset += 2;
}
}
{
if ((count & ~1) == 2)
{
- single_move_for_movmem (dst, src, HImode, offset);
+ single_move_for_cpymem (dst, src, HImode, offset);
offset = 2;
}
else if (count & ~1)
}
if (count & 1)
{
- single_move_for_movmem (dst, src, QImode, offset);
+ single_move_for_cpymem (dst, src, QImode, offset);
}
return true;
}
#define MOVE_MAX UNITS_PER_WORD
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead. */
+ move-instruction pairs, we will do a cpymem or libcall instead. */
#define MOVE_RATIO(speed) 5
(set_attr "length" "16")
(set_attr "seq_insns" "multi")])
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "const_int_operand" "")
(match_operand:SI 3 "const_int_operand" "")]
""
{
- if (bfin_expand_movmem (operands[0], operands[1], operands[2], operands[3]))
+ if (bfin_expand_cpymem (operands[0], operands[1], operands[2], operands[3]))
DONE;
FAIL;
})
extern void c6x_expand_call (rtx, rtx, bool);
extern rtx c6x_expand_compare (rtx, machine_mode);
extern bool c6x_force_op_for_comparison_p (enum rtx_code, rtx);
-extern bool c6x_expand_movmem (rtx, rtx, rtx, rtx, rtx, rtx);
+extern bool c6x_expand_cpymem (rtx, rtx, rtx, rtx, rtx, rtx);
extern rtx c6x_subword (rtx, bool);
extern void split_di (rtx *, int, rtx *, rtx *);
return true;
}
-/* Expand a block move for a movmemM pattern. */
+/* Expand a block move for a cpymemM pattern. */
bool
-c6x_expand_movmem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
+c6x_expand_cpymem (rtx dst, rtx src, rtx count_exp, rtx align_exp,
rtx expected_align_exp ATTRIBUTE_UNUSED,
rtx expected_size_exp ATTRIBUTE_UNUSED)
{
;; Block moves
;; -------------------------------------------------------------------------
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(use (match_operand:BLK 0 "memory_operand" ""))
(use (match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:SI 2 "nonmemory_operand" ""))
(use (match_operand:SI 5 "const_int_operand" ""))]
""
{
- if (c6x_expand_movmem (operands[0], operands[1], operands[2], operands[3],
+ if (c6x_expand_cpymem (operands[0], operands[1], operands[2], operands[3],
operands[4], operands[5]))
DONE;
else
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(use (match_operand:SI 2 "" ""))
"stpcpy %b1,%b2 # %0 %b1 %b2"
)
-(define_insn "movmemsi"
+(define_insn "cpymemsi"
[(set (match_operand:BLK 0 "memory_operand" "=W,BW")
(match_operand:BLK 1 "memory_operand" "W,BW"))
(use (match_operand:SI 2 "ft32_imm_operand" "KA,KA"))
(set_attr "length_table" "*,movl")
(set_attr "cc" "set_zn,set_znv")])
-;; Implement block moves using movmd. Defining movmemsi allows the full
+;; Implement block copies using movmd. Defining cpymemsi allows the full
;; range of constant lengths (up to 0x40000 bytes when using movmd.l).
;; See h8sx_emit_movmd for details.
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(use (match_operand:BLK 0 "memory_operand" ""))
(use (match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:SI 2 "" ""))
static void
-expand_set_or_movmem_via_loop (rtx destmem, rtx srcmem,
+expand_set_or_cpymem_via_loop (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value,
rtx count, machine_mode mode, int unroll,
int expected_size, bool issetmem)
Other arguments have same meaning as for previous function. */
static void
-expand_set_or_movmem_via_rep (rtx destmem, rtx srcmem,
+expand_set_or_cpymem_via_rep (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value, rtx orig_value,
rtx count,
machine_mode mode, bool issetmem)
/* Output code to copy at most count & (max_size - 1) bytes from SRC to DEST. */
static void
-expand_movmem_epilogue (rtx destmem, rtx srcmem,
+expand_cpymem_epilogue (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx count, int max_size)
{
rtx src, dest;
{
count = expand_simple_binop (GET_MODE (count), AND, count, GEN_INT (max_size - 1),
count, 1, OPTAB_DIRECT);
- expand_set_or_movmem_via_loop (destmem, srcmem, destptr, srcptr, NULL,
+ expand_set_or_cpymem_via_loop (destmem, srcmem, destptr, srcptr, NULL,
count, QImode, 1, 4, false);
return;
}
{
count = expand_simple_binop (counter_mode (count), AND, count,
GEN_INT (max_size - 1), count, 1, OPTAB_DIRECT);
- expand_set_or_movmem_via_loop (destmem, NULL, destptr, NULL,
+ expand_set_or_cpymem_via_loop (destmem, NULL, destptr, NULL,
gen_lowpart (QImode, value), count, QImode,
1, max_size / 2, true);
}
Return value is updated DESTMEM. */
static rtx
-expand_set_or_movmem_prologue (rtx destmem, rtx srcmem,
+expand_set_or_cpymem_prologue (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr, rtx value,
rtx vec_value, rtx count, int align,
int desired_alignment, bool issetmem)
or setmem sequence that is valid for SIZE..2*SIZE-1 bytes
and jump to DONE_LABEL. */
static void
-expand_small_movmem_or_setmem (rtx destmem, rtx srcmem,
+expand_small_cpymem_or_setmem (rtx destmem, rtx srcmem,
rtx destptr, rtx srcptr,
rtx value, rtx vec_value,
rtx count, int size,
done_label:
*/
static void
-expand_set_or_movmem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx srcmem,
+expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves (rtx destmem, rtx srcmem,
rtx *destptr, rtx *srcptr,
machine_mode mode,
rtx value, rtx vec_value,
/* Handle sizes > 3. */
for (;size2 > 2; size2 >>= 1)
- expand_small_movmem_or_setmem (destmem, srcmem,
+ expand_small_cpymem_or_setmem (destmem, srcmem,
*destptr, *srcptr,
value, vec_value,
*count,
is returned, but also of SRC, which is passed as a pointer for that
reason. */
static rtx
-expand_set_or_movmem_constant_prologue (rtx dst, rtx *srcp, rtx destreg,
+expand_set_or_cpymem_constant_prologue (rtx dst, rtx *srcp, rtx destreg,
rtx srcreg, rtx value, rtx vec_value,
int desired_align, int align_bytes,
bool issetmem)
3) Main body: the copying loop itself, copying in SIZE_NEEDED chunks
with specified algorithm. */
bool
-ix86_expand_set_or_movmem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
+ix86_expand_set_or_cpymem (rtx dst, rtx src, rtx count_exp, rtx val_exp,
rtx align_exp, rtx expected_align_exp,
rtx expected_size_exp, rtx min_size_exp,
rtx max_size_exp, rtx probable_max_size_exp,
if (misaligned_prologue_used)
{
/* Misaligned move prologue handled small blocks by itself. */
- expand_set_or_movmem_prologue_epilogue_by_misaligned_moves
+ expand_set_or_cpymem_prologue_epilogue_by_misaligned_moves
(dst, src, &destreg, &srcreg,
move_mode, promoted_val, vec_promoted_val,
&count_exp,
dst = change_address (dst, BLKmode, destreg);
if (!issetmem)
src = change_address (src, BLKmode, srcreg);
- dst = expand_set_or_movmem_prologue (dst, src, destreg, srcreg,
+ dst = expand_set_or_cpymem_prologue (dst, src, destreg, srcreg,
promoted_val, vec_promoted_val,
count_exp, align, desired_align,
issetmem);
{
/* If we know how many bytes need to be stored before dst is
sufficiently aligned, maintain aliasing info accurately. */
- dst = expand_set_or_movmem_constant_prologue (dst, &src, destreg,
+ dst = expand_set_or_cpymem_constant_prologue (dst, &src, destreg,
srcreg,
promoted_val,
vec_promoted_val,
case loop_1_byte:
case loop:
case unrolled_loop:
- expand_set_or_movmem_via_loop (dst, src, destreg, srcreg, promoted_val,
+ expand_set_or_cpymem_via_loop (dst, src, destreg, srcreg, promoted_val,
count_exp, move_mode, unroll_factor,
expected_size, issetmem);
break;
case vector_loop:
- expand_set_or_movmem_via_loop (dst, src, destreg, srcreg,
+ expand_set_or_cpymem_via_loop (dst, src, destreg, srcreg,
vec_promoted_val, count_exp, move_mode,
unroll_factor, expected_size, issetmem);
break;
case rep_prefix_8_byte:
case rep_prefix_4_byte:
case rep_prefix_1_byte:
- expand_set_or_movmem_via_rep (dst, src, destreg, srcreg, promoted_val,
+ expand_set_or_cpymem_via_rep (dst, src, destreg, srcreg, promoted_val,
val_exp, count_exp, move_mode, issetmem);
break;
}
vec_promoted_val, count_exp,
epilogue_size_needed);
else
- expand_movmem_epilogue (dst, src, destreg, srcreg, count_exp,
+ expand_cpymem_epilogue (dst, src, destreg, srcreg, count_exp,
epilogue_size_needed);
}
}
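As an illustrative aside, not part of the patch: the i386 helpers renamed above split a copy into a prologue, a main body and an epilogue, as their names suggest. The toy C model below sketches that structure under that reading; toy_cpymem and its layout are invented here and are not the actual i386 expansion.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model of the prologue / main body / epilogue structure
   (illustration only):
     1) prologue: byte copies until the destination is aligned,
     2) main body: word-sized copies,
     3) epilogue: byte copies for the remaining tail.  */
static void
toy_cpymem (void *dstp, const void *srcp, size_t len)
{
  unsigned char *dst = dstp;
  const unsigned char *src = srcp;

  /* Prologue: align the destination to the word size.  */
  while (len && ((uintptr_t) dst % sizeof (uintptr_t)) != 0)
    {
      *dst++ = *src++;
      len--;
    }

  /* Main body: copy in word-sized chunks.  */
  while (len >= sizeof (uintptr_t))
    {
      uintptr_t tmp;
      memcpy (&tmp, src, sizeof tmp);	/* source may stay unaligned */
      memcpy (dst, &tmp, sizeof tmp);
      dst += sizeof tmp;
      src += sizeof tmp;
      len -= sizeof tmp;
    }

  /* Epilogue: whatever is left.  */
  while (len--)
    *dst++ = *src++;
}

int
main (void)
{
  char src[23] = "prologue-body-epilogue";
  char dst[23];
  toy_cpymem (dst, src, sizeof src);
  puts (dst);
  return 0;
}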
extern int avx_vperm2f128_parallel (rtx par, machine_mode mode);
extern bool ix86_expand_strlen (rtx, rtx, rtx, rtx);
-extern bool ix86_expand_set_or_movmem (rtx, rtx, rtx, rtx, rtx, rtx,
+extern bool ix86_expand_set_or_cpymem (rtx, rtx, rtx, rtx, rtx, rtx,
rtx, rtx, rtx, rtx, bool);
extern bool constant_address_p (rtx);
? GET_MODE_SIZE (TImode) : UNITS_PER_WORD)
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead.
+ move-instruction pairs, we will do a cpymem or libcall instead.
Increasing the value will always make code faster, but eventually
incurs high cost in increased code size.
(set_attr "length_immediate" "0")
(set_attr "modrm" "0")])
-(define_expand "movmem<mode>"
+(define_expand "cpymem<mode>"
[(use (match_operand:BLK 0 "memory_operand"))
(use (match_operand:BLK 1 "memory_operand"))
(use (match_operand:SWI48 2 "nonmemory_operand"))
(use (match_operand:SI 8 ""))]
""
{
- if (ix86_expand_set_or_movmem (operands[0], operands[1],
+ if (ix86_expand_set_or_cpymem (operands[0], operands[1],
operands[2], NULL, operands[3],
operands[4], operands[5],
operands[6], operands[7],
(use (match_operand:SI 8 ""))]
""
{
- if (ix86_expand_set_or_movmem (operands[0], NULL,
+ if (ix86_expand_set_or_cpymem (operands[0], NULL,
operands[1], operands[2],
operands[3], operands[4],
operands[5], operands[6],
}
}")
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" ""))
(use (match_operand:SI 2 "" ""))
;; 1 = source (mem:BLK ...)
;; 2 = count
;; 3 = alignment
-(define_expand "movmemhi"
+(define_expand "cpymemhi"
[(match_operand 0 "ap_operand" "")
(match_operand 1 "ap_operand" "")
(match_operand 2 "m32c_r3_operand" "")
(match_operand 3 "" "")
]
""
- "if (m32c_expand_movmemhi(operands)) DONE; FAIL;"
+ "if (m32c_expand_cpymemhi(operands)) DONE; FAIL;"
)
;; We can't use mode iterators for these because M16C uses r1h to extend
;; 3 = dest (in)
;; 4 = src (in)
;; 5 = count (in)
-(define_insn "movmemhi_bhi_op"
+(define_insn "cpymemhi_bhi_op"
[(set (mem:QI (match_operand:HI 3 "ap_operand" "0"))
(mem:QI (match_operand:HI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
"TARGET_A16"
"mov.b:q\t#0,r1h\n\tsmovf.b\t; %0[0..%2-1]=r1h%1[]"
)
-(define_insn "movmemhi_bpsi_op"
+(define_insn "cpymemhi_bpsi_op"
[(set (mem:QI (match_operand:PSI 3 "ap_operand" "0"))
(mem:QI (match_operand:PSI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
"TARGET_A24"
"smovf.b\t; %0[0..%2-1]=%1[]"
)
-(define_insn "movmemhi_whi_op"
+(define_insn "cpymemhi_whi_op"
[(set (mem:HI (match_operand:HI 3 "ap_operand" "0"))
(mem:HI (match_operand:HI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
"TARGET_A16"
"mov.b:q\t#0,r1h\n\tsmovf.w\t; %0[0..%2-1]=r1h%1[]"
)
-(define_insn "movmemhi_wpsi_op"
+(define_insn "cpymemhi_wpsi_op"
[(set (mem:HI (match_operand:PSI 3 "ap_operand" "0"))
(mem:HI (match_operand:PSI 4 "ap_operand" "1")))
(set (match_operand:HI 2 "m32c_r3_operand" "=R3w")
int m32c_expand_cmpstr (rtx *);
int m32c_expand_insv (rtx *);
int m32c_expand_movcc (rtx *);
-int m32c_expand_movmemhi (rtx *);
+int m32c_expand_cpymemhi (rtx *);
int m32c_expand_movstr (rtx *);
void m32c_expand_neg_mulpsi3 (rtx *);
int m32c_expand_setmemhi (rtx *);
addresses, not [mem] syntax. $0 is the destination (MEM:BLK), $1
is the source (MEM:BLK), and $2 the count (HI). */
int
-m32c_expand_movmemhi(rtx *operands)
+m32c_expand_cpymemhi(rtx *operands)
{
rtx desta, srca, count;
rtx desto, srco, counto;
{
count = copy_to_mode_reg (HImode, GEN_INT (INTVAL (count) / 2));
if (TARGET_A16)
- emit_insn (gen_movmemhi_whi_op (desto, srco, counto, desta, srca, count));
+ emit_insn (gen_cpymemhi_whi_op (desto, srco, counto, desta, srca, count));
else
- emit_insn (gen_movmemhi_wpsi_op (desto, srco, counto, desta, srca, count));
+ emit_insn (gen_cpymemhi_wpsi_op (desto, srco, counto, desta, srca, count));
return 1;
}
count = copy_to_mode_reg (HImode, count);
if (TARGET_A16)
- emit_insn (gen_movmemhi_bhi_op (desto, srco, counto, desta, srca, count));
+ emit_insn (gen_cpymemhi_bhi_op (desto, srco, counto, desta, srca, count));
else
- emit_insn (gen_movmemhi_bpsi_op (desto, srco, counto, desta, srca, count));
+ emit_insn (gen_cpymemhi_bpsi_op (desto, srco, counto, desta, srca, count));
return 1;
}
to the word after the end of the source block, and dst_reg to point
to the last word of the destination block, provided that the block
is MAX_MOVE_BYTES long. */
- emit_insn (gen_movmemsi_internal (dst_reg, src_reg, at_a_time,
+ emit_insn (gen_cpymemsi_internal (dst_reg, src_reg, at_a_time,
new_dst_reg, new_src_reg));
emit_move_insn (dst_reg, new_dst_reg);
emit_move_insn (src_reg, new_src_reg);
}
if (leftover)
- emit_insn (gen_movmemsi_internal (dst_reg, src_reg, GEN_INT (leftover),
+ emit_insn (gen_cpymemsi_internal (dst_reg, src_reg, GEN_INT (leftover),
gen_reg_rtx (SImode),
gen_reg_rtx (SImode)));
return 1;
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" ""))
(use (match_operand:SI 2 "immediate_operand" ""))
;; Insn generated by block moves
-(define_insn "movmemsi_internal"
+(define_insn "cpymemsi_internal"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "r")) ;; destination
(mem:BLK (match_operand:SI 1 "register_operand" "r"))) ;; source
(use (match_operand:SI 2 "m32r_block_immediate_operand" "J"));; # bytes to move
;; Block move - adapted from m88k.md
;; ------------------------------------------------------------------------
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (mem:BLK (match_operand:BLK 0 "" ""))
(mem:BLK (match_operand:BLK 1 "" "")))
(use (match_operand:SI 2 "general_operand" ""))
microblaze_block_move_straight (dest, src, leftover);
}
-/* Expand a movmemsi instruction. */
+/* Expand a cpymemsi instruction. */
bool
microblaze_expand_block_move (rtx dest, rtx src, rtx length, rtx align_rtx)
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))
{
if (op == STORE_BY_PIECES)
return mips_store_by_pieces_p (size, align);
- if (op == MOVE_BY_PIECES && HAVE_movmemsi)
+ if (op == MOVE_BY_PIECES && HAVE_cpymemsi)
{
- /* movmemsi is meant to generate code that is at least as good as
- move_by_pieces. However, movmemsi effectively uses a by-pieces
+ /* cpymemsi is meant to generate code that is at least as good as
+ move_by_pieces. However, cpymemsi effectively uses a by-pieces
implementation both for moves smaller than a word and for
word-aligned moves of no more than MIPS_MAX_MOVE_BYTES_STRAIGHT
bytes. We should allow the tree-level optimisers to do such
moves by pieces, as it often exposes other optimization
- opportunities. We might as well continue to use movmemsi at
+ opportunities. We might as well continue to use cpymemsi at
the rtl level though, as it produces better code when
scheduling is disabled (such as at -O). */
if (currently_expanding_to_rtl)
emit_insn (gen_nop ());
}
-/* Expand a movmemsi instruction, which copies LENGTH bytes from
+/* Expand a cpymemsi instruction, which copies LENGTH bytes from
memory reference SRC to memory reference DEST. */
bool
#define MIPS_MIN_MOVE_MEM_ALIGN 16
/* The maximum number of bytes that can be copied by one iteration of
- a movmemsi loop; see mips_block_move_loop. */
+ a cpymemsi loop; see mips_block_move_loop. */
#define MIPS_MAX_MOVE_BYTES_PER_LOOP_ITER \
(UNITS_PER_WORD * 4)
/* The maximum number of bytes that can be copied by a straight-line
- implementation of movmemsi; see mips_block_move_straight. We want
+ implementation of cpymemsi; see mips_block_move_straight. We want
to make sure that any loop-based implementation will iterate at
least twice. */
#define MIPS_MAX_MOVE_BYTES_STRAIGHT \
#define MIPS_CALL_RATIO 8
-/* Any loop-based implementation of movmemsi will have at least
+/* Any loop-based implementation of cpymemsi will have at least
MIPS_MAX_MOVE_BYTES_STRAIGHT / UNITS_PER_WORD memory-to-memory
moves, so allow individual copies of fewer elements.
- When movmemsi is not available, use a value approximating
+ When cpymemsi is not available, use a value approximating
the length of a memcpy call sequence, so that move_by_pieces
will generate inline code if it is shorter than a function call.
Since move_by_pieces_ninsns counts memory-to-memory moves, but
value of MIPS_CALL_RATIO to take that into account. */
#define MOVE_RATIO(speed) \
- (HAVE_movmemsi \
+ (HAVE_cpymemsi \
? MIPS_MAX_MOVE_BYTES_STRAIGHT / MOVE_MAX \
: MIPS_CALL_RATIO / 2)
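As an illustrative aside, not part of the patch: the MOVE_RATIO comments here, like the aarch64 note earlier in the patch (a 7-byte by-pieces copy is a 4-byte + 2-byte + 1-byte copy), weigh the number of by-pieces moves against MOVE_RATIO. The toy C sketch below models that greedy chunking; by_pieces_moves is an invented name, not the real move_by_pieces_ninsns.

#include <stdio.h>

/* Toy model: count the memory-to-memory moves a by-pieces copy of
   LEN bytes would need, using the largest chunk that still fits,
   capped at MAX_CHUNK bytes.  */
static unsigned
by_pieces_moves (unsigned len, unsigned max_chunk)
{
  unsigned n = 0;
  while (len)
    {
      unsigned chunk = max_chunk;
      while (chunk > len)
	chunk >>= 1;
      len -= chunk;
      n++;
    }
  return n;
}

int
main (void)
{
  /* 7-byte copy with 4-byte chunks: 4 + 2 + 1 => 3 moves.  If this
     count reaches MOVE_RATIO, the cpymem pattern or a libcall is
     preferred instead.  */
  printf ("%u\n", by_pieces_moves (7, 4));
  return 0;
}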
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))
-/* Auxiliary functions for expand movmem, setmem, cmpmem, load_multiple
+/* Auxiliary functions for expand cpymem, setmem, cmpmem, load_multiple
and store_multiple pattern of Andes NDS32 cpu for GNU compiler
Copyright (C) 2012-2019 Free Software Foundation, Inc.
Contributed by Andes Technology Corporation.
/* ------------------------------------------------------------------------ */
-/* Auxiliary function for expand movmem pattern. */
+/* Auxiliary function for expand cpymem pattern. */
static bool
-nds32_expand_movmemsi_loop_unknown_size (rtx dstmem, rtx srcmem,
+nds32_expand_cpymemsi_loop_unknown_size (rtx dstmem, rtx srcmem,
rtx size,
rtx alignment)
{
- /* Emit loop version of movmem.
+ /* Emit loop version of cpymem.
andi $size_least_3_bit, $size, #~7
add $dst_end, $dst, $size
}
static bool
-nds32_expand_movmemsi_loop_known_size (rtx dstmem, rtx srcmem,
+nds32_expand_cpymemsi_loop_known_size (rtx dstmem, rtx srcmem,
rtx size, rtx alignment)
{
rtx dst_base_reg, src_base_reg;
if (total_bytes < 8)
{
- /* Emit total_bytes less than 8 loop version of movmem.
+ /* Emit total_bytes less than 8 loop version of cpymem.
add $dst_end, $dst, $size
move $dst_itr, $dst
.Lbyte_mode_loop:
}
else if (total_bytes % 8 == 0)
{
- /* Emit multiple of 8 loop version of movmem.
+ /* Emit multiple of 8 loop version of cpymem.
add $dst_end, $dst, $size
move $dst_itr, $dst
else
{
/* Handle size greater than 8, and not a multiple of 8. */
- return nds32_expand_movmemsi_loop_unknown_size (dstmem, srcmem,
+ return nds32_expand_cpymemsi_loop_unknown_size (dstmem, srcmem,
size, alignment);
}
}
static bool
-nds32_expand_movmemsi_loop (rtx dstmem, rtx srcmem,
+nds32_expand_cpymemsi_loop (rtx dstmem, rtx srcmem,
rtx size, rtx alignment)
{
if (CONST_INT_P (size))
- return nds32_expand_movmemsi_loop_known_size (dstmem, srcmem,
+ return nds32_expand_cpymemsi_loop_known_size (dstmem, srcmem,
size, alignment);
else
- return nds32_expand_movmemsi_loop_unknown_size (dstmem, srcmem,
+ return nds32_expand_cpymemsi_loop_unknown_size (dstmem, srcmem,
size, alignment);
}
static bool
-nds32_expand_movmemsi_unroll (rtx dstmem, rtx srcmem,
+nds32_expand_cpymemsi_unroll (rtx dstmem, rtx srcmem,
rtx total_bytes, rtx alignment)
{
rtx dst_base_reg, src_base_reg;
   This is an auxiliary extern function to help create the rtx template.
Check nds32-multiple.md file for the patterns. */
bool
-nds32_expand_movmemsi (rtx dstmem, rtx srcmem, rtx total_bytes, rtx alignment)
+nds32_expand_cpymemsi (rtx dstmem, rtx srcmem, rtx total_bytes, rtx alignment)
{
- if (nds32_expand_movmemsi_unroll (dstmem, srcmem, total_bytes, alignment))
+ if (nds32_expand_cpymemsi_unroll (dstmem, srcmem, total_bytes, alignment))
return true;
if (!optimize_size && optimize > 2)
- return nds32_expand_movmemsi_loop (dstmem, srcmem, total_bytes, alignment);
+ return nds32_expand_cpymemsi_loop (dstmem, srcmem, total_bytes, alignment);
return false;
}
;; operands[3] is the known shared alignment.
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(match_operand:BLK 0 "general_operand" "")
(match_operand:BLK 1 "general_operand" "")
(match_operand:SI 2 "nds32_reg_constant_operand" "")
(match_operand:SI 3 "const_int_operand" "")]
""
{
- if (nds32_expand_movmemsi (operands[0],
+ if (nds32_expand_cpymemsi (operands[0],
operands[1],
operands[2],
operands[3]))
extern rtx nds32_expand_load_multiple (int, int, rtx, rtx, bool, rtx *);
extern rtx nds32_expand_store_multiple (int, int, rtx, rtx, bool, rtx *);
-extern bool nds32_expand_movmemsi (rtx, rtx, rtx, rtx);
+extern bool nds32_expand_cpymemsi (rtx, rtx, rtx, rtx);
extern bool nds32_expand_setmem (rtx, rtx, rtx, rtx, rtx, rtx);
extern bool nds32_expand_strlen (rtx, rtx, rtx, rtx);
static bool forward_branch_p (rtx_insn *);
static void compute_zdepwi_operands (unsigned HOST_WIDE_INT, unsigned *);
static void compute_zdepdi_operands (unsigned HOST_WIDE_INT, unsigned *);
-static int compute_movmem_length (rtx_insn *);
+static int compute_cpymem_length (rtx_insn *);
static int compute_clrmem_length (rtx_insn *);
static bool pa_assemble_integer (rtx, unsigned int, int);
static void remove_useless_addtr_insns (int);
count insns rather than emit them. */
static int
-compute_movmem_length (rtx_insn *insn)
+compute_cpymem_length (rtx_insn *insn)
{
rtx pat = PATTERN (insn);
unsigned int align = INTVAL (XEXP (XVECEXP (pat, 0, 7), 0));
&& GET_CODE (XEXP (XVECEXP (pat, 0, 0), 1)) == MEM
&& GET_MODE (XEXP (XVECEXP (pat, 0, 0), 0)) == BLKmode
&& GET_MODE (XEXP (XVECEXP (pat, 0, 0), 1)) == BLKmode)
- length += compute_movmem_length (insn) - 4;
+ length += compute_cpymem_length (insn) - 4;
/* Block clear pattern. */
else if (NONJUMP_INSN_P (insn)
&& GET_CODE (pat) == PARALLEL
;; The definition of this insn does not really explain what it does,
;; but it should suffice that anything generated as this insn will be
-;; recognized as a movmemsi operation, and that it will not successfully
+;; recognized as a cpymemsi operation, and that it will not successfully
;; combine with anything.
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(clobber (match_dup 4))
;; operands 0 and 1 are both equivalent to symbolic MEMs. Thus, we are
;; forced to internally copy operands 0 and 1 to operands 7 and 8,
;; respectively. We then split or peephole optimize after reload.
-(define_insn "movmemsi_prereload"
+(define_insn "cpymemsi_prereload"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "r,r"))
(mem:BLK (match_operand:SI 1 "register_operand" "r,r")))
(clobber (match_operand:SI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
}
}")
-(define_insn "movmemsi_postreload"
+(define_insn "cpymemsi_postreload"
[(set (mem:BLK (match_operand:SI 0 "register_operand" "+r,r"))
(mem:BLK (match_operand:SI 1 "register_operand" "+r,r")))
(clobber (match_operand:SI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
"* return pa_output_block_move (operands, !which_alternative);"
[(set_attr "type" "multi,multi")])
-(define_expand "movmemdi"
+(define_expand "cpymemdi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(clobber (match_dup 4))
;; operands 0 and 1 are both equivalent to symbolic MEMs. Thus, we are
;; forced to internally copy operands 0 and 1 to operands 7 and 8,
;; respectively. We then split or peephole optimize after reload.
-(define_insn "movmemdi_prereload"
+(define_insn "cpymemdi_prereload"
[(set (mem:BLK (match_operand:DI 0 "register_operand" "r,r"))
(mem:BLK (match_operand:DI 1 "register_operand" "r,r")))
(clobber (match_operand:DI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
}
}")
-(define_insn "movmemdi_postreload"
+(define_insn "cpymemdi_postreload"
[(set (mem:BLK (match_operand:DI 0 "register_operand" "+r,r"))
(mem:BLK (match_operand:DI 1 "register_operand" "+r,r")))
(clobber (match_operand:DI 2 "register_operand" "=&r,&r")) ;loop cnt/tmp
UNSPECV_BLOCKAGE
UNSPECV_SETD
UNSPECV_SETI
- UNSPECV_MOVMEM
+ UNSPECV_CPYMEM
])
(define_constants
[(set_attr "length" "2,2,4,4,2")])
;; Expand a block move. We turn this into a move loop.
-(define_expand "movmemhi"
- [(parallel [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
+(define_expand "cpymemhi"
+ [(parallel [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:BLK 0 "general_operand" "=g")
(match_operand:BLK 1 "general_operand" "g")
(match_operand:HI 2 "immediate_operand" "i")
}")
;; Expand a block move. We turn this into a move loop.
-(define_insn_and_split "movmemhi1"
- [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
+(define_insn_and_split "cpymemhi1"
+ [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:HI 0 "register_operand" "+r")
(match_operand:HI 1 "register_operand" "+r")
(match_operand:HI 2 "register_operand" "+r")
""
"#"
"reload_completed"
- [(parallel [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
+ [(parallel [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_dup 0)
(match_dup 1)
(match_dup 2)
(clobber (reg:CC CC_REGNUM))])]
"")
-(define_insn "movmemhi_nocc"
- [(unspec_volatile [(const_int 0)] UNSPECV_MOVMEM)
+(define_insn "cpymemhi_nocc"
+ [(unspec_volatile [(const_int 0)] UNSPECV_CPYMEM)
(match_operand:HI 0 "register_operand" "+r")
(match_operand:HI 1 "register_operand" "+r")
(match_operand:HI 2 "register_operand" "+r")
emit_insn(gen_nop ());
}
-/* Expand a movmemsi instruction, which copies LENGTH bytes from
+/* Expand a cpymemsi instruction, which copies LENGTH bytes from
memory reference SRC to memory reference DEST. */
bool
#undef PTRDIFF_TYPE
#define PTRDIFF_TYPE (POINTER_SIZE == 64 ? "long int" : "int")
-/* The maximum number of bytes copied by one iteration of a movmemsi loop. */
+/* The maximum number of bytes copied by one iteration of a cpymemsi loop. */
#define RISCV_MAX_MOVE_BYTES_PER_LOOP_ITER (UNITS_PER_WORD * 4)
/* The maximum number of bytes that can be copied by a straight-line
- movmemsi implementation. */
+ cpymemsi implementation. */
#define RISCV_MAX_MOVE_BYTES_STRAIGHT (RISCV_MAX_MOVE_BYTES_PER_LOOP_ITER * 3)
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead.
+ move-instruction pairs, we will do a cpymem or libcall instead.
Do not use move_by_pieces at all when strict alignment is not
in effect but the target has slow unaligned accesses; in this
- case, movmem or libcall is more efficient. */
+ case, cpymem or libcall is more efficient. */
#define MOVE_RATIO(speed) \
(!STRICT_ALIGNMENT && riscv_slow_unaligned_access_p ? 1 : \
DONE;
})
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "general_operand")
(match_operand:BLK 1 "general_operand"))
(use (match_operand:SI 2 ""))
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "")
(match_operand:BLK 1 ""))
(use (match_operand:SI 2 ""))
(UNSPEC_CONST 13)
(UNSPEC_MOVSTR 20)
- (UNSPEC_MOVMEM 21)
+ (UNSPEC_CPYMEM 21)
(UNSPEC_SETMEM 22)
(UNSPEC_STRLEN 23)
(UNSPEC_CMPSTRN 24)
(set_attr "timings" "1111")] ;; The timing is a guesstimate.
)
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel
[(set (match_operand:BLK 0 "memory_operand") ;; Dest
(match_operand:BLK 1 "memory_operand")) ;; Source
(use (match_operand:SI 2 "register_operand")) ;; Length in bytes
(match_operand 3 "immediate_operand") ;; Align
- (unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_MOVMEM)]
+ (unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_CPYMEM)]
)]
"rx_allow_string_insns"
{
emit_move_insn (len, force_operand (operands[2], NULL_RTX));
operands[0] = replace_equiv_address_nv (operands[0], addr1);
operands[1] = replace_equiv_address_nv (operands[1], addr2);
- emit_insn (gen_rx_movmem ());
+ emit_insn (gen_rx_cpymem ());
DONE;
}
)
-(define_insn "rx_movmem"
+(define_insn "rx_cpymem"
[(set (mem:BLK (reg:SI 1))
(mem:BLK (reg:SI 2)))
(use (reg:SI 3))
- (unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_MOVMEM)
+ (unspec_volatile:BLK [(reg:SI 1) (reg:SI 2) (reg:SI 3)] UNSPEC_CPYMEM)
(clobber (reg:SI 1))
(clobber (reg:SI 2))
(clobber (reg:SI 3))]
extern void s390_expand_plus_operand (rtx, rtx, rtx);
extern void emit_symbolic_move (rtx *);
extern void s390_load_address (rtx, rtx);
-extern bool s390_expand_movmem (rtx, rtx, rtx);
+extern bool s390_expand_cpymem (rtx, rtx, rtx);
extern void s390_expand_setmem (rtx, rtx, rtx);
extern bool s390_expand_cmpmem (rtx, rtx, rtx, rtx);
extern void s390_expand_vec_strlen (rtx, rtx, rtx);
/* Emit code to move LEN bytes from SRC to DST. */
bool
-s390_expand_movmem (rtx dst, rtx src, rtx len)
+s390_expand_cpymem (rtx dst, rtx src, rtx len)
{
/* When tuning for z10 or higher we rely on the Glibc functions to
do the right thing. Only for constant lengths below 64k we will
{
rtx newdst = adjust_address (dst, BLKmode, o);
rtx newsrc = adjust_address (src, BLKmode, o);
- emit_insn (gen_movmem_short (newdst, newsrc,
+ emit_insn (gen_cpymem_short (newdst, newsrc,
GEN_INT (l > 256 ? 255 : l - 1)));
}
}
else if (TARGET_MVCLE)
{
- emit_insn (gen_movmem_long (dst, src, convert_to_mode (Pmode, len, 1)));
+ emit_insn (gen_cpymem_long (dst, src, convert_to_mode (Pmode, len, 1)));
}
else
emit_insn (prefetch);
}
- emit_insn (gen_movmem_short (dst, src, GEN_INT (255)));
+ emit_insn (gen_cpymem_short (dst, src, GEN_INT (255)));
s390_load_address (dst_addr,
gen_rtx_PLUS (Pmode, dst_addr, GEN_INT (256)));
s390_load_address (src_addr,
emit_jump (loop_start_label);
emit_label (loop_end_label);
- emit_insn (gen_movmem_short (dst, src,
+ emit_insn (gen_cpymem_short (dst, src,
convert_to_mode (Pmode, count, 1)));
emit_label (end_label);
}
if (l > 1)
{
rtx newdstp1 = adjust_address (dst, BLKmode, o + 1);
- emit_insn (gen_movmem_short (newdstp1, newdst,
+ emit_insn (gen_cpymem_short (newdstp1, newdst,
GEN_INT (l > 257 ? 255 : l - 2)));
}
}
/* Set the first byte in the block to the value and use an
overlapping mvc for the block. */
emit_move_insn (adjust_address (dst, QImode, 0), val);
- emit_insn (gen_movmem_short (dstp1, dst, GEN_INT (254)));
+ emit_insn (gen_cpymem_short (dstp1, dst, GEN_INT (254)));
}
s390_load_address (dst_addr,
gen_rtx_PLUS (Pmode, dst_addr, GEN_INT (256)));
emit_move_insn (adjust_address (dst, QImode, 0), val);
	  /* execute only uses the lowest 8 bits of count; that's
	     exactly what we need here. */
- emit_insn (gen_movmem_short (dstp1, dst,
+ emit_insn (gen_cpymem_short (dstp1, dst,
convert_to_mode (Pmode, count, 1)));
}
dest = adjust_address (dest, BLKmode, 0);
set_mem_size (dest, size);
- s390_expand_movmem (dest, src_mem, GEN_INT (size));
+ s390_expand_cpymem (dest, src_mem, GEN_INT (size));
return true;
}
;
-; movmemM instruction pattern(s).
+; cpymemM instruction pattern(s).
;
-(define_expand "movmem<mode>"
+(define_expand "cpymem<mode>"
[(set (match_operand:BLK 0 "memory_operand" "") ; destination
(match_operand:BLK 1 "memory_operand" "")) ; source
(use (match_operand:GPR 2 "general_operand" "")) ; count
(match_operand 3 "" "")]
""
{
- if (s390_expand_movmem (operands[0], operands[1], operands[2]))
+ if (s390_expand_cpymem (operands[0], operands[1], operands[2]))
DONE;
else
FAIL;
; Move a block that is up to 256 bytes in length.
; The block length is taken as (operands[2] % 256) + 1.
-(define_expand "movmem_short"
+(define_expand "cpymem_short"
[(parallel
[(set (match_operand:BLK 0 "memory_operand" "")
(match_operand:BLK 1 "memory_operand" ""))
""
"operands[3] = gen_rtx_SCRATCH (Pmode);")
-(define_insn "*movmem_short"
+(define_insn "*cpymem_short"
[(set (match_operand:BLK 0 "memory_operand" "=Q,Q,Q,Q")
(match_operand:BLK 1 "memory_operand" "Q,Q,Q,Q"))
(use (match_operand 2 "nonmemory_operand" "n,a,a,a"))
; Move a block of arbitrary length.
-(define_expand "movmem_long"
+(define_expand "cpymem_long"
[(parallel
[(clobber (match_dup 2))
(clobber (match_dup 3))
operands[3] = reg1;
})
-(define_insn "*movmem_long"
+(define_insn "*cpymem_long"
[(clobber (match_operand:<DBL> 0 "register_operand" "=d"))
(clobber (match_operand:<DBL> 1 "register_operand" "=d"))
(set (mem:BLK (subreg:P (match_operand:<DBL> 2 "register_operand" "0") 0))
[(set_attr "length" "8")
(set_attr "type" "vs")])
-(define_insn "*movmem_long_31z"
+(define_insn "*cpymem_long_31z"
[(clobber (match_operand:TI 0 "register_operand" "=d"))
(clobber (match_operand:TI 1 "register_operand" "=d"))
(set (mem:BLK (subreg:SI (match_operand:TI 2 "register_operand" "0") 4))
;; String/block move insn.
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (mem:BLK (match_operand:BLK 0))
(mem:BLK (match_operand:BLK 1)))
(use (match_operand:SI 2 "nonmemory_operand"))
#define MOVE_MAX 8
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead. */
+ move-instruction pairs, we will do a cpymem or libcall instead. */
#define MOVE_RATIO(speed) ((speed) ? 8 : 3)
extern const char * vax_output_int_move (rtx, rtx *, machine_mode);
extern const char * vax_output_int_add (rtx_insn *, rtx *, machine_mode);
extern const char * vax_output_int_subtract (rtx_insn *, rtx *, machine_mode);
-extern const char * vax_output_movmemsi (rtx, rtx *);
#endif /* RTX_CODE */
#ifdef REAL_VALUE_TYPE
#define MOVE_MAX 8
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction pairs, we will do a movmem or libcall instead. */
+ move-instruction pairs, we will do a cpymem or libcall instead. */
#define MOVE_RATIO(speed) ((speed) ? 6 : 3)
#define CLEAR_RATIO(speed) ((speed) ? 6 : 2)
}")
;; This is here to accept 4 arguments and pass the first 3 along
-;; to the movmemhi1 pattern that really does the work.
-(define_expand "movmemhi"
+;; to the cpymemhi1 pattern that really does the work.
+(define_expand "cpymemhi"
[(set (match_operand:BLK 0 "general_operand" "=g")
(match_operand:BLK 1 "general_operand" "g"))
(use (match_operand:HI 2 "general_operand" "g"))
""
"
{
- emit_insn (gen_movmemhi1 (operands[0], operands[1], operands[2]));
+ emit_insn (gen_cpymemhi1 (operands[0], operands[1], operands[2]));
DONE;
}")
;; that anything generated as this insn will be recognized as one
;; and that it won't successfully combine with anything.
-(define_insn "movmemhi1"
+(define_insn "cpymemhi1"
[(set (match_operand:BLK 0 "memory_operand" "=o")
(match_operand:BLK 1 "memory_operand" "o"))
(use (match_operand:HI 2 "general_operand" "g"))
always make code faster, but eventually incurs high cost in
increased code size.
- Since we have a movmemsi pattern, the default MOVE_RATIO is 2, which
- is too low given that movmemsi will invoke a libcall. */
+ Since we have a cpymemsi pattern, the default MOVE_RATIO is 2, which
+ is too low given that cpymemsi will invoke a libcall. */
#define MOVE_RATIO(speed) ((speed) ? 9 : 3)
/* `CLEAR_RATIO (SPEED)`
;; Argument 2 is the length
;; Argument 3 is the alignment
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "memory_operand" "")
(match_operand:BLK 1 "memory_operand" ""))
(use (match_operand:SI 2 "general_operand" ""))
;; Block moves
-(define_expand "movmemsi"
+(define_expand "cpymemsi"
[(parallel [(set (match_operand:BLK 0 "" "")
(match_operand:BLK 1 "" ""))
(use (match_operand:SI 2 "arith_operand" ""))
#endif
/* If a memory-to-memory move would take MOVE_RATIO or more simple
- move-instruction sequences, we will do a movmem or libcall instead. */
+ move-instruction sequences, we will do a cpymem or libcall instead. */
#ifndef MOVE_RATIO
-#if defined (HAVE_movmemqi) || defined (HAVE_movmemhi) || defined (HAVE_movmemsi) || defined (HAVE_movmemdi) || defined (HAVE_movmemti)
+#if defined (HAVE_cpymemqi) || defined (HAVE_cpymemhi) || defined (HAVE_cpymemsi) || defined (HAVE_cpymemdi) || defined (HAVE_cpymemti)
#define MOVE_RATIO(speed) 2
#else
/* If we are optimizing for space (-Os), cut down the default move ratio. */
#endif
/* If a memory set (to value other than zero) operation would take
- SET_RATIO or more simple move-instruction sequences, we will do a movmem
+ SET_RATIO or more simple move-instruction sequences, we will do a setmem
or libcall instead. */
#ifndef SET_RATIO
#define SET_RATIO(speed) MOVE_RATIO (speed)
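The MOVE_RATIO/SET_RATIO comments above describe a three-way choice: a copy that would take fewer than MOVE_RATIO simple move pairs is expanded piecewise, otherwise the cpymem expander is tried, and a memcpy libcall is the last resort.  The fragment below is a sketch only: it condenses the flow of emit_block_move_hints from expr.c (a hunk of which appears later in this patch), the wrapper name emit_block_copy_sketch is invented, and the hint arguments and the libcall helper are approximations rather than the exact production code.

/* Sketch only: how MOVE_RATIO gates the block-copy strategy.  */
static void
emit_block_copy_sketch (rtx x, rtx y, rtx size, unsigned int align)
{
  if (CONST_INT_P (size) && can_move_by_pieces (INTVAL (size), align))
    /* Cheap enough: emit individual load/store pairs inline.  */
    move_by_pieces (x, y, INTVAL (size), align, RETURN_BEGIN);
  else if (emit_block_move_via_cpymem (x, y, size, align, align, -1,
                                       0, -1, -1))
    /* The target's cpymem<mode> pattern accepted the copy.  */
    ;
  else
    /* No profitable inline expansion: fall back to a memcpy call.  */
    emit_block_copy_via_libcall (x, y, size);
}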
@item @samp{one_cmpl@var{m}2}
Store the bitwise-complement of operand 1 into operand 0.
-@cindex @code{movmem@var{m}} instruction pattern
-@item @samp{movmem@var{m}}
-Block move instruction. The destination and source blocks of memory
+@cindex @code{cpymem@var{m}} instruction pattern
+@item @samp{cpymem@var{m}}
+Block copy instruction. The destination and source blocks of memory
are the first two operands, and both are @code{mem:BLK}s with an
address in mode @code{Pmode}.
-The number of bytes to move is the third operand, in mode @var{m}.
+The number of bytes to copy is the third operand, in mode @var{m}.
Usually, you specify @code{Pmode} for @var{m}. However, if you can
generate better code knowing the range of valid lengths is smaller than
those representable in a full Pmode pointer, you should provide
all cases. This expected alignment is also in bytes, just like operand 4.
Expected size, when unknown, is set to @code{(const_int -1)}.
-Descriptions of multiple @code{movmem@var{m}} patterns can only be
+Descriptions of multiple @code{cpymem@var{m}} patterns can only be
beneficial if the patterns for smaller modes have fewer restrictions
on their first, second and fourth operands. Note that the mode @var{m}
-in @code{movmem@var{m}} does not impose any restriction on the mode of
-individually moved data units in the block.
+in @code{cpymem@var{m}} does not impose any restriction on the mode of
+individually copied data units in the block.
-These patterns need not give special consideration to the possibility
-that the source and destination strings might overlap.
+The @code{cpymem@var{m}} patterns need not give special consideration
+to the possibility that the source and destination strings might
+overlap. These patterns are used to do inline expansion of
+@code{__builtin_memcpy}.
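As a concrete illustration of the operand layout documented above, the fragment below shows roughly how the middle end hands a copy to a target's cpymem@var{m} pattern.  It is a trimmed-down sketch in the style of emit_block_move_via_cpymem from expr.c (a hunk of which appears later in this patch): only the four mandatory operands are filled in, whereas the real code also supplies the expected-alignment, expected-size and min/max/probable-size operands when the pattern accepts nine operands.

/* Sketch only: filling the documented cpymem<mode> operands.  */
struct expand_operand ops[4];
create_fixed_operand (&ops[0], x);            /* operand 0: destination mem:BLK.  */
create_fixed_operand (&ops[1], y);            /* operand 1: source mem:BLK.  */
create_convert_operand_to (&ops[2], size, mode, true);   /* operand 2: byte count in mode m.  */
create_integer_operand (&ops[3], align / BITS_PER_UNIT); /* operand 3: shared alignment in bytes.  */
if (maybe_expand_insn (code, 4, ops))
  return true;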
@cindex @code{movstr} instruction pattern
@item @samp{movstr}
number of bytes to set is the second operand, in mode @var{m}. The value to
initialize the memory with is the third operand. Targets that only support the
clearing of memory should reject any value that is not the constant 0. See
-@samp{movmem@var{m}} for a discussion of the choice of mode.
+@samp{cpymem@var{m}} for a discussion of the choice of mode.
The fourth operand is the known alignment of the destination, in the form
of a @code{const_int} rtx. Thus, if the compiler knows that the
correctness, but it can be used for choosing proper code sequence for a
given size).
-The use for multiple @code{setmem@var{m}} is as for @code{movmem@var{m}}.
+The use for multiple @code{setmem@var{m}} is as for @code{cpymem@var{m}}.
@cindex @code{cmpstrn@var{m}} instruction pattern
@item @samp{cmpstrn@var{m}}
String compare instruction, with five operands. Operand 0 is the output;
it has mode @var{m}. The remaining four operands are like the operands
-of @samp{movmem@var{m}}. The two memory blocks specified are compared
+of @samp{cpymem@var{m}}. The two memory blocks specified are compared
byte by byte in lexicographic order starting at the beginning of each
string. The instruction is not allowed to prefetch more than one byte
at a time since either string may end in the first byte and reading past
instead. The @code{use} RTX is most commonly useful to describe that
a fixed register is implicitly used in an insn. It is also safe to use
in patterns where the compiler knows for other reasons that the result
-of the whole pattern is variable, such as @samp{movmem@var{m}} or
+of the whole pattern is variable, such as @samp{cpymem@var{m}} or
@samp{call} patterns.
During the reload phase, an insn that has a @code{use} as pattern
when copying a @code{struct}. The @code{by_pieces} infrastructure
implements such memory operations as a sequence of load, store or move
insns. Alternate strategies are to expand the
-@code{movmem} or @code{setmem} optabs, to emit a library call, or to emit
+@code{cpymem} or @code{setmem} optabs, to emit a library call, or to emit
unit-by-unit, loop-based operations.
This target hook should return true if, for a memory operation with a
Returning true for higher values of @var{size} can improve code generation
for speed if the target does not provide an implementation of the
-@code{movmem} or @code{setmem} standard names, if the @code{movmem} or
+@code{cpymem} or @code{setmem} standard names, if the @code{cpymem} or
@code{setmem} implementation would be more expensive than a sequence of
insns, or if the overhead of a library call would dominate that of
the body of the memory operation.
int cse_not_expected;
static bool block_move_libcall_safe_for_call_parm (void);
-static bool emit_block_move_via_movmem (rtx, rtx, rtx, unsigned, unsigned, HOST_WIDE_INT,
+static bool emit_block_move_via_cpymem (rtx, rtx, rtx, unsigned, unsigned, HOST_WIDE_INT,
unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
unsigned HOST_WIDE_INT);
static void emit_block_move_via_loop (rtx, rtx, rtx, unsigned);
if (CONST_INT_P (size) && can_move_by_pieces (INTVAL (size), align))
move_by_pieces (x, y, INTVAL (size), align, RETURN_BEGIN);
- else if (emit_block_move_via_movmem (x, y, size, align,
+ else if (emit_block_move_via_cpymem (x, y, size, align,
expected_align, expected_size,
min_size, max_size, probable_max_size))
;
return true;
}
-/* A subroutine of emit_block_move. Expand a movmem pattern;
+/* A subroutine of emit_block_move. Expand a cpymem pattern;
return true if successful. */
static bool
-emit_block_move_via_movmem (rtx x, rtx y, rtx size, unsigned int align,
+emit_block_move_via_cpymem (rtx x, rtx y, rtx size, unsigned int align,
unsigned int expected_align, HOST_WIDE_INT expected_size,
unsigned HOST_WIDE_INT min_size,
unsigned HOST_WIDE_INT max_size,
FOR_EACH_MODE_IN_CLASS (mode_iter, MODE_INT)
{
scalar_int_mode mode = mode_iter.require ();
- enum insn_code code = direct_optab_handler (movmem_optab, mode);
+ enum insn_code code = direct_optab_handler (cpymem_optab, mode);
if (code != CODE_FOR_nothing
/* We don't need MODE to be narrower than BITS_PER_HOST_WIDE_INT
OPTAB_D (cmpmem_optab, "cmpmem$a")
OPTAB_D (cmpstr_optab, "cmpstr$a")
OPTAB_D (cmpstrn_optab, "cmpstrn$a")
-OPTAB_D (movmem_optab, "movmem$a")
+OPTAB_D (cpymem_optab, "cpymem$a")
OPTAB_D (setmem_optab, "setmem$a")
OPTAB_D (strlen_optab, "strlen$a")
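In an optab name template the "$a" is replaced by the mode suffix, so this single cpymem_optab entry covers pattern names such as cpymemqi, cpymemhi, cpymemsi, cpymemdi and cpymemti; whichever of these a target defines is found through direct_optab_handler, exactly as in the FOR_EACH_MODE_IN_CLASS loop shown above.  A minimal, illustrative probe:

/* Sketch only: does the target provide a cpymemsi pattern?  */
enum insn_code icode = direct_optab_handler (cpymem_optab, SImode);
if (icode != CODE_FOR_nothing)
  {
    /* cpymemsi exists; its operand predicates may still reject a
       particular copy, in which case expansion falls back.  */
  }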
when copying a @code{struct}. The @code{by_pieces} infrastructure\n\
implements such memory operations as a sequence of load, store or move\n\
insns. Alternate strategies are to expand the\n\
-@code{movmem} or @code{setmem} optabs, to emit a library call, or to emit\n\
+@code{cpymem} or @code{setmem} optabs, to emit a library call, or to emit\n\
unit-by-unit, loop-based operations.\n\
\n\
This target hook should return true if, for a memory operation with a\n\
\n\
Returning true for higher values of @var{size} can improve code generation\n\
for speed if the target does not provide an implementation of the\n\
-@code{movmem} or @code{setmem} standard names, if the @code{movmem} or\n\
+@code{cpymem} or @code{setmem} standard names, if the @code{cpymem} or\n\
@code{setmem} implementation would be more expensive than a sequence of\n\
insns, or if the overhead of a library call would dominate that of\n\
the body of the memory operation.\n\
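A target that wants to bias this decision overrides the hook.  The sketch below is hypothetical: the function name and the 32-byte threshold are invented, but the signature follows default_use_by_pieces_infrastructure_p as declared in targhooks.h.

/* Sketch only: a made-up target preferring its cpymem expander for
   larger copies while keeping the default behaviour otherwise.  */
static bool
example_use_by_pieces_infrastructure_p (unsigned HOST_WIDE_INT size,
                                        unsigned int align,
                                        enum by_pieces_operation op,
                                        bool speed_p)
{
  if (op == MOVE_BY_PIECES && size >= 32)
    return false;  /* Let the cpymem<mode> pattern (or a libcall) handle it.  */
  return default_use_by_pieces_infrastructure_p (size, align, op, speed_p);
}

#undef  TARGET_USE_BY_PIECES_INFRASTRUCTURE_P
#define TARGET_USE_BY_PIECES_INFRASTRUCTURE_P \
  example_use_by_pieces_infrastructure_p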
#ifdef MOVE_RATIO
move_ratio = (unsigned int) MOVE_RATIO (speed_p);
#else
-#if defined (HAVE_movmemqi) || defined (HAVE_movmemhi) || defined (HAVE_movmemsi) || defined (HAVE_movmemdi) || defined (HAVE_movmemti)
+#if defined (HAVE_cpymemqi) || defined (HAVE_cpymemhi) || defined (HAVE_cpymemsi) || defined (HAVE_cpymemdi) || defined (HAVE_cpymemti)
move_ratio = 2;
-#else /* No movmem patterns, pick a default. */
+#else /* No cpymem patterns, pick a default. */
move_ratio = ((speed_p) ? 15 : 3);
#endif
#endif
}
/* Return TRUE if the move_by_pieces/set_by_pieces infrastructure should be
- used; return FALSE if the movmem/setmem optab should be expanded, or
+ used; return FALSE if the cpymem/setmem optab should be expanded, or
a call to memcpy emitted. */
bool