Thumb-2 code now uses the Arm implementation of legitimize_address.
That code has a case to handle addresses that are absolute CONST_INT
values, which is a common idiom on deeply embedded targets (e.g.
void *p = (void*)0x12345678). Thumb-2 stores allow a positive
immediate offset of up to 4095, but a negative immediate offset only
down to -255, so we want to avoid forming a CSE base that will then
be used with a negative offset.
This was originally reported upstream in
https://gcc.gnu.org/ml/gcc-help/2019-10/msg00122.html
For example,
#include <stdint.h>

void test1(void) {
  volatile uint32_t * const p = (uint32_t *) 0x43fe1800;
  p[3] = 1;
  p[4] = 2;
  p[1] = 3;
  p[7] = 4;
  p[0] = 6;
}
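(Each element is 4 bytes, so p[0], p[1], p[3], p[4] and p[7] are byte
offsets 0, 4, 12, 16 and 28 from 0x43fe1800.)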
With the new code, instead of:

        ldr r3, .L2
        subw r2, r3, #2035
        movs r1, #1
        str r1, [r2]
        subw r2, r3, #2031
        movs r1, #2
        str r1, [r2]
        subw r2, r3, #2043
        movs r1, #3
        str r1, [r2]
        subw r2, r3, #2019
        movs r1, #4
        subw r3, r3, #2047
        str r1, [r2]
        movs r2, #6
        str r2, [r3]
        bx lr
we now get:

        ldr r3, .L2
        movs r2, #1
        str r2, [r3, #2060]
        movs r2, #2
        str r2, [r3, #2064]
        movs r2, #3
        str r2, [r3, #2052]
        movs r2, #4
        str r2, [r3, #2076]
        movs r2, #6
        str r2, [r3, #2048]
        bx lr
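As a hand-worked trace of the arithmetic in the code below (not part
of the patch itself), take the p[3] store at address 0x43fe180c.  For
SImode, bits = 12 and mask = 0xfff, so:

        base  = 0x43fe180c & ~0xfff = 0x43fe1000
        index = 0x43fe180c &  0xfff = 0x80c (2060)

bit_count (0x43fe1000) is 11, which exceeds (32 - 12)/2 = 10, so the
old code took the negative-index path: base |= mask gives 0x43fe1fff
(the constant at .L2 in the first sequence) and index -= mask gives
2060 - 4095 = -2035, hence subw r2, r3, #2035.  The patch keeps base
0x43fe1000 and the positive index 2060 on Thumb-2, which fits the
12-bit positive offset of str directly; -2035 is far outside the
8-bit negative range and forced a subw per access.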
        * config/arm/arm.c (arm_legitimize_address): Don't form negative
        offsets from a CONST_INT address when TARGET_THUMB2.
From-SVN: r277677
+2019-10-31  Richard Earnshaw  <rearnsha@arm.com>
+
+        * config/arm/arm.c (arm_legitimize_address): Don't form negative offsets
+        from a CONST_INT address when TARGET_THUMB2.
+
 2019-10-31  Richard Earnshaw  <rearnsha@arm.com>
 
         * config/arm/arm.md (add_not_cin): New insn.
       HOST_WIDE_INT mask, base, index;
       rtx base_reg;
 
-      /* ldr and ldrb can use a 12-bit index, ldrsb and the rest can only
-         use a 8-bit index.  So let's use a 12-bit index for SImode only and
-         hope that arm_gen_constant will enable ldrb to use more bits.  */
+      /* LDR and LDRB can use a 12-bit index, LDRSB and the rest can
+         only use an 8-bit index.  So let's use a 12-bit index for
+         SImode only and hope that arm_gen_constant will enable LDRB
+         to use more bits.  */
       bits = (mode == SImode) ? 12 : 8;
       mask = (1 << bits) - 1;
       base = INTVAL (x) & ~mask;
       index = INTVAL (x) & mask;
-      if (bit_count (base & 0xffffffff) > (32 - bits)/2)
-        {
-          /* It'll most probably be more efficient to generate the base
-             with more bits set and use a negative index instead.  */
+      if (TARGET_ARM && bit_count (base & 0xffffffff) > (32 - bits)/2)
+        {
+          /* It'll most probably be more efficient to generate the
+             base with more bits set and use a negative index instead.
+             Don't do this for Thumb as negative offsets are much more
+             limited.  */
           base |= mask;
           index -= mask;
         }
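
For illustration only, here is a standalone C sketch of the split the
code above performs.  split_address is a made-up name,
__builtin_popcount stands in for GCC's bit_count, and the thumb2 flag
models the TARGET_THUMB2 check; none of this is code from the patch.

#include <stdio.h>
#include <stdint.h>

/* Split an absolute address into a CSE-able base plus a small index,
   mirroring arm_legitimize_address above.  When thumb2 is nonzero,
   skip the negative-index trick, as the patch does.  */
static void
split_address (uint32_t addr, int bits, int thumb2)
{
  uint32_t mask = (1u << bits) - 1;
  uint32_t base = addr & ~mask;
  int index = addr & mask;

  if (!thumb2 && __builtin_popcount (base) > (32 - bits) / 2)
    {
      /* Denser base constant, negative index.  */
      base |= mask;
      index -= mask;
    }
  printf ("0x%08x -> base 0x%08x, index %d\n",
          (unsigned) addr, (unsigned) base, index);
}

int
main (void)
{
  split_address (0x43fe180c, 12, 0);  /* Arm:     base 0x43fe1fff, index -2035 */
  split_address (0x43fe180c, 12, 1);  /* Thumb-2: base 0x43fe1000, index 2060 */
  return 0;
}

Compiled with GCC or Clang, this prints the two splits that match the
subw #2035 and [r3, #2060] forms in the assembly above.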