[AArch64] Don't split 64-bit constant stores to volatile location
The optimisation that transforms:
  typedef unsigned long long u64;

  void bar(u64 *x)
  {
    *x = 0xabcdef10abcdef10;
  }
from:
        mov     x1, 61200
        movk    x1, 0xabcd, lsl 16
        movk    x1, 0xef10, lsl 32
        movk    x1, 0xabcd, lsl 48
        str     x1, [x0]
into:
        mov     w1, 61200
        movk    w1, 0xabcd, lsl 16
        stp     w1, w1, [x0]
ends up producing two distinct stores if the destination is volatile:
  void bar(u64 *x)
  {
    *(volatile u64 *)x = 0xabcdef10abcdef10;
  }
        mov     w1, 61200
        movk    w1, 0xabcd, lsl 16
        str     w1, [x0]
        str     w1, [x0, 4]
because the two STRs never get merged back into an STP.  It is questionable whether using an STP on a volatile location is valid in the first place.
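For reference, the split into two word stores is only possible because the constant's 32-bit halves coincide, so a single W register covers both.  A small self-contained check of that arithmetic (illustrative only, not part of the patch):

  #include <stdint.h>
  #include <assert.h>

  int main (void)
  {
    uint64_t c = 0xabcdef10abcdef10ULL;
    /* Low and high halves are both 0xabcdef10, so storing one
       W register twice reproduces the full 64-bit constant.  */
    assert ((uint32_t) c == (uint32_t) (c >> 32));
    return 0;
  }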
To avoid unnecessary pain in a context that is unlikely to be performance-critical [1] (code using volatile), this patch avoids the transformation for volatile destinations, so we keep the original single STR-X.
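Concretely, the fix is a guard in the mov<mode> expander that refuses the split when the destination MEM is volatile.  A minimal sketch of the check, with the surrounding expander code paraphrased (only MEM_VOLATILE_P and aarch64_split_dimode_const_store are taken from the patch; the other conditions are illustrative):

  /* In the mov<mode> expander in config/aarch64/aarch64.md (sketch).
     operands[0] is the destination, operands[1] the source.  */
  if (MEM_P (operands[0])
      && CONST_INT_P (operands[1])
      /* New in this patch: never split a 64-bit constant store to a
         volatile location; keep the MOV/MOVK sequence and one STR-X.  */
      && !MEM_VOLATILE_P (operands[0])
      && aarch64_split_dimode_const_store (operands[0], operands[1]))
    DONE;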
Bootstrapped and tested on aarch64-none-linux-gnu.
[1] https://lore.kernel.org/lkml/20190821103200.kpufwtviqhpbuv2n@willie-the-truck/
	* config/aarch64/aarch64.md (mov<mode>): Don't call
	aarch64_split_dimode_const_store on volatile MEM.

	* gcc.target/aarch64/nosplit-di-const-volatile_1.c: New test.
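The new test can be sketched along these lines (the scan-assembler patterns shown are illustrative; the committed test's exact directives may differ):

  /* { dg-do compile } */
  /* { dg-options "-O2" } */

  typedef unsigned long long u64;

  void bar (u64 *x)
  {
    *(volatile u64 *)x = 0xabcdef10abcdef10;
  }

  /* Expect exactly one X-register store and no W-register stores.  */
  /* { dg-final { scan-assembler-times "str\tx\[0-9\]+" 1 } } */
  /* { dg-final { scan-assembler-not "str\tw\[0-9\]+" } } */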
From-SVN: r276098