It is tricky to get right. The aarch64 port handles it by adding the appropriate
lane-swapping operations during expansion.
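For reference, here is a minimal C model of what a two-vector load_lanes (NEON vld2)
computes; the helper name is mine, for illustration only:

/* Conceptual model of a two-vector load_lanes (vld2) on pairs of ints:
   consecutive { x, y } pairs in memory are de-interleaved into one
   vector of x values and one of y values.  On big-endian the mapping
   between memory order and in-register lane numbering changes, which
   is why the expanders need extra permutes to keep this model true.  */
void
vld2_model (const int *mem, int x[4], int y[4])
{
  for (int i = 0; i < 4; i++)
    {
      x[i] = mem[2 * i];      /* lane i of the first vector.  */
      y[i] = mem[2 * i + 1];  /* lane i of the second vector.  */
    }
}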
I'd like to do the same lane-swapping on arm eventually, but we'd need to port and
validate the VTBL-generating code and add it in all the right places, and I'm not
comfortable doing that for GCC 8. I am, however, keen on getting the wrong-code bug fixed.
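To give a flavour of the fixup involved, VTBL can permute the lanes of a D register;
a rough intrinsics sketch (the helper name and index vector are my own illustration,
not the aarch64 code that would be ported):

#include <arm_neon.h>

/* Reverse the four 16-bit lanes of a D register with VTBL, the kind of
   lane-swap an expander would have to emit on big-endian.  Each pair of
   byte indices selects one 16-bit lane, back to front.  */
uint16x4_t
reverse_lanes_u16 (uint16x4_t x)
{
  const uint8x8_t idx = { 6, 7, 4, 5, 2, 3, 0, 1 };
  return vreinterpret_u16_u8 (vtbl1_u8 (vreinterpret_u8_u16 (x), idx));
}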
As I say in the PR, vectorisation on armeb is already severely restricted (we disable
many patterns on BYTES_BIG_ENDIAN), and the load/store_lanes patterns really were not
working correctly at all, so disabling them is not a radical approach.
The way to do that is to return false in ARRAY_MODE_SUPPORTED_P for BYTES_BIG_ENDIAN.
Bootstrapped and tested on arm-none-linux-gnueabihf.
Also tested on armeb-none-eabi.
PR target/82518
* config/arm/arm.c (arm_array_mode_supported_p): Return false for
BYTES_BIG_ENDIAN.
* lib/target-supports.exp (check_effective_target_vect_load_lanes):
Disable for armeb targets.
* gcc.target/arm/pr82518.c: New test.
From-SVN: r258687
 arm_array_mode_supported_p (machine_mode mode,
                             unsigned HOST_WIDE_INT nelems)
 {
-  if (TARGET_NEON
+  /* We don't want to enable interleaved loads and stores for BYTES_BIG_ENDIAN
+     for now, as the lane-swapping logic needs to be extended in the expanders.
+     See PR target/82518.  */
+  if (TARGET_NEON && !BYTES_BIG_ENDIAN
       && (VALID_NEON_DREG_MODE (mode) || VALID_NEON_QREG_MODE (mode))
       && (nelems >= 2 && nelems <= 4))
     return true;
--- /dev/null
+/* { dg-do run } */
+/* { dg-require-effective-target arm_neon_ok } */
+/* { dg-additional-options "-O3 -fno-inline -std=gnu99" } */
+/* { dg-add-options arm_neon } */
+
+typedef struct { int x, y; } X;
+
+void f4 (X *p, int n)
+{
+  for (int i = 0; i < n; i++)
+    {
+      p[i].x = i;
+      p[i].y = i + 1;
+    }
+}
+
+__attribute__ ((aligned (16))) X arr[100];
+
+int main (void)
+{
+  volatile int fail = 0;
+  f4 (arr, 100);
+  for (int i = 0; i < 100; i++)
+    if (arr[i].y != i + 1 || arr[i].x != i)
+      fail = 1;
+  if (fail)
+    __builtin_abort ();
+
+  return 0;
+}
     verbose "check_effective_target_vect_load_lanes: using cached result" 2
 } else {
     set et_vect_load_lanes 0
-    if { ([istarget arm*-*-*] && [check_effective_target_arm_neon_ok])
+    # We don't support load_lanes correctly on big-endian arm.
+    if { ([istarget arm-*-*] && [check_effective_target_arm_neon_ok])
          || [istarget aarch64*-*-*] } {
         set et_vect_load_lanes 1
     }
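With vect_load_lanes now off for armeb, testsuite cases that depend on working
load/store_lanes can guard themselves in the usual DejaGnu way; a standard pair of
directives, shown here only as a usage note:

/* { dg-do run } */
/* { dg-require-effective-target vect_load_lanes } */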