author	Wilco Dijkstra <wdijkstr@arm.com>
	Thu, 26 May 2016 12:12:20 +0000 (12:12 +0000)
committer	Wilco Dijkstra <wilco@gcc.gnu.org>
	Thu, 26 May 2016 12:12:20 +0000 (12:12 +0000)

SIMD operations like combine prefer to have their operands in FP registers,
so increase the cost of integer registers slightly to avoid unnecessary int<->FP
moves. This improves register allocation of scalar SIMD operations.
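As an illustrative sketch (not part of the commit), the kind of scalar code affected looks like the following: a 64-bit value concatenated with zero into a 128-bit vector, which on AArch64 can match a combinez-style pattern. When the value already lives in a SIMD/FP (`w`) register, keeping it there avoids an int<->FP move; the struct here just models the vec_concat semantics portably.

```c
#include <stdint.h>

/* Hypothetical model of vec_concat (x, 0) on little-endian AArch64:
   the low 64 bits hold the operand, the high 64 bits are zero.
   The names v2u64/combine_zero are made up for illustration. */
typedef struct { uint64_t lo, hi; } v2u64;

v2u64 combine_zero(uint64_t x) {
    v2u64 v;
    v.lo = x;   /* operand 1 of the pattern */
    v.hi = 0;   /* operand 2: aarch64_simd_imm_zero */
    return v;
}
```

With the `?` hint, the register allocator prefers to materialize `x` in a `w` register (or load it straight from memory) rather than bounce it through a general-purpose register.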

        * config/aarch64/aarch64-simd.md (aarch64_combinez):
        Add ? to integer variant.
        (aarch64_combinez_be): Likewise.

From-SVN: r236770

gcc/ChangeLog
gcc/config/aarch64/aarch64-simd.md

index b1cd89ec20951eb0abdcd2f3e2d4c5ed2d4ea400..b52e581d60af7492d7e287e4724b09670bd93d38 100644 (file)
@@ -1,3 +1,9 @@
+2016-05-26  Wilco Dijkstra  <wdijkstr@arm.com>
+
+       * config/aarch64/aarch64-simd.md (aarch64_combinez):
+       Add ? to integer variant.
+       (aarch64_combinez_be): Likewise.
+
 2016-05-26  Jakub Jelinek  <jakub@redhat.com>
 
        * config/i386/sse.md (*vcvtps2ph_store<mask_name>): Use v constraint
index 59a578f5937a240b325af22021bbd662230ed404..3318c2155f551c4ccd35188b2bedee5bf14ba2b0 100644 (file)
 (define_insn "*aarch64_combinez<mode>"
   [(set (match_operand:<VDBL> 0 "register_operand" "=w,w,w")
         (vec_concat:<VDBL>
-          (match_operand:VD_BHSI 1 "general_operand" "w,r,m")
+          (match_operand:VD_BHSI 1 "general_operand" "w,?r,m")
           (match_operand:VD_BHSI 2 "aarch64_simd_imm_zero" "Dz,Dz,Dz")))]
   "TARGET_SIMD && !BYTES_BIG_ENDIAN"
   "@
   [(set (match_operand:<VDBL> 0 "register_operand" "=w,w,w")
         (vec_concat:<VDBL>
           (match_operand:VD_BHSI 2 "aarch64_simd_imm_zero" "Dz,Dz,Dz")
-          (match_operand:VD_BHSI 1 "general_operand" "w,r,m")))]
+          (match_operand:VD_BHSI 1 "general_operand" "w,?r,m")))]
   "TARGET_SIMD && BYTES_BIG_ENDIAN"
   "@
    mov\\t%0.8b, %1.8b
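For readers unfamiliar with constraint modifiers: `?` disparages an alternative slightly during register allocation, so the patch's change can be read per alternative roughly as follows (annotation added here for illustration; the operand line itself is from the patch).

```
;; "w,?r,m" read per alternative:
;;   w  - operand already in a SIMD/FP register: preferred
;;   ?r - operand in a general (integer) register: still valid, but the
;;        '?' adds a small cost so the allocator avoids forcing an
;;        int->FP transfer when a 'w' register is available
;;   m  - operand in memory: loaded directly into the SIMD register
(match_operand:VD_BHSI 1 "general_operand" "w,?r,m")
```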