From 58c2ad42a89438281327c74afb3f7483ffe22514 Mon Sep 17 00:00:00 2001
From: Luis Machado
Date: Tue, 22 May 2018 18:35:15 +0000
Subject: [PATCH] [AArch64] Recognize a missed usage of a sbfiz instruction

A customer reported the following missed opportunities to combine a couple
of instructions into an sbfiz.

int sbfiz32 (int x)
{
  return x << 29 >> 10;
}

long sbfiz64 (long x)
{
  return x << 58 >> 20;
}

This gets converted to the following pattern:

(set (reg:SI 98)
    (ashift:SI (sign_extend:SI (reg:HI 0 x0 [ xD.3334 ]))
	(const_int 6 [0x6])))

Currently, gcc generates the following:

sbfiz32:
	lsl w0, w0, 29
	asr w0, w0, 10
	ret

sbfiz64:
	lsl x0, x0, 58
	asr x0, x0, 20
	ret

It could generate this instead:

sbfiz32:
	sbfiz w0, w0, 19, 3
	ret

sbfiz64:
	sbfiz x0, x0, 38, 6
	ret

The unsigned versions already generate ubfiz for the same code, so the lack
of an sbfiz pattern may have been an oversight.

This particular sbfiz pattern shows up in both CPU2006 (~ 80 hits) and
CPU2017 (~ 280 hits). It's not a lot, but it seems beneficial in any case.
No significant performance differences were observed, probably due to the
small number of occurrences or to the cases falling outside hot areas.

gcc/ChangeLog:

2018-05-22  Luis Machado

gcc/
	* config/aarch64/aarch64.md (*ashift<mode>_extv_bfiz): New pattern.

gcc/testsuite/ChangeLog:

2018-05-22  Luis Machado

gcc/testsuite/
	* gcc.target/aarch64/lsl_asr_sbfiz.c: New test.

From-SVN: r260546
---
 gcc/ChangeLog                                      |  4 ++++
 gcc/config/aarch64/aarch64.md                      | 14 +++++++++++
 gcc/testsuite/ChangeLog                            |  4 ++++
 .../gcc.target/aarch64/lsl_asr_sbfiz.c             | 24 +++++++++++++++++++
 4 files changed, 46 insertions(+)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/lsl_asr_sbfiz.c

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 7ba041be303..d2e0b06741e 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,7 @@
+2018-05-22  Luis Machado
+
+	* config/aarch64/aarch64.md (*ashift<mode>_extv_bfiz): New pattern.
+
 2018-05-22  Martin Sebor
 
 	* calls.c (maybe_warn_nonstring_arg): Fix a typo in a comment.
diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
index 8a894f8ac00..7a0e34d624b 100644
--- a/gcc/config/aarch64/aarch64.md
+++ b/gcc/config/aarch64/aarch64.md
@@ -4790,6 +4790,20 @@
   [(set_attr "type" "bfx")]
 )
 
+;; Match sbfiz pattern in a shift left + shift right operation.
+
+(define_insn "*ashift<mode>_extv_bfiz"
+  [(set (match_operand:GPI 0 "register_operand" "=r")
+	(ashift:GPI (sign_extract:GPI (match_operand:GPI 1 "register_operand" "r")
+				      (match_operand 2 "aarch64_simd_shift_imm_offset_<mode>" "n")
+				      (const_int 0))
+		     (match_operand 3 "aarch64_simd_shift_imm_<mode>" "n")))]
+  "IN_RANGE (INTVAL (operands[2]) + INTVAL (operands[3]),
+	     1, GET_MODE_BITSIZE (<MODE>mode) - 1)"
+  "sbfiz\\t%<w>0, %<w>1, %3, %2"
+  [(set_attr "type" "bfx")]
+)
+
 ;; When the bit position and width of the equivalent extraction add up to 32
 ;; we can use a W-reg LSL instruction taking advantage of the implicit
 ;; zero-extension of the X-reg.
diff --git a/gcc/testsuite/ChangeLog b/gcc/testsuite/ChangeLog
index 12079ae57e2..5ab91fd0b88 100644
--- a/gcc/testsuite/ChangeLog
+++ b/gcc/testsuite/ChangeLog
@@ -1,3 +1,7 @@
+2018-05-22  Luis Machado
+
+	* gcc.target/aarch64/lsl_asr_sbfiz.c: New test.
+
 2018-05-22  Janus Weil
 
 	PR fortran/85841
diff --git a/gcc/testsuite/gcc.target/aarch64/lsl_asr_sbfiz.c b/gcc/testsuite/gcc.target/aarch64/lsl_asr_sbfiz.c
new file mode 100644
index 00000000000..106433d9131
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/lsl_asr_sbfiz.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-options "-O3" } */
+
+/* Check that a LSL followed by an ASR can be combined into a single SBFIZ
+   instruction.  */
+
+/* Using W-reg */
+
+int
+sbfiz32 (int x)
+{
+  return x << 29 >> 10;
+}
+
+/* Using X-reg */
+
+long long
+sbfiz64 (long long x)
+{
+  return x << 58 >> 20;
+}
+
+/* { dg-final { scan-assembler "sbfiz\tw" } } */
+/* { dg-final { scan-assembler "sbfiz\tx" } } */
-- 
2.30.2
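
Side note (not part of the patch): the sbfiz operands follow directly from the two
shift amounts. For x << L >> R in an N-bit type, the pair sign-extracts the low
N - L bits of x and re-inserts them at bit position L - R, giving 19 and 3 for the
32-bit example and 38 and 6 for the 64-bit one. Below is a minimal C sketch checking
that equivalence for the 32-bit case; the helper names are illustrative, and it
assumes GCC's usual arithmetic right shift on signed values.

#include <assert.h>
#include <stdint.h>

/* Left-shifting by 29 and arithmetic-right-shifting by 10 keeps the low
   32 - 29 = 3 bits of x, sign-extends them, and places them at bit
   position 29 - 10 = 19, which is what "sbfiz w0, w0, 19, 3" computes.  */

static int32_t
shift_pair_32 (int32_t x)
{
  /* The left shift is done on an unsigned value to avoid signed-overflow
     UB; the arithmetic right shift relies on GCC's implementation-defined
     behaviour, just as the test case in the patch does.  */
  return (int32_t) ((uint32_t) x << 29) >> 10;
}

static int32_t
sbfiz_equiv_32 (int32_t x)
{
  int32_t field = (int32_t) ((uint32_t) x << 29) >> 29; /* Sign-extract the low 3 bits.  */
  return (int32_t) ((uint32_t) field << 19);            /* Insert them at position 19.  */
}

int
main (void)
{
  for (int32_t x = -4096; x <= 4096; x++)
    assert (shift_pair_32 (x) == sbfiz_equiv_32 (x));
  return 0;
}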
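
A second side note, also not part of the patch: the message mentions that the unsigned
version of the same code already combines into ubfiz. A sketch of that counterpart is
below; the quoted assembly is an expectation under -O2/-O3, not verified output, and may
vary by compiler version.

/* Expected to combine the two logical shifts into "ubfiz w0, w0, 19, 3".  */
unsigned int
ubfiz32 (unsigned int x)
{
  return x << 29 >> 10;
}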