widening_mul: Recognize another form of ADD_OVERFLOW [PR96272]
The following patch recognizes another form of hand-written
__builtin_add_overflow (this time the _p variant), in particular when,
for unsigned x and y, the code does
if (x > ~0U - y)
or
if (x <= ~0U - y)
it can be optimized (provided the ~0U - y subtraction, which is folded
into ~y, is single use) into
into
if (__builtin_add_overflow_p (x, y, 0U))
or
if (!__builtin_add_overflow_p (x, y, 0U))
and better code can be generated, e.g. for the first function in the
testcase:
-       movl    %esi, %eax
        addl    %edi, %esi
-       notl    %eax
-       cmpl    %edi, %eax
-       movl    $-1, %eax
-       cmovnb  %esi, %eax
+       jc      .L3
+       movl    %esi, %eax
+       ret
+.L3:
+       orl     $-1, %eax
        ret
on x86_64.  As for the jumps vs. conditional move case, that is some CE
issue with complex branch patterns we should fix up no matter what, but
in this case I'm actually not sure whether the branchy code isn't better;
overflow is something that isn't that common.
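
For illustration, here is a minimal C sketch of the hand-written pattern
and its builtin equivalent; these functions are illustrative only and are
not the actual contents of the pr96272.c testcase:

unsigned
foo (unsigned x, unsigned y)
{
  /* Hand-written saturating add; the comparison is the overflow check
     the patch now recognizes.  */
  if (x > ~0U - y)
    return -1U;
  return x + y;
}

unsigned
bar (unsigned x, unsigned y)
{
  /* The same check spelled with the builtin directly.  */
  if (__builtin_add_overflow_p (x, y, 0U))
    return -1U;
  return x + y;
}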
2020-12-12  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/96272
	* tree-ssa-math-opts.c (uaddsub_overflow_check_p): Add OTHER argument.
	Handle BIT_NOT_EXPR.
	(match_uaddsub_overflow): Optimize unsigned a > ~b into
	__imag__ .ADD_OVERFLOW (a, b).
	(math_opts_dom_walker::after_dom_children): Call match_uaddsub_overflow
	even for BIT_NOT_EXPR.

	* gcc.dg/tree-ssa/pr96272.c: New test.
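
To make the match_uaddsub_overflow entry above more concrete, here is a
rough sketch of the intended GIMPLE rewrite; the SSA names and statement
shapes are made up for illustration, not taken from an actual dump:

  /* Before: ~0U - y has already been folded into ~y.  */
  _1 = ~y_3(D);
  if (x_2(D) > _1)
    ...

  /* After: the comparison now tests the overflow flag (the imaginary
     part) of the .ADD_OVERFLOW internal call.  */
  _4 = .ADD_OVERFLOW (x_2(D), y_3(D));
  _5 = IMAGPART_EXPR <_4>;
  if (_5 != 0)
    ...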