gcc.git
visium.md (nop): Tweak comment.
Eric Botcazou [Tue, 16 Jan 2018 20:54:25 +0000 (20:54 +0000)]
visium.md (nop): Tweak comment.

* config/visium/visium.md (nop): Tweak comment.
(hazard_nop): Likewise.

From-SVN: r256758

re PR testsuite/77734 (FAIL: gcc.dg/plugin/must-tail-call-1.c -fplugin=./must_tail_ca...
Eric Botcazou [Tue, 16 Jan 2018 20:48:05 +0000 (20:48 +0000)]
re PR testsuite/77734 (FAIL: gcc.dg/plugin/must-tail-call-1.c -fplugin=./must_tail_call_plugin.so (test  for excess errors))

PR testsuite/77734
* gcc.dg/plugin/must-tail-call-1.c: Pass -fdelayed-branch on SPARC.

From-SVN: r256756

* testsuite/17_intro/names.cc: Undefine 'y' on SPARC/Linux.
Eric Botcazou [Tue, 16 Jan 2018 20:40:09 +0000 (20:40 +0000)]
* testsuite/17_intro/names.cc: Undefine 'y' on SPARC/Linux.

From-SVN: r256754

rs6000.c (rs6000_opt_vars): Add entry for -mspeculate-indirect-jumps.
Bill Schmidt [Tue, 16 Jan 2018 16:49:39 +0000 (16:49 +0000)]
rs6000.c (rs6000_opt_vars): Add entry for -mspeculate-indirect-jumps.

[gcc]

2018-01-16  Bill Schmidt  <wschmidt@linux.vnet.ibm.com>

* config/rs6000/rs6000.c (rs6000_opt_vars): Add entry for
-mspeculate-indirect-jumps.
* config/rs6000/rs6000.md (*call_indirect_elfv2<mode>): Disable
for -mno-speculate-indirect-jumps.
(*call_indirect_elfv2<mode>_nospec): New define_insn.
(*call_value_indirect_elfv2<mode>): Disable for
-mno-speculate-indirect-jumps.
(*call_value_indirect_elfv2<mode>_nospec): New define_insn.
(indirect_jump): Emit different RTL for
-mno-speculate-indirect-jumps.
(*indirect_jump<mode>): Disable for
-mno-speculate-indirect-jumps.
(*indirect_jump<mode>_nospec): New define_insn.
(tablejump): Emit different RTL for
-mno-speculate-indirect-jumps.
(tablejumpsi): Disable for -mno-speculate-indirect-jumps.
(tablejumpsi_nospec): New define_expand.
(tablejumpdi): Disable for -mno-speculate-indirect-jumps.
(tablejumpdi_nospec): New define_expand.
(*tablejump<mode>_internal1): Disable for
-mno-speculate-indirect-jumps.
(*tablejump<mode>_internal1_nospec): New define_insn.
* config/rs6000/rs6000.opt (mspeculate-indirect-jumps): New
option.

[gcc/testsuite]

2018-01-16  Bill Schmidt  <wschmidt@linux.vnet.ibm.com>

* gcc.target/powerpc/safe-indirect-jump-1.c: New file.
* gcc.target/powerpc/safe-indirect-jump-2.c: New file.
* gcc.target/powerpc/safe-indirect-jump-3.c: New file.
* gcc.target/powerpc/safe-indirect-jump-4.c: New file.
* gcc.target/powerpc/safe-indirect-jump-5.c: New file.
* gcc.target/powerpc/safe-indirect-jump-6.c: New file.

From-SVN: r256753

caller-save.c (insert_save): Drop unnecessary parameter.
Artyom Skrobov [Tue, 16 Jan 2018 16:28:36 +0000 (09:28 -0700)]
caller-save.c (insert_save): Drop unnecessary parameter.

        * caller-save.c (insert_save): Drop unnecessary parameter.  All
callers updated.

From-SVN: r256751

re PR libgomp/83590 ([nvptx] openacc reduction C regressions)
Jakub Jelinek [Tue, 16 Jan 2018 15:18:24 +0000 (16:18 +0100)]
re PR libgomp/83590 ([nvptx] openacc reduction C regressions)

PR libgomp/83590
* gimplify.c (gimplify_one_sizepos): For is_gimple_constant (expr)
return early, inline manually is_gimple_sizepos.  Make sure if we
call gimplify_expr we don't end up with a gimple constant.
* tree.c (variably_modified_type_p): Don't return true for
is_gimple_constant (_t).  Inline manually is_gimple_sizepos.
* gimplify.h (is_gimple_sizepos): Remove.

Co-Authored-By: Richard Biener <rguenther@suse.de>
From-SVN: r256748

Two fixes for live-out SLP inductions (PR 83857)
Richard Sandiford [Tue, 16 Jan 2018 15:13:32 +0000 (15:13 +0000)]
Two fixes for live-out SLP inductions (PR 83857)

vect_analyze_loop_operations was calling vectorizable_live_operation
for all live-out phis, which led to a bogus ncopies calculation in
the pure SLP case.  I think v_a_l_o should only be passing phis
that are vectorised using normal loop vectorisation, since
vect_slp_analyze_node_operations handles the SLP side (and knows
the correct slp_index and slp_node arguments to pass in, via
vect_analyze_stmt).

With that fixed we hit an older bug that vectorizable_live_operation
didn't handle live-out SLP inductions.  Fixed by using gimple_phi_result
rather than gimple_get_lhs for phis.
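
A minimal sketch of the kind of loop involved (illustrative only, not
the actual pr83857.c testcase): the induction variable's final value is
used after an SLP-vectorised loop.

int
f (int *a, int n)
{
  int i = 0;
  for (; i < n; i += 2)
    {
      a[i] = 1;      /* the paired stores give an SLP group */
      a[i + 1] = 2;
    }
  return i;          /* live-out use of the induction variable */
}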

2018-01-16  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
PR tree-optimization/83857
* tree-vect-loop.c (vect_analyze_loop_operations): Don't call
vectorizable_live_operation for pure SLP statements.
(vectorizable_live_operation): Handle PHIs.

gcc/testsuite/
PR tree-optimization/83857
* gcc.dg/vect/pr83857.c: New test.

From-SVN: r256747

re PR tree-optimization/83867 (ICE: Segmentation fault in nested_in_vect_loop_p)
Richard Biener [Tue, 16 Jan 2018 15:13:05 +0000 (15:13 +0000)]
re PR tree-optimization/83867 (ICE: Segmentation fault in nested_in_vect_loop_p)

2018-01-16  Richard Biener  <rguenther@suse.de>

PR tree-optimization/83867
* tree-vect-stmts.c (vect_transform_stmt): Precompute
nested_in_vect_loop_p since the scalar stmt may get invalidated.

* gcc.dg/vect/pr83867.c: New testcase.

From-SVN: r256746

re PR c/83844 (ICE with warn_if_not_aligned attribute)
Jakub Jelinek [Tue, 16 Jan 2018 15:08:32 +0000 (16:08 +0100)]
re PR c/83844 (ICE with warn_if_not_aligned attribute)

PR c/83844
* stor-layout.c (handle_warn_if_not_align): Use byte_position and
multiple_of_p instead of unchecked tree_to_uhwi and UHWI check.
If off is not INTEGER_CST, issue a may not be aligned warning
rather than isn't aligned.  Use isn%'t rather than isn't.
* fold-const.c (multiple_of_p) <case BIT_AND_EXPR>: Don't fall through
into MULT_EXPR.
<case MULT_EXPR>: Improve the case when bottom and one of the
MULT_EXPR operands are INTEGER_CSTs and bottom is multiple of that
operand, in that case check if the other operand is multiple of
bottom divided by the INTEGER_CST operand.

* gcc.dg/pr83844.c: New test.

From-SVN: r256745

Move pa.h FUNCTION_ARG_SIZE to pa.c (PR83858)
Richard Sandiford [Tue, 16 Jan 2018 14:47:49 +0000 (14:47 +0000)]
Move pa.h FUNCTION_ARG_SIZE to pa.c (PR83858)

The port-local FUNCTION_ARG_SIZE:

  ((((MODE) != BLKmode \
     ? (HOST_WIDE_INT) GET_MODE_SIZE (MODE) \
     : int_size_in_bytes (TYPE)) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)

is used by code in pa.c and by ASM_DECLARE_FUNCTION_NAME in som.h.
Treating GET_MODE_SIZE as a constant is OK for the former but not
the latter, which is used in target-independent code.  This caused
a build failure on hppa2.0w-hp-hpux11.11.
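
As a sketch, the out-of-line replacement can mirror the macro directly
(GCC-internal types such as machine_mode and HOST_WIDE_INT are assumed;
the actual definition lives in config/pa/pa.c):

int
pa_function_arg_size (machine_mode mode, const_tree type)
{
  HOST_WIDE_INT size
    = (mode != BLKmode
       ? (HOST_WIDE_INT) GET_MODE_SIZE (mode)
       : int_size_in_bytes (type));
  return (size + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
}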

2018-01-16  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
PR target/83858
* config/pa/pa.h (FUNCTION_ARG_SIZE): Delete.
* config/pa/pa-protos.h (pa_function_arg_size): Declare.
* config/pa/som.h (ASM_DECLARE_FUNCTION_NAME): Use
pa_function_arg_size instead of FUNCTION_ARG_SIZE.
* config/pa/pa.c (pa_function_arg_advance): Likewise.
(pa_function_arg, pa_arg_partial_bytes): Likewise.
(pa_function_arg_size): New function.

From-SVN: r256744

Fix whitespace in changelog
Segher Boessenkool [Tue, 16 Jan 2018 13:42:46 +0000 (14:42 +0100)]
Fix whitespace in changelog

From-SVN: r256743

Fix changelog
Richard Sandiford [Tue, 16 Jan 2018 12:49:24 +0000 (12:49 +0000)]
Fix changelog

From-SVN: r256741

Avoid GCC 4.1 build failure in fold-const.c
Richard Sandiford [Tue, 16 Jan 2018 12:44:37 +0000 (12:44 +0000)]
Avoid GCC 4.1 build failure in fold-const.c

We had:

      tree t = fold_vec_perm (type, arg1, arg2,
                              vec_perm_indices (sel, 2, nelts));

where fold_vec_perm takes a const vec_perm_indices &.  GCC 4.1 apparently
required a public copy constructor:

gcc/vec-perm-indices.h:85: error: 'vec_perm_indices::vec_perm_indices(const vec_perm_indices&)' is private
gcc/fold-const.c:11410: error: within this context

even though no copy should be made here.  This patch tries to work
around that by constructing the vec_perm_indices separately.
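
A self-contained sketch of the language rule involved (hypothetical
names): C++03 requires an accessible copy constructor merely to bind a
temporary to a const reference, even though no copy is actually made.

struct Indices
{
  explicit Indices (int n) : n_ (n) {}
  int n_;
private:
  Indices (const Indices &);   /* private, like vec_perm_indices's */
};

int use (const Indices &idx) { return idx.n_; }

int
main ()
{
  /* use (Indices (2));  -- GCC 4.1 rejects this: the copy ctor is private.  */
  Indices tmp (2);       /* workaround: construct in a separate statement...  */
  return use (tmp) - 2;  /* ...then pass the named object.  */
}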

2018-01-16  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
* fold-const.c (fold_ternary_loc): Construct the vec_perm_indices
in a separate statement.

From-SVN: r256740

PR libstdc++/83834 replace wildcard pattern in linker script
Jonathan Wakely [Tue, 16 Jan 2018 12:43:08 +0000 (12:43 +0000)]
PR libstdc++/83834 replace wildcard pattern in linker script

PR libstdc++/83834
* config/abi/pre/gnu.ver (GLIBCXX_3.4): Replace std::c[a-g]* wildcard
pattern with exact match for std::cerr.

From-SVN: r256739

* MAINTAINERS (write after approval): Add myself.
Sebastian Perta [Tue, 16 Jan 2018 12:23:39 +0000 (12:23 +0000)]
* MAINTAINERS (write after approval): Add myself.

From-SVN: r256738

Don't group gather loads (PR83847)
Richard Sandiford [Tue, 16 Jan 2018 09:28:26 +0000 (09:28 +0000)]
Don't group gather loads (PR83847)

In the testcase we were trying to group two gather loads, even though
that isn't supported.  Fixed by explicitly disallowing grouping of
gathers and scatters.

This problem didn't show up on SVE because there we convert to
IFN_GATHER_LOAD/IFN_SCATTER_STORE pattern statements, which fail
the can_group_stmts_p check.
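
An illustrative gather-style access (a sketch, not the actual
pr83847.c testcase): each load is indexed through another array, so the
two loads per iteration cannot be grouped like consecutive accesses.

void
f (double *out, const double *in, const int *idx, int n)
{
  for (int i = 0; i < n; i++)
    out[i] = in[idx[2 * i]] + in[idx[2 * i + 1]];  /* two gathers */
}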

2018-01-16  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
* tree-vect-data-refs.c (vect_analyze_data_ref_accesses): Don't group gather loads.

gcc/testsuite/
* gcc.dg/torture/pr83847.c: New test.

From-SVN: r256730

re PR rtl-optimization/83620 (ICE: in assign_by_spills, at lra-assigns.c:1470: unable...
Jakub Jelinek [Tue, 16 Jan 2018 08:55:14 +0000 (09:55 +0100)]
re PR rtl-optimization/83620 (ICE: in assign_by_spills, at lra-assigns.c:1470: unable to find a register to spill with -flive-range-shrinkage --param=max-sched-ready-insns=0)

PR rtl-optimization/83620
* params.def (max-sched-ready-insns): Bump minimum value to 1.

* gcc.dg/pr64935-2.c: Use --param=max-sched-ready-insns=1
instead of --param=max-sched-ready-insns=0.
* gcc.target/i386/pr83620.c: New test.
* gcc.dg/pr83620.c: New test.

From-SVN: r256729

re PR rtl-optimization/83213 (peephole bug with -O2)
Jakub Jelinek [Tue, 16 Jan 2018 08:54:03 +0000 (09:54 +0100)]
re PR rtl-optimization/83213 (peephole bug with -O2)

PR rtl-optimization/83213
* recog.c (peep2_attempt): Copy over CROSSING_JUMP_P from peepinsn
to last if both are JUMP_INSNs.

From-SVN: r256728

re PR tree-optimization/83843 (wrong code at -O2)
Jakub Jelinek [Tue, 16 Jan 2018 08:53:09 +0000 (09:53 +0100)]
re PR tree-optimization/83843 (wrong code at -O2)

PR tree-optimization/83843
* gimple-ssa-store-merging.c
(imm_store_chain_info::output_merged_store): Handle bit_not_p on
store_immediate_info for bswap/nop orig_stores.

* gcc.dg/store_merging_18.c: New test.

From-SVN: r256727

re PR c++/83817 (internal compiler error: tree check: expected call_expr, have aggr_i...
Jakub Jelinek [Tue, 16 Jan 2018 08:44:48 +0000 (09:44 +0100)]
re PR c++/83817 (internal compiler error: tree check: expected call_expr, have aggr_init_expr in tsubst_copy_and_build, at cp/pt.c:17822)

PR c++/83817
* pt.c (tsubst_copy_and_build) <case CALL_EXPR>: If function
is AGGR_INIT_EXPR rather than CALL_EXPR, set AGGR_INIT_FROM_THUNK_P
instead of CALL_FROM_THUNK_P.

* g++.dg/cpp1y/pr83817.C: New test.

From-SVN: r256726

re PR c++/83825 (ICE on invalid C++ code with shadowed identifiers: in operator[...
Jakub Jelinek [Tue, 16 Jan 2018 08:43:31 +0000 (09:43 +0100)]
re PR c++/83825 (ICE on invalid C++ code with shadowed identifiers: in operator[], at vec.h:826)

PR c++/83825
* name-lookup.c (member_vec_dedup): Return early if len is 0.
(resort_type_member_vec, set_class_bindings,
insert_late_enum_def_bindings): Use vec qsort method instead of
calling qsort directly.

* g++.dg/template/pr83825.C: New test.

From-SVN: r256725

pr83435.c: Restrict to target pthread.
Richard Biener [Tue, 16 Jan 2018 08:08:35 +0000 (08:08 +0000)]
pr83435.c: Restrict to target pthread.

2018-01-16  Richard Biener  <rguenther@suse.de>

* gcc.dg/graphite/pr83435.c: Restrict to target pthread.

From-SVN: r256724

re PR testsuite/82132 (FAIL: gcc.dg/vect/vect-tail-nomask-1.c (test for excess errors...
Richard Biener [Tue, 16 Jan 2018 08:04:28 +0000 (08:04 +0000)]
re PR testsuite/82132 (FAIL: gcc.dg/vect/vect-tail-nomask-1.c (test for excess errors) due to missing posix_memalign)

2018-01-16  Richard Biener  <rguenther@suse.de>

PR testsuite/82132
* gcc.dg/vect/vect-tail-nomask-1.c: Copy posix_memalign boiler-plate
from gcc.dg/torture/pr60092.c.

From-SVN: r256723

RISC-V: Increase mult/div cost if not implemented in hardware.
Andrew Waterman [Tue, 16 Jan 2018 03:03:09 +0000 (03:03 +0000)]
RISC-V: Increase mult/div cost if not implemented in hardware.

2018-01-15  Andrew Waterman  <andrew@sifive.com>
gcc/
* config/riscv/riscv.c (riscv_rtx_costs) <MULT>: Increase cost if
!TARGET_MUL.
<UDIV>: Increase cost if !TARGET_DIV.

From-SVN: r256722

PR c++/83588 - struct with two flexible arrays causes an internal compiler error
Martin Sebor [Tue, 16 Jan 2018 03:02:34 +0000 (03:02 +0000)]
PR c++/83588 - struct with two flexible arrays causes an internal compiler error

gcc/cp/ChangeLog:

PR c++/83588
* class.c (find_flexarrays): Make a record of multiple flexible array
members.

gcc/testsuite/ChangeLog:

PR c++/83588
* g++.dg/ext/flexary28.C: New test.

From-SVN: r256721

re PR fortran/82257 (f951: Internal compiler error segmentation fault)
Louis Krupp [Tue, 16 Jan 2018 01:09:11 +0000 (01:09 +0000)]
re PR fortran/82257 (f951: Internal compiler error segmentation fault)

2018-01-15  Louis Krupp  <louis.krupp@zoho.com>

PR fortran/82257
* interface.c (compare_rank): Don't try to retrieve CLASS_DATA
from symbol marked unlimited polymorphic.
* resolve.c (resolve_structure_cons): Likewise.
* misc.c (gfc_typename): Don't dereference derived->components
if it's NULL.

2018-01-15  Louis Krupp  <louis.krupp@zoho.com>

PR fortran/82257
* gfortran.dg/unlimited_polymorphic_28.f90: New test.

From-SVN: r256720

Daily bump.
GCC Administrator [Tue, 16 Jan 2018 00:16:24 +0000 (00:16 +0000)]
Daily bump.

From-SVN: r256719

rs6000: Delete "delayed_cr" insn type
Segher Boessenkool [Mon, 15 Jan 2018 23:02:03 +0000 (00:02 +0100)]
rs6000: Delete "delayed_cr" insn type

"delayed_cr" is just "cr_logical" with the second source operand not
equal to the destination operand.  This patch changes it to be
expressed as type "cr_logical", with a new boolean attribute
"cr_logical_3op" added.  This simplifies code.

* config/rs6000/rs6000.md (define_attr "type"): Remove delayed_cr.
(define_attr "cr_logical_3op"): New.
(cceq_ior_compare): Adjust.
(cceq_ior_compare_complement): Adjust.
(*cceq_rev_compare): Adjust.
* config/rs6000/rs6000.c (rs6000_adjust_cost): Adjust.
(is_cracked_insn): Adjust.
(insn_must_be_first_in_group): Adjust.
* config/rs6000/40x.md: Adjust.
* config/rs6000/440.md: Adjust.
* config/rs6000/476.md: Adjust.
* config/rs6000/601.md: Adjust.
* config/rs6000/603.md: Adjust.
* config/rs6000/6xx.md: Adjust.
* config/rs6000/7450.md: Adjust.
* config/rs6000/7xx.md: Adjust.
* config/rs6000/8540.md: Adjust.
* config/rs6000/cell.md: Adjust.
* config/rs6000/e300c2c3.md: Adjust.
* config/rs6000/e500mc.md: Adjust.
* config/rs6000/e500mc64.md: Adjust.
* config/rs6000/e5500.md: Adjust.
* config/rs6000/e6500.md: Adjust.
* config/rs6000/mpc.md: Adjust.
* config/rs6000/power4.md: Adjust.
* config/rs6000/power5.md: Adjust.
* config/rs6000/power6.md: Adjust.
* config/rs6000/power7.md: Adjust.
* config/rs6000/power8.md: Adjust.
* config/rs6000/power9.md: Adjust.
* config/rs6000/rs64.md: Adjust.
* config/rs6000/titan.md: Adjust.

From-SVN: r256716

i386: Rewrite indirect_branch_operand logic
H.J. Lu [Mon, 15 Jan 2018 22:36:42 +0000 (22:36 +0000)]
i386: Rewrite indirect_branch_operand logic

* config/i386/predicates.md (indirect_branch_operand): Rewrite
ix86_indirect_branch_register logic.

From-SVN: r256715

Don't check ix86_indirect_branch_register for GOT operand
H.J. Lu [Mon, 15 Jan 2018 22:35:36 +0000 (22:35 +0000)]
Don't check ix86_indirect_branch_register for GOT operand

Since GOT_memory_operand and GOT32_symbol_operand are simple pattern
matches, don't check ix86_indirect_branch_register here.  If needed,
-mindirect-branch= will convert indirect branch via GOT slot to a call
and return thunk.

* config/i386/constraints.md (Bs): Update
ix86_indirect_branch_register check.  Don't check
ix86_indirect_branch_register with GOT_memory_operand.
(Bw): Likewise.
* config/i386/predicates.md (GOT_memory_operand): Don't check
ix86_indirect_branch_register here.
(GOT32_symbol_operand): Likewise.

From-SVN: r256714

i386: Rewrite ix86_indirect_branch_register logic
H.J. Lu [Mon, 15 Jan 2018 22:32:37 +0000 (22:32 +0000)]
i386: Rewrite ix86_indirect_branch_register logic

Rewrite ix86_indirect_branch_register logic with

(and (not (match_test "ix86_indirect_branch_register"))
     (original condition before r256662))

* config/i386/predicates.md (constant_call_address_operand):
Rewrite ix86_indirect_branch_register logic.
(sibcall_insn_operand): Likewise.

From-SVN: r256713

i386: Rename to ix86_indirect_branch_register
H.J. Lu [Mon, 15 Jan 2018 22:29:41 +0000 (22:29 +0000)]
i386: Rename to ix86_indirect_branch_register

Rename the variable for -mindirect-branch-register to
ix86_indirect_branch_register to match the command-line option name.

* config/i386/constraints.md (Bs): Replace
ix86_indirect_branch_thunk_register with
ix86_indirect_branch_register.
(Bw): Likewise.
* config/i386/i386.md (indirect_jump): Likewise.
(tablejump): Likewise.
(*sibcall_memory): Likewise.
(*sibcall_value_memory): Likewise.
Peepholes of indirect call and jump via memory: Likewise.
* config/i386/i386.opt: Likewise.
* config/i386/predicates.md (indirect_branch_operand): Likewise.
(GOT_memory_operand): Likewise.
(call_insn_operand): Likewise.
(sibcall_insn_operand): Likewise.
(GOT32_symbol_operand): Likewise.

From-SVN: r256712

re PR middle-end/83837 (libgomp.fortran/pointer[12].f90 FAIL)
Jakub Jelinek [Mon, 15 Jan 2018 21:47:11 +0000 (22:47 +0100)]
re PR middle-end/83837 (libgomp.fortran/pointer[12].f90 FAIL)

PR middle-end/83837
* omp-expand.c (expand_omp_atomic_pipeline): Use loaded_val
type rather than type addr's type points to.
(expand_omp_atomic_mutex): Likewise.
(expand_omp_atomic): Likewise.

From-SVN: r256710

PR testsuite/83869 - c-c++-common/attr-nonstring-3.c fails starting with r256683
Martin Sebor [Mon, 15 Jan 2018 21:45:06 +0000 (21:45 +0000)]
PR testsuite/83869 - c-c++-common/attr-nonstring-3.c fails starting with r256683

testsuite/ChangeLog:
* c-c++-common/attr-nonstring-3.c: Work around bug c++/74762.

From-SVN: r256709

PR libstdc++/83833 fix chi_squared_distribution::param(const param&)
Jonathan Wakely [Mon, 15 Jan 2018 19:58:22 +0000 (19:58 +0000)]
PR libstdc++/83833 fix chi_squared_distribution::param(const param&)

PR libstdc++/83833
* include/bits/random.h (chi_squared_distribution::param): Update
gamma distribution parameter.
* testsuite/26_numerics/random/chi_squared_distribution/83833.cc: New
test.
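
A usage sketch of the fixed behaviour (standard API only): after
param(), sampling must reflect the new degrees of freedom, which
requires updating the underlying gamma distribution.

#include <random>

int
main ()
{
  std::mt19937 gen;
  std::chi_squared_distribution<double> d (1.0);
  /* Before the fix, d kept sampling as if n() were still 1.  */
  d.param (std::chi_squared_distribution<double>::param_type (100.0));
  return d (gen) > 0.0 ? 0 : 1;
}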

From-SVN: r256708

compiler: reclaim memory of escape analysis Nodes
Ian Lance Taylor [Mon, 15 Jan 2018 19:35:44 +0000 (19:35 +0000)]
compiler: reclaim memory of escape analysis Nodes

    Reclaim the memory of escape analysis Nodes before kicking off
    the backend, as they are not needed in get_backend.

    Reviewed-on: https://go-review.googlesource.com/86243

From-SVN: r256707

compiler: make sure variables captured by defer closure live
Ian Lance Taylor [Mon, 15 Jan 2018 19:13:47 +0000 (19:13 +0000)]
compiler: make sure variables captured by defer closure live

    Local variables captured by the deferred closure need to be live
    until the function finishes, especially when the deferred
    function runs. In Function::build, for function that has a defer,
    we wrap the function body in a try block. So the backend sees
    the local variables only live in the try block, without knowing
    that they are needed also in the finally block where we invoke
    the deferred function. Fix this by creating top-level
    declarations for non-escaping address-taken locals when there
    is a defer.

    An example of miscompilation without this CL:

    func F(fn func()) {
            didPanic := true
            defer func() {
                    println(didPanic)
            }()
            fn()
            didPanic = false
    }

    With escape analysis turned on, at optimization level -O1 or -O2,
    the store "didPanic = false" is elided by the backend's
    optimizer, presumably because it thinks "didPanic" is not live
    after the store, so the store is useless.

    Reviewed-on: https://go-review.googlesource.com/86241

From-SVN: r256706

re PR fortran/54613 ([F08] Add FINDLOC plus support MAXLOC/MINLOC with KIND=/BACK=)
Thomas Koenig [Mon, 15 Jan 2018 18:35:13 +0000 (18:35 +0000)]
re PR fortran/54613 ([F08] Add FINDLOC plus support MAXLOC/MINLOC with KIND=/BACK=)

2018-01-15  Thomas Koenig  <tkoenig@gcc.gnu.org>

PR fortran/54613
* gfortran.h (gfc_check_f): Rename f4ml to f5ml.
(gfc_logical_4_kind): New macro.
* intrinsic.h (gfc_simplify_minloc): Add a gfc_expr *argument.
(gfc_simplify_maxloc): Likewise.
(gfc_resolve_maxloc): Likewise.
(gfc_resolve_minloc): Likewise.
* check.c (gfc_check_minloc_maxloc): Add checking for "back"
argument; also raise error if it is used (for now). Add it
if it isn't present.
* intrinsic.c (add_sym_4ml): Rename to
(add_sym_5ml), adjust for extra argument.
(add_functions): Add "back" constant. Adjust maxloc and minloc
for back argument.
* iresolve.c (gfc_resolve_maxloc): Add back argument. If back is
not of gfc_logical_4_kind, convert.
(gfc_resolve_minloc): Likewise.
* simplify.c (gfc_simplify_minloc): Add back argument.
(gfc_simplify_maxloc): Likewise.
* trans-intrinsic.c (gfc_conv_intrinsic_minmaxloc): Rename last
argument to %VAL to ensure passing by value.
(gfc_conv_intrinsic_function): Call gfc_conv_intrinsic_minmaxloc
also for library calls.

2018-01-15  Thomas Koenig  <tkoenig@gcc.gnu.org>

PR fortran/54613
* m4/iparm.m4: Add back_arg macro if in minloc or maxloc.
* m4/iforeach-s.m4: Add optional argument back with back_arg
macro. Improve m4 quoting. If HAVE_BACK_ARG is defined, assert
that back is non-true.
* m4/iforeach.m4: Likewise.
* m4/ifunction-s.m4: Likewise.
* m4/ifunction.m4: Likewise.
* m4/maxloc0.m4: Include assert.h.
* m4/minloc0.m4: Likewise.
* m4/maxloc0s.m4: #define HAVE_BACK_ARG.
* m4/minloc0s.m4: Likewise.
* m4/maxloc1s.m4: Likewise.
* m4/minloc1s.m4: Likewise.
* m4/maxloc1.m4: Include assert.h, #define HAVE_BACK_ARG.
* m4/minloc1.m4: Likewise.
* m4/maxloc2s.m4: Add assert.h, add back_arg, assert that
back is non-true.
* m4/minloc2s.m4: Likewise.
* generated/iall_i1.c: Regenerated.
* generated/iall_i16.c: Regenerated.
* generated/iall_i2.c: Regenerated.
* generated/iall_i4.c: Regenerated.
* generated/iall_i8.c: Regenerated.
* generated/iany_i1.c: Regenerated.
* generated/iany_i16.c: Regenerated.
* generated/iany_i2.c: Regenerated.
* generated/iany_i4.c: Regenerated.
* generated/iany_i8.c: Regenerated.
* generated/iparity_i1.c: Regenerated.
* generated/iparity_i16.c: Regenerated.
* generated/iparity_i2.c: Regenerated.
* generated/iparity_i4.c: Regenerated.
* generated/iparity_i8.c: Regenerated.
* generated/maxloc0_16_i1.c: Regenerated.
* generated/maxloc0_16_i16.c: Regenerated.
* generated/maxloc0_16_i2.c: Regenerated.
* generated/maxloc0_16_i4.c: Regenerated.
* generated/maxloc0_16_i8.c: Regenerated.
* generated/maxloc0_16_r10.c: Regenerated.
* generated/maxloc0_16_r16.c: Regenerated.
* generated/maxloc0_16_r4.c: Regenerated.
* generated/maxloc0_16_r8.c: Regenerated.
* generated/maxloc0_16_s1.c: Regenerated.
* generated/maxloc0_16_s4.c: Regenerated.
* generated/maxloc0_4_i1.c: Regenerated.
* generated/maxloc0_4_i16.c: Regenerated.
* generated/maxloc0_4_i2.c: Regenerated.
* generated/maxloc0_4_i4.c: Regenerated.
* generated/maxloc0_4_i8.c: Regenerated.
* generated/maxloc0_4_r10.c: Regenerated.
* generated/maxloc0_4_r16.c: Regenerated.
* generated/maxloc0_4_r4.c: Regenerated.
* generated/maxloc0_4_r8.c: Regenerated.
* generated/maxloc0_4_s1.c: Regenerated.
* generated/maxloc0_4_s4.c: Regenerated.
* generated/maxloc0_8_i1.c: Regenerated.
* generated/maxloc0_8_i16.c: Regenerated.
* generated/maxloc0_8_i2.c: Regenerated.
* generated/maxloc0_8_i4.c: Regenerated.
* generated/maxloc0_8_i8.c: Regenerated.
* generated/maxloc0_8_r10.c: Regenerated.
* generated/maxloc0_8_r16.c: Regenerated.
* generated/maxloc0_8_r4.c: Regenerated.
* generated/maxloc0_8_r8.c: Regenerated.
* generated/maxloc0_8_s1.c: Regenerated.
* generated/maxloc0_8_s4.c: Regenerated.
* generated/maxloc1_16_i1.c: Regenerated.
* generated/maxloc1_16_i16.c: Regenerated.
* generated/maxloc1_16_i2.c: Regenerated.
* generated/maxloc1_16_i4.c: Regenerated.
* generated/maxloc1_16_i8.c: Regenerated.
* generated/maxloc1_16_r10.c: Regenerated.
* generated/maxloc1_16_r16.c: Regenerated.
* generated/maxloc1_16_r4.c: Regenerated.
* generated/maxloc1_16_r8.c: Regenerated.
* generated/maxloc1_16_s1.c: Regenerated.
* generated/maxloc1_16_s4.c: Regenerated.
* generated/maxloc1_4_i1.c: Regenerated.
* generated/maxloc1_4_i16.c: Regenerated.
* generated/maxloc1_4_i2.c: Regenerated.
* generated/maxloc1_4_i4.c: Regenerated.
* generated/maxloc1_4_i8.c: Regenerated.
* generated/maxloc1_4_r10.c: Regenerated.
* generated/maxloc1_4_r16.c: Regenerated.
* generated/maxloc1_4_r4.c: Regenerated.
* generated/maxloc1_4_r8.c: Regenerated.
* generated/maxloc1_4_s1.c: Regenerated.
* generated/maxloc1_4_s4.c: Regenerated.
* generated/maxloc1_8_i1.c: Regenerated.
* generated/maxloc1_8_i16.c: Regenerated.
* generated/maxloc1_8_i2.c: Regenerated.
* generated/maxloc1_8_i4.c: Regenerated.
* generated/maxloc1_8_i8.c: Regenerated.
* generated/maxloc1_8_r10.c: Regenerated.
* generated/maxloc1_8_r16.c: Regenerated.
* generated/maxloc1_8_r4.c: Regenerated.
* generated/maxloc1_8_r8.c: Regenerated.
* generated/maxloc1_8_s1.c: Regenerated.
* generated/maxloc1_8_s4.c: Regenerated.
* generated/maxval_i1.c: Regenerated.
* generated/maxval_i16.c: Regenerated.
* generated/maxval_i2.c: Regenerated.
* generated/maxval_i4.c: Regenerated.
* generated/maxval_i8.c: Regenerated.
* generated/maxval_r10.c: Regenerated.
* generated/maxval_r16.c: Regenerated.
* generated/maxval_r4.c: Regenerated.
* generated/maxval_r8.c: Regenerated.
* generated/minloc0_16_i1.c: Regenerated.
* generated/minloc0_16_i16.c: Regenerated.
* generated/minloc0_16_i2.c: Regenerated.
* generated/minloc0_16_i4.c: Regenerated.
* generated/minloc0_16_i8.c: Regenerated.
* generated/minloc0_16_r10.c: Regenerated.
* generated/minloc0_16_r16.c: Regenerated.
* generated/minloc0_16_r4.c: Regenerated.
* generated/minloc0_16_r8.c: Regenerated.
* generated/minloc0_16_s1.c: Regenerated.
* generated/minloc0_16_s4.c: Regenerated.
* generated/minloc0_4_i1.c: Regenerated.
* generated/minloc0_4_i16.c: Regenerated.
* generated/minloc0_4_i2.c: Regenerated.
* generated/minloc0_4_i4.c: Regenerated.
* generated/minloc0_4_i8.c: Regenerated.
* generated/minloc0_4_r10.c: Regenerated.
* generated/minloc0_4_r16.c: Regenerated.
* generated/minloc0_4_r4.c: Regenerated.
* generated/minloc0_4_r8.c: Regenerated.
* generated/minloc0_4_s1.c: Regenerated.
* generated/minloc0_4_s4.c: Regenerated.
* generated/minloc0_8_i1.c: Regenerated.
* generated/minloc0_8_i16.c: Regenerated.
* generated/minloc0_8_i2.c: Regenerated.
* generated/minloc0_8_i4.c: Regenerated.
* generated/minloc0_8_i8.c: Regenerated.
* generated/minloc0_8_r10.c: Regenerated.
* generated/minloc0_8_r16.c: Regenerated.
* generated/minloc0_8_r4.c: Regenerated.
* generated/minloc0_8_r8.c: Regenerated.
* generated/minloc0_8_s1.c: Regenerated.
* generated/minloc0_8_s4.c: Regenerated.
* generated/minloc1_16_i1.c: Regenerated.
* generated/minloc1_16_i16.c: Regenerated.
* generated/minloc1_16_i2.c: Regenerated.
* generated/minloc1_16_i4.c: Regenerated.
* generated/minloc1_16_i8.c: Regenerated.
* generated/minloc1_16_r10.c: Regenerated.
* generated/minloc1_16_r16.c: Regenerated.
* generated/minloc1_16_r4.c: Regenerated.
* generated/minloc1_16_r8.c: Regenerated.
* generated/minloc1_16_s1.c: Regenerated.
* generated/minloc1_16_s4.c: Regenerated.
* generated/minloc1_4_i1.c: Regenerated.
* generated/minloc1_4_i16.c: Regenerated.
* generated/minloc1_4_i2.c: Regenerated.
* generated/minloc1_4_i4.c: Regenerated.
* generated/minloc1_4_i8.c: Regenerated.
* generated/minloc1_4_r10.c: Regenerated.
* generated/minloc1_4_r16.c: Regenerated.
* generated/minloc1_4_r4.c: Regenerated.
* generated/minloc1_4_r8.c: Regenerated.
* generated/minloc1_4_s1.c: Regenerated.
* generated/minloc1_4_s4.c: Regenerated.
* generated/minloc1_8_i1.c: Regenerated.
* generated/minloc1_8_i16.c: Regenerated.
* generated/minloc1_8_i2.c: Regenerated.
* generated/minloc1_8_i4.c: Regenerated.
* generated/minloc1_8_i8.c: Regenerated.
* generated/minloc1_8_r10.c: Regenerated.
* generated/minloc1_8_r16.c: Regenerated.
* generated/minloc1_8_r4.c: Regenerated.
* generated/minloc1_8_r8.c: Regenerated.
* generated/minloc1_8_s1.c: Regenerated.
* generated/minloc1_8_s4.c: Regenerated.
* generated/minval_i1.c: Regenerated.
* generated/minval_i16.c: Regenerated.
* generated/minval_i2.c: Regenerated.
* generated/minval_i4.c: Regenerated.
* generated/minval_i8.c: Regenerated.
* generated/minval_r10.c: Regenerated.
* generated/minval_r16.c: Regenerated.
* generated/minval_r4.c: Regenerated.
* generated/minval_r8.c: Regenerated.
* generated/norm2_r10.c: Regenerated.
* generated/norm2_r16.c: Regenerated.
* generated/norm2_r4.c: Regenerated.
* generated/norm2_r8.c: Regenerated.
* generated/parity_l1.c: Regenerated.
* generated/parity_l16.c: Regenerated.
* generated/parity_l2.c: Regenerated.
* generated/parity_l4.c: Regenerated.
* generated/parity_l8.c: Regenerated.
* generated/product_c10.c: Regenerated.
* generated/product_c16.c: Regenerated.
* generated/product_c4.c: Regenerated.
* generated/product_c8.c: Regenerated.
* generated/product_i1.c: Regenerated.
* generated/product_i16.c: Regenerated.
* generated/product_i2.c: Regenerated.
* generated/product_i4.c: Regenerated.
* generated/product_i8.c: Regenerated.
* generated/product_r10.c: Regenerated.
* generated/product_r16.c: Regenerated.
* generated/product_r4.c: Regenerated.
* generated/product_r8.c: Regenerated.
* generated/sum_c10.c: Regenerated.
* generated/sum_c16.c: Regenerated.
* generated/sum_c4.c: Regenerated.
* generated/sum_c8.c: Regenerated.
* generated/sum_i1.c: Regenerated.
* generated/sum_i16.c: Regenerated.
* generated/sum_i2.c: Regenerated.
* generated/sum_i4.c: Regenerated.
* generated/sum_i8.c: Regenerated.
* generated/sum_r10.c: Regenerated.
* generated/sum_r16.c: Regenerated.
* generated/sum_r4.c: Regenerated.
* generated/sum_r8.c: Regenerated.

2018-01-15  Thomas Koenig  <tkoenig@gcc.gnu.org>

PR fortran/54613
* gfortran.dg/minmaxloc_9.f90: New test.
* gfortran.dg/minmaxloc_10.f90: New test.
* gfortran.dg/minmaxloc_11.f90: New test.

From-SVN: r256705

i386: Don't use ASM_OUTPUT_DEF for TARGET_MACHO
H.J. Lu [Mon, 15 Jan 2018 18:16:01 +0000 (18:16 +0000)]
i386: Don't use ASM_OUTPUT_DEF for TARGET_MACHO

ASM_OUTPUT_DEF isn't defined for TARGET_MACHO.  Use ASM_OUTPUT_LABEL to
generate the __x86_return_thunk label, instead of the set directive.
Update testcase to remove the __x86_return_thunk label check.  Since
-fno-pic is ignored on Darwin, update testcases to scan or "push" only
on Linux.

gcc/

PR target/83839
* config/i386/i386.c (output_indirect_thunk_function): Use
ASM_OUTPUT_LABEL, instead of ASM_OUTPUT_DEF, for TARGET_MACHO
for __x86_return_thunk.

gcc/testsuite/

PR target/83839
* gcc.target/i386/indirect-thunk-1.c: Scan for "push" only on
Linux.
* gcc.target/i386/indirect-thunk-2.c: Likewise.
* gcc.target/i386/indirect-thunk-3.c: Likewise.
* gcc.target/i386/indirect-thunk-4.c: Likewise.
* gcc.target/i386/indirect-thunk-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-1.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-2.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-5.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-6.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-7.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-1.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-2.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-3.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-7.c: Likewise.
* gcc.target/i386/indirect-thunk-register-1.c: Likewise.
* gcc.target/i386/indirect-thunk-register-3.c: Likewise.
* gcc.target/i386/indirect-thunk-register-4.c: Likewise.
* gcc.target/i386/ret-thunk-10.c: Likewise.
* gcc.target/i386/ret-thunk-11.c: Likewise.
* gcc.target/i386/ret-thunk-12.c: Likewise.
* gcc.target/i386/ret-thunk-13.c: Likewise.
* gcc.target/i386/ret-thunk-14.c: Likewise.
* gcc.target/i386/ret-thunk-15.c: Likewise.
* gcc.target/i386/ret-thunk-9.c: Don't check the
__x86_return_thunk label.
Scan for "push" only for Linux.

From-SVN: r256704

PR libstdc++/83830 Define std::has_unique_object_representations_v
Jonathan Wakely [Mon, 15 Jan 2018 15:02:01 +0000 (15:02 +0000)]
PR libstdc++/83830 Define std::has_unique_object_representations_v

PR libstdc++/83830
* include/std/type_traits (has_unique_object_representations_v): Add
variable template.
* testsuite/20_util/has_unique_object_representations/value.cc: Check
variable template.
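
A minimal usage sketch of the new variable template (C++17):

#include <type_traits>

static_assert (std::has_unique_object_representations_v<int>);

/* Padding bytes make the representation non-unique on typical ABIs.  */
struct padded { char c; int i; };
static_assert (!std::has_unique_object_representations_v<padded>);

int main () { return 0; }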

From-SVN: r256701

re PR target/83850 (Spills on vector extract, gcc.target/i386/pr80846-1.c FAILs)
Richard Biener [Mon, 15 Jan 2018 14:43:52 +0000 (14:43 +0000)]
re PR target/83850 (Spills on vector extract, gcc.target/i386/pr80846-1.c FAILs)

2018-01-15  Richard Biener  <rguenther@suse.de>

PR middle-end/83850
* expmed.c (extract_bit_field_1): Fix typo.

From-SVN: r256700

Missing vect_double in gcc.dg/vect/pr79920.c (PR83836)
Richard Sandiford [Mon, 15 Jan 2018 12:38:55 +0000 (12:38 +0000)]
Missing vect_double in gcc.dg/vect/pr79920.c (PR83836)

2018-01-15  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/testsuite/
PR testsuite/79920
* gcc.dg/vect/pr79920.c: Restrict reduction test to vect_double

From-SVN: r256698

[arm] PR target/83687: Fix invalid combination of VSUB + VABS into VABD
Kyrylo Tkachov [Mon, 15 Jan 2018 11:56:03 +0000 (11:56 +0000)]
[arm] PR target/83687: Fix invalid combination of VSUB + VABS into VABD

In this wrong-code bug we combine a VSUB.I8 and a VABS.S8
into a VABD.S8 instruction . This combination is not valid
for integer operands because in the VABD instruction the semantics
are that the difference is computed in notionally infinite precision
and the absolute difference is computed on that, whereas for a
VSUB.I8 + VABS.S8 sequence the VSUB operation will perform any
wrapping that's needed for the 8-bit signed type before the VABS
gets its hands on it.

This leads to the wrong-code in the PR where the expected
sequence from the intrinsics:
VSUB + VABS of two vectors {-100, -100, -100...}, {100, 100, 100...}
gives a result of {56, 56, 56...} (-100 - 100)

but GCC optimises it into a single
VABD of {-100, -100, -100...}, {100, 100, 100...}
which produces a result of {200, 200, 200...}
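
The difference can be reproduced with plain scalar arithmetic (an
illustrative sketch of the semantics, not the actual intrinsics
testcase):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int
main ()
{
  int8_t a = -100, b = 100;
  int8_t wrapped = (int8_t) (a - b);     /* -200 wraps to 56 in 8 bits */
  int vsub_vabs = abs (wrapped);         /* 56: what VSUB.I8 + VABS.S8 give */
  int vabd = abs ((int) a - (int) b);    /* 200: what VABD.S8 computes */
  printf ("%d %d\n", vsub_vabs, vabd);   /* prints: 56 200 */
  return 0;
}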

The transformation is still valid for floating-point operands,
which is why it was added in the first place, I believe (r178817),
but this patch disables it for integer operands.
The HFmode variants, though, only exist for TARGET_NEON_FP16INST, so
this patch adds the appropriate guards to the new mode iterator.

Bootstrapped and tested on arm-none-linux-gnueabihf.

PR target/83687
* config/arm/iterators.md (VF): New mode iterator.
* config/arm/neon.md (neon_vabd<mode>_2): Use the above.
Remove integer-related logic from pattern.
(neon_vabd<mode>_3): Likewise.

* gcc.target/arm/neon-combine-sub-abs-into-vabd.c: Delete integer
tests.
* gcc.target/arm/pr83687.c: New test.

From-SVN: r256696

Make optional conditionally trivially_{copy,move}_{constructible,assignable}
Ville Voutilainen [Mon, 15 Jan 2018 11:32:24 +0000 (13:32 +0200)]
Make optional conditionally trivially_{copy,move}_{constructible,assignable}

* include/std/optional (_Optional_payload): Fix the comment in
the class head and turn into a primary and one specialization.
(_Optional_payload::_M_engaged): Strike the NSDMI.
(_Optional_payload<_Tp, false>::operator=(const _Optional_payload&)):
New.
(_Optional_payload<_Tp, false>::operator=(_Optional_payload&&)):
Likewise.
(_Optional_payload<_Tp, false>::_M_get): Likewise.
(_Optional_payload<_Tp, false>::_M_reset): Likewise.
(_Optional_base_impl): Likewise.
(_Optional_base): Turn into a primary and three specializations.
(optional(nullopt)): Change the base init.
* testsuite/20_util/optional/assignment/8.cc: New.
* testsuite/20_util/optional/cons/trivial.cc: Likewise.
* testsuite/20_util/optional/cons/value_neg.cc: Adjust.
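
What the change provides, observable from user code (a sketch,
assuming a libstdc++ with this patch):

#include <optional>
#include <type_traits>

static_assert (std::is_trivially_copy_constructible_v<std::optional<int>>);
static_assert (std::is_trivially_move_constructible_v<std::optional<int>>);
static_assert (std::is_trivially_copy_assignable_v<std::optional<int>>);
static_assert (std::is_trivially_destructible_v<std::optional<int>>);

int main () { return 0; }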

From-SVN: r256694

Adjust tests to AVR_TINY.
Georg-Johann Lay [Mon, 15 Jan 2018 11:18:18 +0000 (11:18 +0000)]
Adjust tests to AVR_TINY.

* gcc.target/avr/progmem.h (pgm_read_char): Handle AVR_TINY.
* gcc.target/avr/pr52472.c: Add "! avr_tiny" target filter.
* gcc.target/avr/pr71627.c: Same.
* gcc.target/avr/torture/addr-space-1-0.c: Same.
* gcc.target/avr/torture/addr-space-1-1.c: Same.
* gcc.target/avr/torture/addr-space-1-x.c: Same.
* gcc.target/avr/torture/addr-space-2-0.c: Same.
* gcc.target/avr/torture/addr-space-2-1.c: Same.
* gcc.target/avr/torture/addr-space-2-x.c: Same.
* gcc.target/avr/torture/sat-hr-plus-minus.c: Same.
* gcc.target/avr/torture/sat-k-plus-minus.c: Same.
* gcc.target/avr/torture/sat-llk-plus-minus.c: Same.
* gcc.target/avr/torture/sat-r-plus-minus.c: Same.
* gcc.target/avr/torture/sat-uhr-plus-minus.c: Same.
* gcc.target/avr/torture/sat-uk-plus-minus.c: Same.
* gcc.target/avr/torture/sat-ullk-plus-minus.c: Same.
* gcc.target/avr/torture/sat-ur-plus-minus.c: Same.
* gcc.target/avr/torture/pr61055.c: Same.
* gcc.target/avr/torture/builtins-3-absfx.c: Only use __flash if
available.
* gcc.target/avr/torture/int24-mul.c: Same.
* gcc.target/avr/torture/pr51782-1.c: Same.
* gcc.target/avr/torture/pr61443.c: Same.
* gcc.target/avr/torture/builtins-2.c: Factor out addr-space stuff...
* gcc.target/avr/torture/builtins-2-flash.c: ...to this new test.

From-SVN: r256690

PR libstdc++/80276 fix template argument handling in type printers
Jonathan Wakely [Mon, 15 Jan 2018 11:13:53 +0000 (11:13 +0000)]
PR libstdc++/80276 fix template argument handling in type printers

PR libstdc++/80276
* python/libstdcxx/v6/printers.py (strip_inline_namespaces): New.
(get_template_arg_list): New.
(StdVariantPrinter._template_args): Remove, use get_template_arg_list
instead.
(TemplateTypePrinter): Rewrite to work with gdb.Type objects instead
of strings and regular expressions.
(add_one_template_type_printer): Adapt to new TemplateTypePrinter.
(FilteringTypePrinter): Add docstring. Match using startswith. Use
strip_inline_namespaces instead of strip_versioned_namespace.
(add_one_type_printer): Prepend namespace to match argument.
(register_type_printers): Add type printers for char16_t and char32_t
string types and for types using cxx11 ABI. Update calls to
add_one_template_type_printer to provide default argument dicts.
* testsuite/libstdc++-prettyprinters/80276.cc: New test.
* testsuite/libstdc++-prettyprinters/whatis.cc: Remove tests for
basic_string<unsigned char> and basic_string<signed char>.
* testsuite/libstdc++-prettyprinters/whatis2.cc: Duplicate whatis.cc
to test local variables, without overriding _GLIBCXX_USE_CXX11_ABI.

From-SVN: r256689

Correct earlier ChangeLog entry
Juraj Oršulić [Mon, 15 Jan 2018 11:13:49 +0000 (11:13 +0000)]
Correct earlier ChangeLog entry

Add Juraj Oršulić as original patch author.

From-SVN: r256688

re PR c/83801 ([avr] String constant in __flash not put into .progmem)
Georg-Johann Lay [Mon, 15 Jan 2018 10:04:32 +0000 (10:04 +0000)]
re PR c/83801 ([avr] String constant in __flash not put into .progmem)

PR c/83801
PR c/83729
* gcc.target/avr/torture/pr83729.c: New test.
* gcc.target/avr/torture/pr83801.c: New test.

From-SVN: r256687

re PR middle-end/82694 (Linux kernel miscompiled since r250765)
Jakub Jelinek [Mon, 15 Jan 2018 09:05:59 +0000 (10:05 +0100)]
re PR middle-end/82694 (Linux kernel miscompiled since r250765)

PR middle-end/82694
* common.opt (fstrict-overflow): No longer an alias.
(fwrapv-pointer): New option.
* tree.h (TYPE_OVERFLOW_WRAPS, TYPE_OVERFLOW_UNDEFINED): Define
also for pointer types based on flag_wrapv_pointer.
* opts.c (common_handle_option) <case OPT_fstrict_overflow>: Set
opts->x_flag_wrap[pv] to !value, clear opts->x_flag_trapv if
opts->x_flag_wrapv got set.
* fold-const.c (fold_comparison, fold_binary_loc): Revert 2017-08-01
changes, just use TYPE_OVERFLOW_UNDEFINED on pointer type instead of
POINTER_TYPE_OVERFLOW_UNDEFINED.
* match.pd: Likewise in address comparison pattern.
* doc/invoke.texi: Document -fwrapv and -fstrict-overflow.

* gcc.dg/no-strict-overflow-7.c: Revert 2017-08-01 changes.
* gcc.dg/tree-ssa/pr81388-1.c: Likewise.
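
An illustrative sketch of what the new option governs (hypothetical
example): with -fwrapv-pointer, pointer arithmetic is defined to wrap,
so an overflow check of this shape may no longer be folded away.

bool
wraps (char *p, unsigned long n)
{
  /* Folded to "false" when pointer overflow is undefined behaviour.  */
  return p + n < p;
}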

From-SVN: r256686

re PR lto/83804 ([meta] LTO memory consumption)
Richard Biener [Mon, 15 Jan 2018 08:57:28 +0000 (08:57 +0000)]
re PR lto/83804 ([meta] LTO memory consumption)

2018-01-15  Richard Biener  <rguenther@suse.de>

PR lto/83804
* tree.c (free_lang_data_in_type): Always unlink TYPE_DECLs
from TYPE_FIELDS.  Free TYPE_BINFO if not used by devirtualization.
Reset type names to their identifier if their TYPE_DECL doesn't
have linkage (and thus is used for ODR and devirt).
(save_debug_info_for_decl): Remove.
(save_debug_info_for_type): Likewise.
(add_tree_to_fld_list): Adjust.
* tree-pretty-print.c (dump_generic_node): Make dumping of
type names more robust.

From-SVN: r256685

BASE-VER: Bump to 8.0.1.
Richard Biener [Mon, 15 Jan 2018 08:28:13 +0000 (08:28 +0000)]
BASE-VER: Bump to 8.0.1.

2018-01-15  Richard Biener  <rguenther@suse.de>

* BASE-VER: Bump to 8.0.1.

From-SVN: r256684

re PR other/83508 ([arm] c-c++-common/Wrestrict.c fails since r255836)
Martin Sebor [Mon, 15 Jan 2018 06:15:09 +0000 (06:15 +0000)]
re PR other/83508 ([arm] c-c++-common/Wrestrict.c fails since r255836)

PR other/83508
* builtins.c (check_access): Avoid warning when the no-warning bit
is set.

PR other/83508
* gcc.dg/Wstringop-overflow-2.c: New test.

From-SVN: r256683

tree-ssa-loop-im.c (sort_bbs_in_loop_postorder_cmp): Stabilize sort.
Cory Fields [Mon, 15 Jan 2018 06:05:50 +0000 (06:05 +0000)]
tree-ssa-loop-im.c (sort_bbs_in_loop_postorder_cmp): Stabilize sort.

* tree-ssa-loop-im.c (sort_bbs_in_loop_postorder_cmp): Stabilize sort.
* ira-color.c (allocno_hard_regs_compare): Likewise.
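
The usual technique (a generic sketch with hypothetical names): break
comparison ties with a stable key such as the element's original index,
so qsort results no longer depend on the library's unstable sort.

struct node
{
  int key;     /* the quantity being sorted on */
  int index;   /* original position, used as a stable tie-breaker */
};

static int
node_cmp (const void *pa, const void *pb)
{
  const node *a = static_cast<const node *> (pa);
  const node *b = static_cast<const node *> (pb);
  if (a->key != b->key)
    return a->key < b->key ? -1 : 1;
  return a->index - b->index;   /* deterministic order for equal keys */
}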

From-SVN: r256682

re PR target/83013 (MicroBlaze - #ident - Error: operation combines symbols in differ...
Nathan Rossi [Mon, 15 Jan 2018 06:02:19 +0000 (06:02 +0000)]
re PR target/83013 (MicroBlaze - #ident - Error: operation combines symbols in different segments)

        PR target/83013
        * config/microblaze/microblaze.c (microblaze_asm_output_ident):
        Use .pushsection/.popsection.

From-SVN: r256681

Daily bump.
GCC Administrator [Mon, 15 Jan 2018 00:16:26 +0000 (00:16 +0000)]
Daily bump.

From-SVN: r256680

PR c++/81327 - cast to void* does not suppress -Wclass-memaccess
Martin Sebor [Sun, 14 Jan 2018 21:54:25 +0000 (21:54 +0000)]
PR c++/81327 - cast to void* does not suppress -Wclass-memaccess

gcc/ChangeLog:
PR c++/81327
* doc/invoke.texi (-Wclass-memaccess): Document suppression by casting.
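
The documented suppression, as a small sketch:

#include <cstring>

struct S { S (); int i; };   /* non-trivial: raw memset would warn */

void
clear (S *p)
{
  /* The explicit cast to void* documents intent and suppresses
     -Wclass-memaccess.  */
  std::memset (static_cast<void *> (p), 0, sizeof *p);
}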

From-SVN: r256677

Fix date in log.
Jerry DeLisle [Sun, 14 Jan 2018 21:46:43 +0000 (21:46 +0000)]
Fix date in log.

From-SVN: r256676

Fix date in Changelog
Jerry DeLisle [Sun, 14 Jan 2018 21:00:29 +0000 (21:00 +0000)]
Fix date in Changelog

From-SVN: r256674

Correct ChangeLog of x86: Add -mfunction-return=
H.J. Lu [Sun, 14 Jan 2018 20:57:36 +0000 (12:57 -0800)]
Correct ChangeLog of x86: Add -mfunction-return=

From-SVN: r256673

Correct ChangeLog of x86: Add -mindirect-branch=
H.J. Lu [Sun, 14 Jan 2018 20:56:07 +0000 (12:56 -0800)]
Correct ChangeLog of x86: Add -mindirect-branch=

From-SVN: r256672

re PR libfortran/83811 (fortran 'e' format broken for single digit exponents)
Jerry DeLisle [Sun, 14 Jan 2018 17:36:29 +0000 (17:36 +0000)]
re PR libfortran/83811 (fortran 'e' format broken for single digit exponents)

2018-01-18  Jerry DeLisle  <jvdelisle@gcc.gnu.org>

        PR libgfortran/83811
        * write.c (select_buffer): Adjust buffer size up by 1.

        * gfortran.dg/fmt_e.f90: New test.

From-SVN: r256669

re PR libstdc++/81092 (Missing symbols for new std::wstring constructors)
Andreas Schwab [Sun, 14 Jan 2018 17:32:20 +0000 (17:32 +0000)]
re PR libstdc++/81092 (Missing symbols for new std::wstring constructors)

PR libstdc++/81092
* config/abi/post/ia64-linux-gnu/baseline_symbols.txt: Update.

From-SVN: r256668

config.gcc (i[34567]86-*-*): Remove one duplicate gfniintrin.h entry from extra_headers.
Jakub Jelinek [Sun, 14 Jan 2018 16:19:14 +0000 (17:19 +0100)]
config.gcc (i[34567]86-*-*): Remove one duplicate gfniintrin.h entry from extra_headers.

* config.gcc (i[34567]86-*-*): Remove one duplicate gfniintrin.h
entry from extra_headers.
(x86_64-*-*): Remove two duplicate gfniintrin.h entries from
extra_headers, make the list bitwise identical to the i?86-*-* one.

From-SVN: r256667

x86: Disallow -mindirect-branch=/-mfunction-return= with -mcmodel=large
H.J. Lu [Sun, 14 Jan 2018 14:43:10 +0000 (14:43 +0000)]
x86: Disallow -mindirect-branch=/-mfunction-return= with -mcmodel=large

Since the thunk function may not be reachable in large code model,
-mcmodel=large is incompatible with -mindirect-branch=thunk,
-mindirect-branch=thunk-extern, -mfunction-return=thunk and
-mfunction-return=thunk-extern.  Issue an error when they are used with
-mcmodel=large.

gcc/

* config/i386/i386.c (ix86_set_indirect_branch_type): Disallow
-mcmodel=large with -mindirect-branch=thunk,
-mindirect-branch=thunk-extern, -mfunction-return=thunk and
-mfunction-return=thunk-extern.
* doc/invoke.texi: Document -mcmodel=large is incompatible with
-mindirect-branch=thunk, -mindirect-branch=thunk-extern,
-mfunction-return=thunk and -mfunction-return=thunk-extern.

gcc/testsuite/

* gcc.target/i386/indirect-thunk-10.c: New test.
* gcc.target/i386/indirect-thunk-8.c: Likewise.
* gcc.target/i386/indirect-thunk-9.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-10.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-11.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-9.c: Likewise.
* gcc.target/i386/ret-thunk-17.c: Likewise.
* gcc.target/i386/ret-thunk-18.c: Likewise.
* gcc.target/i386/ret-thunk-19.c: Likewise.
* gcc.target/i386/ret-thunk-20.c: Likewise.
* gcc.target/i386/ret-thunk-21.c: Likewise.

From-SVN: r256664

x86: Add 'V' register operand modifier
H.J. Lu [Sun, 14 Jan 2018 14:41:25 +0000 (14:41 +0000)]
x86: Add 'V' register operand modifier

Add 'V', a special modifier which prints the name of the full integer
register without '%'.  For

extern void (*func_p) (void);

void
foo (void)
{
  asm ("call __x86_indirect_thunk_%V0" : : "a" (func_p));
}

it generates:

foo:
movq func_p(%rip), %rax
call __x86_indirect_thunk_rax
ret

gcc/

* config/i386/i386.c (print_reg): Print the name of the full
integer register without '%'.
(ix86_print_operand): Handle 'V'.
* doc/extend.texi: Document 'V' modifier.

gcc/testsuite/

* gcc.target/i386/indirect-thunk-register-4.c: New test.

From-SVN: r256663

x86: Add -mindirect-branch-register
H.J. Lu [Sun, 14 Jan 2018 14:40:01 +0000 (14:40 +0000)]
x86: Add -mindirect-branch-register

Add -mindirect-branch-register to force indirect branch via register.
This is implemented by disabling patterns of indirect branch via memory,
similar to TARGET_X32.

-mindirect-branch= and -mfunction-return= tests are updated with
-mno-indirect-branch-register to avoid false test failures when
-mindirect-branch-register is added to RUNTESTFLAGS for "make check".

gcc/

* config/i386/constraints.md (Bs): Disallow memory operand for
-mindirect-branch-register.
(Bw): Likewise.
* config/i386/predicates.md (indirect_branch_operand): Likewise.
(GOT_memory_operand): Likewise.
(call_insn_operand): Likewise.
(sibcall_insn_operand): Likewise.
(GOT32_symbol_operand): Likewise.
* config/i386/i386.md (indirect_jump): Call convert_memory_address
for -mindirect-branch-register.
(tablejump): Likewise.
(*sibcall_memory): Likewise.
(*sibcall_value_memory): Likewise.
Disallow peepholes of indirect call and jump via memory for
-mindirect-branch-register.
(*call_pop): Replace m with Bw.
(*call_value_pop): Likewise.
(*sibcall_pop_memory): Replace m with Bs.
* config/i386/i386.opt (mindirect-branch-register): New option.
* doc/invoke.texi: Document -mindirect-branch-register option.

gcc/testsuite/

* gcc.target/i386/indirect-thunk-1.c (dg-options): Add
-mno-indirect-branch-register.
* gcc.target/i386/indirect-thunk-2.c: Likewise.
* gcc.target/i386/indirect-thunk-3.c: Likewise.
* gcc.target/i386/indirect-thunk-4.c: Likewise.
* gcc.target/i386/indirect-thunk-5.c: Likewise.
* gcc.target/i386/indirect-thunk-6.c: Likewise.
* gcc.target/i386/indirect-thunk-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-1.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-2.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-3.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-4.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-5.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-6.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-7.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-1.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-2.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-3.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-1.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-2.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-3.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-5.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-6.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-7.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-1.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-2.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-3.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-4.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-5.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-6.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-7.c: Likewise.
* gcc.target/i386/ret-thunk-10.c: Likewise.
* gcc.target/i386/ret-thunk-11.c: Likewise.
* gcc.target/i386/ret-thunk-12.c: Likewise.
* gcc.target/i386/ret-thunk-13.c: Likewise.
* gcc.target/i386/ret-thunk-14.c: Likewise.
* gcc.target/i386/ret-thunk-15.c: Likewise.
* gcc.target/i386/ret-thunk-9.c: Likewise.
* gcc.target/i386/indirect-thunk-register-1.c: New test.
* gcc.target/i386/indirect-thunk-register-2.c: Likewise.
* gcc.target/i386/indirect-thunk-register-3.c: Likewise.

From-SVN: r256662

x86: Add -mfunction-return=
H.J. Lu [Sun, 14 Jan 2018 14:37:39 +0000 (14:37 +0000)]
x86: Add -mfunction-return=

Add -mfunction-return= option to convert function return to call and
return thunks.  The default is 'keep', which keeps function return
unmodified.  'thunk' converts function return to call and return thunk.
'thunk-inline' converts function return to inlined call and return thunk.
'thunk-extern' converts function return to external call and return
thunk provided in a separate object file.  You can control this behavior
for a specific function by using the function attribute function_return.

Function return thunk is the same as memory thunk for -mindirect-branch=
where the return address is at the top of the stack:

__x86_return_thunk:
call L2
L1:
pause
lfence
jmp L1
L2:
lea 8(%rsp), %rsp|lea 4(%esp), %esp
ret

and function return becomes

jmp __x86_return_thunk

-mindirect-branch= tests are updated with -mfunction-return=keep to
avoid false test failures when -mfunction-return=thunk is added to
RUNTESTFLAGS for "make check".
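
Per-function control via the attribute, as a minimal sketch:

/* This function's return is rewritten to: jmp __x86_return_thunk.  */
__attribute__ ((function_return ("thunk")))
void
hardened (void)
{
}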

gcc/

* config/i386/i386-protos.h (ix86_output_function_return): New.
* config/i386/i386.c (ix86_set_indirect_branch_type): Also
set function_return_type.
(indirect_thunk_name): Add ret_p to indicate thunk for function
return.
(output_indirect_thunk_function): Pass false to
indirect_thunk_name.
(ix86_output_indirect_branch): Likewise.
(output_indirect_thunk_function): Create alias for function
return thunk if regno < 0.
(ix86_output_function_return): New function.
(ix86_handle_fndecl_attribute): Handle function_return.
(ix86_attribute_table): Add function_return.
* config/i386/i386.h (machine_function): Add
function_return_type.
* config/i386/i386.md (simple_return_internal): Use
ix86_output_function_return.
(simple_return_internal_long): Likewise.
* config/i386/i386.opt (mfunction-return=): New option.
(indirect_branch): Mention -mfunction-return=.
* doc/extend.texi: Document function_return function attribute.
* doc/invoke.texi: Document -mfunction-return= option.

gcc/testsuite/

* gcc.target/i386/indirect-thunk-1.c (dg-options): Add
-mfunction-return=keep.
* gcc.target/i386/indirect-thunk-2.c: Likewise.
* gcc.target/i386/indirect-thunk-3.c: Likewise.
* gcc.target/i386/indirect-thunk-4.c: Likewise.
* gcc.target/i386/indirect-thunk-5.c: Likewise.
* gcc.target/i386/indirect-thunk-6.c: Likewise.
* gcc.target/i386/indirect-thunk-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-1.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-2.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-3.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-4.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-5.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-6.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-8.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-1.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-2.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-3.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-1.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-2.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-3.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-5.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-6.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-7.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-1.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-2.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-3.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-4.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-5.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-6.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-7.c: Likewise.
* gcc.target/i386/ret-thunk-1.c: New test.
* gcc.target/i386/ret-thunk-10.c: Likewise.
* gcc.target/i386/ret-thunk-11.c: Likewise.
* gcc.target/i386/ret-thunk-12.c: Likewise.
* gcc.target/i386/ret-thunk-13.c: Likewise.
* gcc.target/i386/ret-thunk-14.c: Likewise.
* gcc.target/i386/ret-thunk-15.c: Likewise.
* gcc.target/i386/ret-thunk-16.c: Likewise.
* gcc.target/i386/ret-thunk-2.c: Likewise.
* gcc.target/i386/ret-thunk-3.c: Likewise.
* gcc.target/i386/ret-thunk-4.c: Likewise.
* gcc.target/i386/ret-thunk-5.c: Likewise.
* gcc.target/i386/ret-thunk-6.c: Likewise.
* gcc.target/i386/ret-thunk-7.c: Likewise.
* gcc.target/i386/ret-thunk-8.c: Likewise.
* gcc.target/i386/ret-thunk-9.c: Likewise.

From-SVN: r256661

x86: Add -mindirect-branch=
H.J. Lu [Sun, 14 Jan 2018 14:35:19 +0000 (14:35 +0000)]
x86: Add -mindirect-branch=

Add -mindirect-branch= option to convert indirect call and jump to call
and return thunks.  The default is 'keep', which keeps indirect call and
jump unmodified.  'thunk' converts indirect call and jump to call and
return thunk.  'thunk-inline' converts indirect call and jump to inlined
call and return thunk.  'thunk-extern' converts indirect call and jump to
external call and return thunk provided in a separate object file.  You
can control this behavior for a specific function by using the function
attribute indirect_branch.

Two kinds of thunks are generated.  Memory thunk where the function address
is at the top of the stack:

__x86_indirect_thunk:
call L2
L1:
pause
lfence
jmp L1
L2:
lea 8(%rsp), %rsp|lea 4(%esp), %esp
ret

Indirect jmp via memory, "jmp mem", is converted to

push memory
jmp __x86_indirect_thunk

Indirect call via memory, "call mem", is converted to

jmp L2
L1:
push [mem]
jmp __x86_indirect_thunk
L2:
call L1

Register thunk where the function address is in a register, reg:

__x86_indirect_thunk_reg:
call L2
L1:
pause
lfence
jmp L1
L2:
movq %reg, (%rsp)|movl %reg, (%esp)
ret

where reg is one of (r|e)ax, (r|e)dx, (r|e)cx, (r|e)bx, (r|e)si, (r|e)di,
(r|e)bp, r8, r9, r10, r11, r12, r13, r14 and r15.

Indirect jmp via register, "jmp reg", is converted to

jmp __x86_indirect_thunk_reg

Indirect call via register, "call reg", is converted to

call __x86_indirect_thunk_reg
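
As an illustrative sketch (not part of the patch itself; the function
name is made up), the per-function control is expected to look like
this, with the attribute value mirroring the -mindirect-branch= keywords:

  typedef void (*callback_t) (void);

  /* Force call/return thunks for this function only; the rest of
     the translation unit keeps the default ('keep').  */
  __attribute__ ((indirect_branch ("thunk")))
  void dispatch (callback_t cb)
  {
    cb ();  /* expected to become a call through __x86_indirect_thunk_*  */
  }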

gcc/

* config/i386/i386-opts.h (indirect_branch): New.
* config/i386/i386-protos.h (ix86_output_indirect_jmp): Likewise.
* config/i386/i386.c (ix86_using_red_zone): Disallow red-zone
with local indirect jump when converting indirect call and jump.
(ix86_set_indirect_branch_type): New.
(ix86_set_current_function): Call ix86_set_indirect_branch_type.
(indirectlabelno): New.
(indirect_thunk_needed): Likewise.
(indirect_thunk_bnd_needed): Likewise.
(indirect_thunks_used): Likewise.
(indirect_thunks_bnd_used): Likewise.
(INDIRECT_LABEL): Likewise.
(indirect_thunk_name): Likewise.
(output_indirect_thunk): Likewise.
(output_indirect_thunk_function): Likewise.
(ix86_output_indirect_branch): Likewise.
(ix86_output_indirect_jmp): Likewise.
(ix86_code_end): Call output_indirect_thunk_function if needed.
(ix86_output_call_insn): Call ix86_output_indirect_branch if
needed.
(ix86_handle_fndecl_attribute): Handle indirect_branch.
(ix86_attribute_table): Add indirect_branch.
* config/i386/i386.h (machine_function): Add indirect_branch_type
and has_local_indirect_jump.
* config/i386/i386.md (indirect_jump): Set has_local_indirect_jump
to true.
(tablejump): Likewise.
(*indirect_jump): Use ix86_output_indirect_jmp.
(*tablejump_1): Likewise.
(simple_return_indirect_internal): Likewise.
* config/i386/i386.opt (mindirect-branch=): New option.
(indirect_branch): New.
(keep): Likewise.
(thunk): Likewise.
(thunk-inline): Likewise.
(thunk-extern): Likewise.
* doc/extend.texi: Document indirect_branch function attribute.
* doc/invoke.texi: Document -mindirect-branch= option.

gcc/testsuite/

* gcc.target/i386/indirect-thunk-1.c: New test.
* gcc.target/i386/indirect-thunk-2.c: Likewise.
* gcc.target/i386/indirect-thunk-3.c: Likewise.
* gcc.target/i386/indirect-thunk-4.c: Likewise.
* gcc.target/i386/indirect-thunk-5.c: Likewise.
* gcc.target/i386/indirect-thunk-6.c: Likewise.
* gcc.target/i386/indirect-thunk-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-1.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-2.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-3.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-4.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-5.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-6.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-7.c: Likewise.
* gcc.target/i386/indirect-thunk-attr-8.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-1.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-2.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-3.c: Likewise.
* gcc.target/i386/indirect-thunk-bnd-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-1.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-2.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-3.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-4.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-5.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-6.c: Likewise.
* gcc.target/i386/indirect-thunk-extern-7.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-1.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-2.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-3.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-4.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-5.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-6.c: Likewise.
* gcc.target/i386/indirect-thunk-inline-7.c: Likewise.

From-SVN: r256660

6 years agore PR ipa/83051 (ICE on valid code at -O3: in edge_badness, at ipa-inline.c:1024)
Jan Hubicka [Sun, 14 Jan 2018 11:20:31 +0000 (12:20 +0100)]
re PR ipa/83051 (ICE on valid code at -O3: in edge_badness, at ipa-inline.c:1024)

PR ipa/83051
* gcc.c-torture/compile/pr83051.c: New testcase.
* ipa-inline.c (edge_badness): Tolerate roundoff errors.

From-SVN: r256659

6 years agoinline_small_functions speedup
Richard Sandiford [Sun, 14 Jan 2018 10:56:56 +0000 (10:56 +0000)]
inline_small_functions speedup

After inlining A into B, inline_small_functions updates the information
for (most) callees and callers of the new B:

  update_callee_keys (&edge_heap, where, updated_nodes);
      [...]
      /* Our profitability metric can depend on local properties
         such as number of inlinable calls and size of the function body.
         After inlining these properties might change for the function we
         inlined into (since it's body size changed) and for the functions
         called by function we inlined (since number of it inlinable callers
         might change).  */
      update_caller_keys (&edge_heap, where, updated_nodes, NULL);

These functions in turn call can_inline_edge_p for most of the associated
edges:

    if (can_inline_edge_p (edge, false)
        && want_inline_small_function_p (edge, false))
      update_edge_key (heap, edge);

can_inline_edge_p indirectly calls estimate_calls_size_and_time
on the caller node, which seems to recursively process all callee
edges rooted at the node.  It looks from this like the algorithm
can be at least quadratic in the worst case.

Maybe there's something we can do to make can_inline_edge_p cheaper, but
since neither of these two calls is responsible for reporting an inline
failure reason, it seems cheaper to test want_inline_small_function_p
first, so that we don't calculate an estimate for something that we
already know isn't a "small function".  I think the only change
needed to make that work is to check for CIF_FINAL_ERROR in
want_inline_small_function_p; at the moment we rely on can_inline_edge_p
to make that check.

This cuts the time to build optabs.ii by over 4% with an
--enable-checking=release compiler on x86_64-linux-gnu.  I've seen more
dramatic wins on aarch64-linux-gnu due to the NUM_POLY_INT_COEFFS==2
thing.  The patch doesn't affect the output code.
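
As a simplified sketch (not the literal patch text), the effect on the
key-update test is to put the cheap guard first:

    /* Cheap test first: want_inline_small_function_p now rejects
       CIF_FINAL_ERROR itself, so the expensive estimate inside
       can_inline_edge_p is skipped for non-small functions.  */
    if (want_inline_small_function_p (edge, false)
        && can_inline_edge_p (edge, false))
      update_edge_key (heap, edge);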

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
* ipa-inline.c (want_inline_small_function_p): Return false if
inlining has already failed with CIF_FINAL_ERROR.
(update_caller_keys): Call want_inline_small_function_p before
can_inline_edge_p.
(update_callee_keys): Likewise.

From-SVN: r256658

6 years agore PR tree-optimization/83501 (strlen(a) not folded after strcpy(a, "..."))
Prathamesh Kulkarni [Sun, 14 Jan 2018 08:58:58 +0000 (08:58 +0000)]
re PR tree-optimization/83501 (strlen(a) not folded after strcpy(a, "..."))

2018-01-14  Prathamesh Kulkarni  <prathamesh.kulkarni@linaro.org>

PR tree-optimization/83501
* gcc.dg/strlenopt-39.c: Restrict to i?86 and x86_64-*-* targets.

From-SVN: r256657

6 years agors6000-p8swap.c (rs6000_sum_of_two_registers_p): New function.
Kelvin Nilsen [Sun, 14 Jan 2018 05:19:29 +0000 (05:19 +0000)]
rs6000-p8swap.c (rs6000_sum_of_two_registers_p): New function.

gcc/ChangeLog:

2018-01-10  Kelvin Nilsen  <kelvin@gcc.gnu.org>

* config/rs6000/rs6000-p8swap.c (rs6000_sum_of_two_registers_p):
New function.
(rs6000_quadword_masked_address_p): Likewise.
(quad_aligned_load_p): Likewise.
(quad_aligned_store_p): Likewise.
(const_load_sequence_p): Add comment to describe the outer-most loop.
(mimic_memory_attributes_and_flags): New function.
(rs6000_gen_stvx): Likewise.
(replace_swapped_aligned_store): Likewise.
(rs6000_gen_lvx): Likewise.
(replace_swapped_aligned_load): Likewise.
(replace_swapped_load_constant): Capitalize argument name in
comment describing this function.
(rs6000_analyze_swaps): Add a third pass to search for vector loads
and stores that access quad-word aligned addresses and replace
with stvx or lvx instructions when appropriate.
* config/rs6000/rs6000-protos.h (rs6000_sum_of_two_registers_p):
New function prototype.
(rs6000_quadword_masked_address_p): Likewise.
(rs6000_gen_lvx): Likewise.
(rs6000_gen_stvx): Likewise.
* config/rs6000/vsx.md (*vsx_le_perm_load_<mode>): For modes
VSX_D (V2DF, V2DI), modify this split to select lvx instruction
when memory address is aligned.
(*vsx_le_perm_load_<mode>): For modes VSX_W (V4SF, V4SI), modify
this split to select lvx instruction when memory address is aligned.
(*vsx_le_perm_load_v8hi): Modify this split to select lvx
instruction when memory address is aligned.
(*vsx_le_perm_load_v16qi): Likewise.
(four unnamed splitters): Modify to select the stvx instruction
when memory is aligned.

gcc/testsuite/ChangeLog:

2018-01-10  Kelvin Nilsen  <kelvin@gcc.gnu.org>

* gcc.target/powerpc/pr48857.c: Modify dejagnu directives to look
for lvx and stvx instead of lxvd2x and stxvd2x and require
little-endian target.  Add comments.
* gcc.target/powerpc/swaps-p8-28.c: Add functions for more
comprehensive testing.
* gcc.target/powerpc/swaps-p8-29.c: Likewise.
* gcc.target/powerpc/swaps-p8-30.c: Likewise.
* gcc.target/powerpc/swaps-p8-31.c: Likewise.
* gcc.target/powerpc/swaps-p8-32.c: Likewise.
* gcc.target/powerpc/swaps-p8-33.c: Likewise.
* gcc.target/powerpc/swaps-p8-34.c: Likewise.
* gcc.target/powerpc/swaps-p8-35.c: Likewise.
* gcc.target/powerpc/swaps-p8-36.c: Likewise.
* gcc.target/powerpc/swaps-p8-37.c: Likewise.
* gcc.target/powerpc/swaps-p8-38.c: Likewise.
* gcc.target/powerpc/swaps-p8-39.c: Likewise.
* gcc.target/powerpc/swaps-p8-40.c: Likewise.
* gcc.target/powerpc/swaps-p8-41.c: Likewise.
* gcc.target/powerpc/swaps-p8-42.c: Likewise.
* gcc.target/powerpc/swaps-p8-43.c: Likewise.
* gcc.target/powerpc/swaps-p8-44.c: Likewise.
* gcc.target/powerpc/swaps-p8-45.c: Likewise.
* gcc.target/powerpc/vec-extract-2.c: Add comment and remove
scan-assembler-not directives that forbid lvx and xxpermdi.
* gcc.target/powerpc/vec-extract-3.c: Likewise.
* gcc.target/powerpc/vec-extract-5.c: Likewise.
* gcc.target/powerpc/vec-extract-6.c: Likewise.
* gcc.target/powerpc/vec-extract-7.c: Likewise.
* gcc.target/powerpc/vec-extract-8.c: Likewise.
* gcc.target/powerpc/vec-extract-9.c: Likewise.
* gcc.target/powerpc/vsx-vector-6-le.c: Change
scan-assembler-times directives to reflect different numbers of
expected xxlnor, xxlor, xvcmpgtdp, and xxland instructions.

libcpp/ChangeLog:

2018-01-10  Kelvin Nilsen  <kelvin@gcc.gnu.org>

* lex.c (search_line_fast): Remove illegal coercion of an
unaligned pointer value to vector pointer type and replace with
use of __builtin_vec_vsx_ld () built-in function, which operates
on unaligned pointer values.

From-SVN: r256656

6 years agogo/types: implement SizesFor for gccgo
Ian Lance Taylor [Sun, 14 Jan 2018 04:59:01 +0000 (04:59 +0000)]
go/types: implement SizesFor for gccgo

    Move the architecture-specific settings out of configure.ac into a new
    shell script goarch.sh.  Use the new script to collect the values for
    all architectures to make them available in go/types.

    Also fix cmd/vet to pass the right compiler when it calls SizesFor.

    This fixes cmd/vet for systems that are not implemented in the gc
    toolchain, such as alpha and ia64.

    Reviewed-on: https://go-review.googlesource.com/87635

From-SVN: r256655

6 years agore PR libstdc++/83601 (std::regex_replace C++14 conformance issue: escaping in SED...
Tim Shen [Sun, 14 Jan 2018 00:48:30 +0000 (00:48 +0000)]
re PR libstdc++/83601 (std::regex_replace C++14 conformance issue: escaping in SED mode)

PR libstdc++/83601
* include/bits/regex.tcc (regex_replace): Fix escaping in sed.
* testsuite/28_regex/algorithms/regex_replace/char/pr83601.cc: Tests.
* testsuite/28_regex/algorithms/regex_replace/wchar_t/pr83601.cc: Tests.

From-SVN: r256654

6 years agoDaily bump.
GCC Administrator [Sun, 14 Jan 2018 00:16:15 +0000 (00:16 +0000)]
Daily bump.

From-SVN: r256653

6 years agoAllow for lack of VM_MEMORY_OS_ALLOC_ONCE on Mac OS X (PR sanitizer/82824)
Rainer Orth [Sat, 13 Jan 2018 21:01:27 +0000 (21:01 +0000)]
Allow for lack of VM_MEMORY_OS_ALLOC_ONCE on Mac OS X (PR sanitizer/82824)

PR sanitizer/82824
* lsan/lsan_common_mac.cc: Cherry-pick upstream r322437.

From-SVN: r256650

6 years agore PR fortran/82007 (DTIO write format stored in a string leads to severe errors)
Jerry DeLisle [Sat, 13 Jan 2018 20:41:00 +0000 (20:41 +0000)]
re PR fortran/82007 (DTIO write format stored in a string leads to severe errors)

2018-01-13  Jerry DeLisle  <jvdelisle@gcc.gnu.org>

        PR fortran/82007
        * resolve.c (resolve_transfer): Delete code looking for 'DT'
        format specifiers in format strings. Set formatted to true if a
        format string or format label is present.
        * trans-io.c (get_dtio_proc): Likewise. (transfer_expr): Fix
        whitespace.

From-SVN: r256649

6 years agopredict.c (determine_unlikely_bbs): Handle correctly BBs which appears in the queue...
Jan Hubicka [Sat, 13 Jan 2018 19:32:04 +0000 (20:32 +0100)]
predict.c (determine_unlikely_bbs): Handle correctly BBs which appears in the queue multiple times.

* predict.c (determine_unlikely_bbs): Handle correctly BBs
which appears in the queue multiple times.

From-SVN: r256648

6 years agore PR fortran/83744 (ICE in ../../gcc/gcc/fortran/dump-parse-tree.c:3093 while using...
Thomas Koenig [Sat, 13 Jan 2018 18:22:36 +0000 (18:22 +0000)]
re PR fortran/83744 (ICE in ../../gcc/gcc/fortran/dump-parse-tree.c:3093 while using -fc-prototypes)

2018-01-13  Thomas Koenig <tkoenig@gcc.gnu.org>

PR fortran/83744
* dump-parse-tree.c (get_c_type_name): Remove extra line.
Change for loop to use declaration in for loop. Handle BT_LOGICAL
and BT_CHARACTER.
(write_decl): Add where argument. Fix indentation. Replace
assert with error message. Add typename to warning
in comment.
(write_type): Adjust locus to call of write_decl.
(write_variable): Likewise.
(write_proc): Likewise. Replace assert with error message.

From-SVN: r256645

6 years agoSupport for aliasing with variable strides
Richard Sandiford [Sat, 13 Jan 2018 18:02:10 +0000 (18:02 +0000)]
Support for aliasing with variable strides

This patch adds runtime alias checks for loops with variable strides,
so that we can vectorise them even without a restrict qualifier.
There are several parts to doing this:

1) For accesses like:

     x[i * n] += 1;

   we need to check whether n (and thus the DR_STEP) is nonzero.
   vect_analyze_data_ref_dependence records values that need to be
   checked in this way, then prune_runtime_alias_test_list records a
   bounds check on DR_STEP being outside the range [0, 0].

2) For accesses like:

     x[i * n] = x[i * n + 1] + 1;

   we simply need to test whether abs (n) >= 2.
   prune_runtime_alias_test_list looks for cases like this and tries
   to guess whether it is better to use this kind of check or a check
   for non-overlapping ranges.  (We could do an OR of the two conditions
   at runtime, but that isn't implemented yet.)

3) Checks for overlapping ranges need to cope with variable strides.
   At present the "length" of each segment in a range check is
   represented as an offset from the base that lies outside the
   touched range, in the same direction as DR_STEP.  The length
   can therefore be negative and is sometimes conservative.

   With variable steps it's easier to reason about if we split
   this into two:

     seg_len:
       distance travelled from the first iteration of interest
       to the last, e.g. DR_STEP * (VF - 1)

     access_size:
       the number of bytes accessed in each iteration

   with access_size always being a positive constant and seg_len
   possibly being variable.  We can then combine alias checks
   for two accesses that are a constant number of bytes apart by
   adjusting the access size to account for the gap.  This leaves
   the segment length unchanged, which allows the check to be combined
   with further accesses.

   When seg_len is positive, the runtime alias check has the form:

        base_a >= base_b + seg_len_b + access_size_b
     || base_b >= base_a + seg_len_a + access_size_a

   In many accesses the base will be aligned to the access size, which
   allows us to skip the addition:

        base_a > base_b + seg_len_b
     || base_b > base_a + seg_len_a

   A similar saving is possible with "negative" lengths.

   The patch therefore tracks the alignment in addition to seg_len
   and access_size.
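
As a concrete example of case 2 above (illustrative, with made-up
names), a loop like this can now be vectorised without restrict,
guarded by a runtime check that abs (n) >= 2:

  void
  f (int *x, int n, int count)
  {
    for (int i = 0; i < count; ++i)
      x[i * n] = x[i * n + 1] + 1;
  }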

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vectorizer.h (vec_lower_bound): New structure.
(_loop_vec_info): Add check_nonzero and lower_bounds.
(LOOP_VINFO_CHECK_NONZERO): New macro.
(LOOP_VINFO_LOWER_BOUNDS): Likewise.
(LOOP_REQUIRES_VERSIONING_FOR_ALIAS): Check lower_bounds too.
* tree-data-ref.h (dr_with_seg_len): Add access_size and align
fields.  Make seg_len the distance travelled, not including the
access size.
(dr_direction_indicator): Declare.
(dr_zero_step_indicator): Likewise.
(dr_known_forward_stride_p): Likewise.
* tree-data-ref.c: Include stringpool.h, tree-vrp.h and
tree-ssanames.h.
(runtime_alias_check_p): Allow runtime alias checks with
variable strides.
(operator ==): Compare access_size and align.
(prune_runtime_alias_test_list): Rework for new distinction between
the access_size and seg_len.
(create_intersect_range_checks_index): Likewise.  Cope with polynomial
segment lengths.
(get_segment_min_max): New function.
(create_intersect_range_checks): Use it.
(dr_step_indicator): New function.
(dr_direction_indicator): Likewise.
(dr_zero_step_indicator): Likewise.
(dr_known_forward_stride_p): Likewise.
* tree-loop-distribution.c (data_ref_segment_size): Return
DR_STEP * (niters - 1).
(compute_alias_check_pairs): Update call to the dr_with_seg_len
constructor.
* tree-vect-data-refs.c (vect_check_nonzero_value): New function.
(vect_preserves_scalar_order_p): New function, split out from...
(vect_analyze_data_ref_dependence): ...here.  Check for zero steps.
(vect_vfa_segment_size): Return DR_STEP * (length_factor - 1).
(vect_vfa_access_size): New function.
(vect_vfa_align): Likewise.
(vect_compile_time_alias): Take access_size_a and access_b arguments.
(dump_lower_bound): New function.
(vect_check_lower_bound): Likewise.
(vect_small_gap_p): Likewise.
(vectorizable_with_step_bound_p): Likewise.
(vect_prune_runtime_alias_test_list): Ignore cross-iteration
dependencies if the vectorization factor is 1.  Convert the checks
for nonzero steps into checks on the bounds of DR_STEP.  Try using
a bounds check for variable steps if the minimum required step is
relatively small.  Update calls to the dr_with_seg_len
constructor and to vect_compile_time_alias.
* tree-vect-loop-manip.c (vect_create_cond_for_lower_bounds): New
function.
(vect_loop_versioning): Call it.
* tree-vect-loop.c (vect_analyze_loop_2): Clear LOOP_VINFO_LOWER_BOUNDS
when retrying.
(vect_estimate_min_profitable_iters): Account for any bounds checks.

gcc/testsuite/
* gcc.dg/vect/bb-slp-cond-1.c: Expect loop vectorization rather
than SLP vectorization.
* gcc.dg/vect/vect-alias-check-10.c: New test.
* gcc.dg/vect/vect-alias-check-11.c: Likewise.
* gcc.dg/vect/vect-alias-check-12.c: Likewise.
* gcc.dg/vect/vect-alias-check-8.c: Likewise.
* gcc.dg/vect/vect-alias-check-9.c: Likewise.
* gcc.target/aarch64/sve/strided_load_8.c: Likewise.
* gcc.target/aarch64/sve/var_stride_1.c: Likewise.
* gcc.target/aarch64/sve/var_stride_1.h: Likewise.
* gcc.target/aarch64/sve/var_stride_1_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_2.c: Likewise.
* gcc.target/aarch64/sve/var_stride_2_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_3.c: Likewise.
* gcc.target/aarch64/sve/var_stride_3_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_4.c: Likewise.
* gcc.target/aarch64/sve/var_stride_4_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_5.c: Likewise.
* gcc.target/aarch64/sve/var_stride_5_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_6.c: Likewise.
* gcc.target/aarch64/sve/var_stride_6_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_7.c: Likewise.
* gcc.target/aarch64/sve/var_stride_7_run.c: Likewise.
* gcc.target/aarch64/sve/var_stride_8.c: Likewise.
* gcc.target/aarch64/sve/var_stride_8_run.c: Likewise.
* gfortran.dg/vect/vect-alias-check-1.F90: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256644

6 years agoAdd support for SVE scatter stores
Richard Sandiford [Sat, 13 Jan 2018 18:01:59 +0000 (18:01 +0000)]
Add support for SVE scatter stores

This is mostly a mechanical extension of the previous gather load
support to scatter stores.  The internal functions in this case are:

  IFN_SCATTER_STORE (base, offsets, scale, values)
  IFN_MASK_SCATTER_STORE (base, offsets, scale, values, mask)

However, one nonobvious change is to vect_analyze_data_ref_access.
If we're treating an access as a gather load or scatter store
(i.e. if STMT_VINFO_GATHER_SCATTER_P is true), the existing code
would create a dummy data_reference whose step is 0.  There's not
really much else it could do, since the whole point is that the
step isn't predictable from iteration to iteration.  We then
went into this code in vect_analyze_data_ref_access:

  /* Allow loads with zero step in inner-loop vectorization.  */
  if (loop_vinfo && integer_zerop (step))
    {
      GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = NULL;
      if (!nested_in_vect_loop_p (loop, stmt))
        return DR_IS_READ (dr);

I.e. we'd take the step literally and assume that this is a load
or store to an invariant address.  Loads from invariant addresses
are supported but stores to them aren't.

The code therefore had the effect of disabling all scatter stores.
AFAICT this is true of AVX too: although tests like avx512f-scatter-1.c
test for the correctness of a scatter-like loop, they don't seem to
check whether a scatter instruction is actually used.

The patch therefore makes vect_analyze_data_ref_access return true
for scatters.  We do seem to handle the aliasing correctly;
that's tested by other functions, and is symmetrical to the
already-working gather case.
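
For illustration (a sketch with made-up names, not one of the new
tests), the kind of loop a scatter store covers:

  void
  scatter (double *dst, int *idx, double *src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[idx[i]] = src[i];  /* store to a non-contiguous address  */
  }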

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/sourcebuild.texi (vect_scatter_store): Document.
* optabs.def (scatter_store_optab, mask_scatter_store_optab): New
optabs.
* doc/md.texi (scatter_store@var{m}, mask_scatter_store@var{m}):
Document.
* genopinit.c (main): Add supports_vec_scatter_store and
supports_vec_scatter_store_cached to target_optabs.
* gimple.h (gimple_expr_type): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
* internal-fn.def (SCATTER_STORE, MASK_SCATTER_STORE): New internal
functions.
* internal-fn.h (internal_store_fn_p): Declare.
(internal_fn_stored_value_index): Likewise.
* internal-fn.c (scatter_store_direct): New macro.
(expand_scatter_store_optab_fn): New function.
(direct_scatter_store_optab_supported_p): New macro.
(internal_store_fn_p): New function.
(internal_gather_scatter_fn_p): Handle IFN_SCATTER_STORE and
IFN_MASK_SCATTER_STORE.
(internal_fn_mask_index): Likewise.
(internal_fn_stored_value_index): New function.
(internal_gather_scatter_fn_supported_p): Adjust operand numbers
for scatter stores.
* optabs-query.h (supports_vec_scatter_store_p): Declare.
* optabs-query.c (supports_vec_scatter_store_p): New function.
* tree-vectorizer.h (vect_get_store_rhs): Declare.
* tree-vect-data-refs.c (vect_analyze_data_ref_access): Return
true for scatter stores.
(vect_gather_scatter_fn_p): Handle scatter stores too.
(vect_check_gather_scatter): Consider using scatter stores if
supports_vec_scatter_store_p.
* tree-vect-patterns.c (vect_try_gather_scatter_pattern): Handle
scatter stores too.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_stored_value_index.
(check_load_store_masking): Handle scatter stores too.
(vect_get_store_rhs): Make public.
(vectorizable_call): Use internal_store_fn_p.
(vectorizable_store): Handle scatter store internal functions.
(vect_transform_stmt): Compare GROUP_STORE_COUNT with GROUP_SIZE
when deciding whether the end of the group has been reached.
* config/aarch64/aarch64.md (UNSPEC_ST1_SCATTER): New unspec.
* config/aarch64/aarch64-sve.md (scatter_store<mode>): New expander.
(mask_scatter_store<mode>): New insns.

gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_scatter_store):
New proc.
* gcc.dg/vect/pr25413a.c: Expect both loops to be optimized on
targets with scatter stores.
* gcc.dg/vect/vect-71.c: Restrict XFAIL to targets without scatter
stores.
* gcc.target/aarch64/sve/mask_scatter_store_1.c: New test.
* gcc.target/aarch64/sve/mask_scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_1.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_2.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_3.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_4.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_5.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_6.c: Likewise.
* gcc.target/aarch64/sve/scatter_store_7.c: Likewise.
* gcc.target/aarch64/sve/strided_store_1.c: Likewise.
* gcc.target/aarch64/sve/strided_store_2.c: Likewise.
* gcc.target/aarch64/sve/strided_store_3.c: Likewise.
* gcc.target/aarch64/sve/strided_store_4.c: Likewise.
* gcc.target/aarch64/sve/strided_store_5.c: Likewise.
* gcc.target/aarch64/sve/strided_store_6.c: Likewise.
* gcc.target/aarch64/sve/strided_store_7.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256643

6 years agoAllow gather loads to be used for grouped accesses
Richard Sandiford [Sat, 13 Jan 2018 18:01:49 +0000 (18:01 +0000)]
Allow gather loads to be used for grouped accesses

Following on from the previous patch for strided accesses, this patch
allows gather loads to be used with grouped accesses, if we otherwise
would need to fall back to VMAT_ELEMENTWISE.  However, as the comment
says, this is restricted to single-element groups for now:

 ??? Although the code can handle all group sizes correctly,
 it probably isn't a win to use separate strided accesses based
 on nearby locations.  Or, even if it's a win over scalar code,
 it might not be a win over vectorizing at a lower VF, if that
 allows us to use contiguous accesses.

Single-element groups are an important special case though,
and this means that code is less sensitive to GCC's classification
of single accesses with constant steps as "grouped" and ones with
variable steps as "strided".
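
An illustrative single-element group with a constant step (names made
up) that can now use a gather load instead of VMAT_ELEMENTWISE:

  void
  f (float *restrict dst, float *src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[i * 5];  /* constant step, classified as "grouped"  */
  }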

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vectorizer.h (vect_gather_scatter_fn_p): Declare.
* tree-vect-data-refs.c (vect_gather_scatter_fn_p): Make public.
* tree-vect-stmts.c (vect_truncate_gather_scatter_offset): New
function.
(vect_use_strided_gather_scatters_p): Take a masked_p argument.
Use vect_truncate_gather_scatter_offset if we can't treat the
operation as a normal gather load or scatter store.
(get_group_load_store_type): Take the gather_scatter_info
as argument.  Try using a gather load or scatter store for
single-element groups.
(get_load_store_type): Update calls to get_group_load_store_type
and vect_use_strided_gather_scatters_p.

gcc/testsuite/
* gcc.target/aarch64/sve/reduc_strict_3.c: Expect FADDA to be used
for double_reduc1.
* gcc.target/aarch64/sve/strided_load_4.c: New test.
* gcc.target/aarch64/sve/strided_load_5.c: Likewise.
* gcc.target/aarch64/sve/strided_load_6.c: Likewise.
* gcc.target/aarch64/sve/strided_load_7.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256642

6 years agoUse gather loads for strided accesses
Richard Sandiford [Sat, 13 Jan 2018 18:01:42 +0000 (18:01 +0000)]
Use gather loads for strided accesses

This patch tries to use gather loads for strided accesses,
rather than falling back to VMAT_ELEMENTWISE.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vectorizer.h (vect_create_data_ref_ptr): Take an extra
optional tree argument.
* tree-vect-data-refs.c (vect_check_gather_scatter): Check for
null target hooks.
(vect_create_data_ref_ptr): Take the iv_step as an optional argument,
but continue to use the current value as a fallback.
(bump_vector_ptr): Use operand_equal_p rather than tree_int_cst_compare
to compare the updates.
* tree-vect-stmts.c (vect_use_strided_gather_scatters_p): New function.
(get_load_store_type): Use it when handling a strided access.
(vect_get_strided_load_store_ops): New function.
(vect_get_data_ptr_increment): Likewise.
(vectorizable_load): Handle strided gather loads.  Always pass
a step to vect_create_data_ref_ptr and bump_vector_ptr.

gcc/testsuite/
* gcc.target/aarch64/sve/strided_load_1.c: New test.
* gcc.target/aarch64/sve/strided_load_2.c: Likewise.
* gcc.target/aarch64/sve/strided_load_3.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256641

6 years agoAdd support for SVE gather loads
Richard Sandiford [Sat, 13 Jan 2018 18:01:34 +0000 (18:01 +0000)]
Add support for SVE gather loads

This patch adds support for SVE gather loads.  It uses basically
the same analysis code as the AVX gather support, but after that
there are two major differences:

- It uses new internal functions rather than target built-ins.
  The interface is:

     IFN_GATHER_LOAD (base, offsets, scale)
     IFN_MASK_GATHER_LOAD (base, offsets, scale, mask)

  which should be reasonably generic.  One of the advantages of
  using internal functions is that other passes can understand what
  the functions do, but a more immediate advantage is that we can
  query the underlying target pattern to see which scales it supports.

- It uses pattern recognition to convert the offset to the right width,
  if it was originally narrower than that.  This avoids having to do
  a widening operation as part of the gather expansion itself.
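
For illustration (a sketch with made-up names, not one of the new
tests), the kind of loop the internal functions cover:

  void
  gather (double *dst, double *src, long *idx, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[idx[i]];  /* load from a non-contiguous address  */
  }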

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/md.texi (gather_load@var{m}): Document.
(mask_gather_load@var{m}): Likewise.
* genopinit.c (main): Add supports_vec_gather_load and
supports_vec_gather_load_cached to target_optabs.
* optabs-tree.c (init_tree_optimization_optabs): Use
ggc_cleared_alloc to allocate target_optabs.
* optabs.def (gather_load_optab, mask_gather_load_optab): New optabs.
* internal-fn.def (GATHER_LOAD, MASK_GATHER_LOAD): New internal
functions.
* internal-fn.h (internal_load_fn_p): Declare.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* internal-fn.c (gather_load_direct): New macro.
(expand_gather_load_optab_fn): New function.
(direct_gather_load_optab_supported_p): New macro.
(direct_internal_fn_optab): New function.
(internal_load_fn_p): Likewise.
(internal_gather_scatter_fn_p): Likewise.
(internal_fn_mask_index): Likewise.
(internal_gather_scatter_fn_supported_p): Likewise.
* optabs-query.c (supports_at_least_one_mode_p): New function.
(supports_vec_gather_load_p): Likewise.
* optabs-query.h (supports_vec_gather_load_p): Declare.
* tree-vectorizer.h (gather_scatter_info): Add ifn, element_type
and memory_type field.
(NUM_PATTERNS): Bump to 15.
* tree-vect-data-refs.c: Include internal-fn.h.
(vect_gather_scatter_fn_p): New function.
(vect_describe_gather_scatter_call): Likewise.
(vect_check_gather_scatter): Try using internal functions for
gather loads.  Recognize existing calls to a gather load function.
(vect_analyze_data_refs): Consider using gather loads if
supports_vec_gather_load_p.
* tree-vect-patterns.c (vect_get_load_store_mask): New function.
(vect_get_gather_scatter_offset_type): Likewise.
(vect_convert_mask_for_vectype): Likewise.
(vect_add_conversion_to_patterm): Likewise.
(vect_try_gather_scatter_pattern): Likewise.
(vect_recog_gather_scatter_pattern): New pattern recognizer.
(vect_vect_recog_func_ptrs): Add it.
* tree-vect-stmts.c (exist_non_indexing_operands_for_use_p): Use
internal_fn_mask_index and internal_gather_scatter_fn_p.
(check_load_store_masking): Take the gather_scatter_info as an
argument and handle gather loads.
(vect_get_gather_scatter_ops): New function.
(vectorizable_call): Check internal_load_fn_p.
(vectorizable_load): Likewise.  Handle gather load internal
functions.
(vectorizable_store): Update call to check_load_store_masking.
* config/aarch64/aarch64.md (UNSPEC_LD1_GATHER): New unspec.
* config/aarch64/iterators.md (SVE_S, SVE_D): New mode iterators.
* config/aarch64/predicates.md (aarch64_gather_scale_operand_w)
(aarch64_gather_scale_operand_d): New predicates.
* config/aarch64/aarch64-sve.md (gather_load<mode>): New expander.
(mask_gather_load<mode>): New insns.

gcc/testsuite/
* gcc.target/aarch64/sve/gather_load_1.c: New test.
* gcc.target/aarch64/sve/gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/gather_load_7.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_1.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_2.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_3.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_4.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_5.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_6.c: Likewise.
* gcc.target/aarch64/sve/mask_gather_load_7.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256640

6 years agoAdd support for in-order addition reduction using SVE FADDA
Richard Sandiford [Sat, 13 Jan 2018 18:01:24 +0000 (18:01 +0000)]
Add support for in-order addition reduction using SVE FADDA

This patch adds support for in-order floating-point addition reductions,
which are suitable even in strict IEEE mode.

Previously vect_is_simple_reduction would reject any cases that forbid
reassociation.  The idea is instead to tentatively accept them as
"FOLD_LEFT_REDUCTIONs" and only fail later if there is no support
for them.  Although this patch only handles the particular case of plus
and minus on floating-point types, there's no reason in principle why
we couldn't handle other cases.

The reductions use a new fold_left_plus_optab if available, otherwise
they fall back to elementwise additions or subtractions.

The vect_force_simple_reduction change makes it easier for parloops
to read the type of reduction.
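
For example (illustrative only), a strict-IEEE accumulation like the
following can now be vectorised as a FOLD_LEFT_REDUCTION instead of
being rejected:

  double
  sum (double *x, int n)
  {
    double res = 0.0;
    for (int i = 0; i < n; ++i)
      res += x[i];  /* reassociation is not allowed without -ffast-math  */
    return res;
  }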

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* optabs.def (fold_left_plus_optab): New optab.
* doc/md.texi (fold_left_plus_@var{m}): Document.
* internal-fn.def (IFN_FOLD_LEFT_PLUS): New internal function.
* internal-fn.c (fold_left_direct): Define.
(expand_fold_left_optab_fn): Likewise.
(direct_fold_left_optab_supported_p): Likewise.
* fold-const-call.c (fold_const_fold_left): New function.
(fold_const_call): Use it to fold CFN_FOLD_LEFT_PLUS.
* tree-parloops.c (valid_reduction_p): New function.
(gather_scalar_reductions): Use it.
* tree-vectorizer.h (FOLD_LEFT_REDUCTION): New vect_reduction_type.
(vect_finish_replace_stmt): Declare.
* tree-vect-loop.c (fold_left_reduction_fn): New function.
(needs_fold_left_reduction_p): New function, split out from...
(vect_is_simple_reduction): ...here.  Accept reductions that
forbid reassociation, but give them type FOLD_LEFT_REDUCTION.
(vect_force_simple_reduction): Also store the reduction type in
the assignment's STMT_VINFO_REDUC_TYPE.
(vect_model_reduction_cost): Handle FOLD_LEFT_REDUCTION.
(merge_with_identity): New function.
(vect_expand_fold_left): Likewise.
(vectorize_fold_left_reduction): Likewise.
(vectorizable_reduction): Handle FOLD_LEFT_REDUCTION.  Leave the
scalar phi in place for it.  Check for target support and reject
cases that would reassociate the operation.  Defer the transform
phase to vectorize_fold_left_reduction.
* config/aarch64/aarch64.md (UNSPEC_FADDA): New unspec.
* config/aarch64/aarch64-sve.md (fold_left_plus_<mode>): New expander.
(*fold_left_plus_<mode>, *pred_fold_left_plus_<mode>): New insns.

gcc/testsuite/
* gcc.dg/vect/no-fast-math-vect16.c: Expect the test to pass and
check for a message about using in-order reductions.
* gcc.dg/vect/pr79920.c: Expect both loops to be vectorized and
check for a message about using in-order reductions.
* gcc.dg/vect/trapv-vect-reduc-4.c: Expect all three loops to be
vectorized and check for a message about using in-order reductions.
Expect targets with variable-length vectors to fall back to the
fixed-length minimum.
* gcc.dg/vect/vect-reduc-6.c: Expect the loop to be vectorized and
check for a message about using in-order reductions.
* gcc.dg/vect/vect-reduc-in-order-1.c: New test.
* gcc.dg/vect/vect-reduc-in-order-2.c: Likewise.
* gcc.dg/vect/vect-reduc-in-order-3.c: Likewise.
* gcc.dg/vect/vect-reduc-in-order-4.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_1.c: New test.
* gcc.target/aarch64/sve/reduc_strict_1_run.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_2.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_2_run.c: Likewise.
* gcc.target/aarch64/sve/reduc_strict_3.c: Likewise.
* gcc.target/aarch64/sve/slp_13.c: Add floating-point types.
* gfortran.dg/vect/vect-8.f90: Expect 22 loops to be vectorized if
vect_fold_left_plus.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256639

6 years agoRemove unnecessary temporary in tree-if-conv.c
Richard Sandiford [Sat, 13 Jan 2018 18:01:14 +0000 (18:01 +0000)]
Remove unnecessary temporary in tree-if-conv.c

The call to ifc_temp_var in predicate_mem_writes became redundant
in r230099.  Before that point the mask was calculated using
fold_build_*s, but now it's calculated by gimple_build and so
is already a valid gimple value.

As it stands, the call forces an SSA_NAME-to-SSA_NAME copy
to be created, whereas SLP expects that such redundant copies
have already been eliminated.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
* tree-if-conv.c (predicate_mem_writes): Remove redundant
call to ifc_temp_var.

From-SVN: r256638

6 years agoRework the legitimize_address_displacement hook
Richard Sandiford [Sat, 13 Jan 2018 18:00:59 +0000 (18:00 +0000)]
Rework the legitimize_address_displacement hook

This patch:

- tweaks the handling of legitimize_address_displacement
  so that it gets called before rather than after the address has
  been expanded.  This means that we're no longer at the mercy
  of LRA being able to interpret the expanded instructions.

- passes the original offset to legitimize_address_displacement.

- adds SVE support to the AArch64 implementation of
  legitimize_address_displacement.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* target.def (legitimize_address_displacement): Take the original
offset as a poly_int.
* targhooks.h (default_legitimize_address_displacement): Update
accordingly.
* targhooks.c (default_legitimize_address_displacement): Likewise.
* doc/tm.texi: Regenerate.
* lra-constraints.c (base_plus_disp_to_reg): Take the displacement
as an argument, moving assert of ad->disp == ad->disp_term to...
(process_address_1): ...here.  Update calls to base_plus_disp_to_reg.
Try calling targetm.legitimize_address_displacement before expanding
the address rather than afterwards, and adjust for the new interface.
* config/aarch64/aarch64.c (aarch64_legitimize_address_displacement):
Match the new hook interface.  Handle SVE addresses.
* config/sh/sh.c (sh_legitimize_address_displacement): Make the
new hook interface.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256637

6 years agoAdd an "early rematerialisation" pass
Richard Sandiford [Sat, 13 Jan 2018 18:00:51 +0000 (18:00 +0000)]
Add an "early rematerialisation" pass

This patch looks for pseudo registers that are live across a call
and for which no call-preserved hard registers exist.  It then
recomputes the pseudos as necessary to ensure that they are no
longer live across a call.  The comment at the head of the file
describes the approach.

A new target hook selects which modes should be treated in this way.
By default none are, in which case the pass is skipped very early.

It might also be worth looking for cases like:

   C1: R1 := f (...)
   ...
   C2: R2 := f (...)
   C3: R1 := C2

and giving the same value number to C1 and C3, effectively treating
it like:

   C1: R1 := f (...)
   ...
   C2: R2 := f (...)
   C3: R1 := f (...)

Another (much more expensive) enhancement would be to apply value
numbering to all pseudo registers (not just rematerialisation
candidates), so that we can handle things like:

  C1: R1 := f (...R2...)
  ...
  C2: R1 := f (...R3...)

where R2 and R3 hold the same value.  But the current pass seems
to catch the vast majority of cases.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
* Makefile.in (OBJS): Add early-remat.o.
* target.def (select_early_remat_modes): New hook.
* doc/tm.texi.in (TARGET_SELECT_EARLY_REMAT_MODES): New hook.
* doc/tm.texi: Regenerate.
* targhooks.h (default_select_early_remat_modes): Declare.
* targhooks.c (default_select_early_remat_modes): New function.
* timevar.def (TV_EARLY_REMAT): New timevar.
* passes.def (pass_early_remat): New pass.
* tree-pass.h (make_pass_early_remat): Declare.
* early-remat.c: New file.
* config/aarch64/aarch64.c (aarch64_select_early_remat_modes): New
function.
(TARGET_SELECT_EARLY_REMAT_MODES): Define.

gcc/testsuite/
* gcc.target/aarch64/sve/spill_1.c: Also test that no predicates
are spilled.
* gcc.target/aarch64/sve/spill_2.c: New test.
* gcc.target/aarch64/sve/spill_3.c: Likewise.
* gcc.target/aarch64/sve/spill_4.c: Likewise.
* gcc.target/aarch64/sve/spill_5.c: Likewise.
* gcc.target/aarch64/sve/spill_6.c: Likewise.
* gcc.target/aarch64/sve/spill_7.c: Likewise.

From-SVN: r256636

6 years agoUse single-iteration epilogues when peeling for gaps
Richard Sandiford [Sat, 13 Jan 2018 18:00:41 +0000 (18:00 +0000)]
Use single-iteration epilogues when peeling for gaps

This patch adds support for fully-masking loops that require peeling
for gaps.  It peels exactly one scalar iteration and uses the masked
loop to handle the rest.  Previously we would fall back on using a
standard unmasked loop instead.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vect-loop-manip.c (vect_gen_scalar_loop_niters): Replace
vfm1 with a bound_epilog parameter.
(vect_do_peeling): Update calls accordingly, and move the prologue
call earlier in the function.  Treat the base bound_epilog as 0 for
fully-masked loops and retain vf - 1 for other loops.  Add 1 to
this base when peeling for gaps.
* tree-vect-loop.c (vect_analyze_loop_2): Allow peeling for gaps
with fully-masked loops.
(vect_estimate_min_profitable_iters): Handle the single peeled
iteration in that case.

gcc/testsuite/
* gcc.target/aarch64/sve/struct_vect_18.c: Check the number
of branches.
* gcc.target/aarch64/sve/struct_vect_19.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_20.c: New test.
* gcc.target/aarch64/sve/struct_vect_20_run.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_21.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_21_run.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_22.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_22_run.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_23.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_23_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256635

6 years agoAllow single-element interleaving for non-power-of-2 strides
Richard Sandiford [Sat, 13 Jan 2018 18:00:31 +0000 (18:00 +0000)]
Allow single-element interleaving for non-power-of-2 strides

This allows LD3 to be used for isolated a[i * 3] accesses, in a similar
way to the current a[i * 2] and a[i * 4] for LD2 and LD4 respectively.
Given the problems with the cost model underestimating the cost of
elementwise accesses, the patch continues to reject the VMAT_ELEMENTWISE
cases that are currently rejected.
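
An illustrative example of the newly allowed case:

  void
  f (int *restrict dst, int *src, int n)
  {
    for (int i = 0; i < n; ++i)
      dst[i] = src[i * 3];  /* single-element interleaving, group size 3  */
  }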

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vect-data-refs.c (vect_analyze_group_access_1): Allow
single-element interleaving even if the size is not a power of 2.
* tree-vect-stmts.c (get_load_store_type): Disallow elementwise
accesses for single-element interleaving if the group size is
not a power of 2.

gcc/testsuite/
* gcc.target/aarch64/sve/struct_vect_18.c: New test.
* gcc.target/aarch64/sve/struct_vect_18_run.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_19.c: Likewise.
* gcc.target/aarch64/sve/struct_vect_19_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256634

6 years agoAdd support for conditional reductions using SVE CLASTB
Richard Sandiford [Sat, 13 Jan 2018 17:59:59 +0000 (17:59 +0000)]
Add support for conditional reductions using SVE CLASTB

This patch uses SVE CLASTB to optimise conditional reductions.  It means
that we no longer need to maintain a separate index vector to record
the most recent valid value, and no longer need to worry about overflow
cases.
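
For illustration (a sketch with made-up names, not one of the new
tests), the kind of conditional reduction this targets:

  int
  last_match (int *a, int *b, int n)
  {
    int last = -1;
    for (int i = 0; i < n; ++i)
      if (a[i] < b[i])
        last = a[i];  /* keep the latest value from an active lane  */
    return last;
  }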

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/md.texi (fold_extract_last_@var{m}): Document.
* doc/sourcebuild.texi (vect_fold_extract_last): Likewise.
* optabs.def (fold_extract_last_optab): New optab.
* internal-fn.def (FOLD_EXTRACT_LAST): New internal function.
* internal-fn.c (fold_extract_direct): New macro.
(expand_fold_extract_optab_fn): Likewise.
(direct_fold_extract_optab_supported_p): Likewise.
* tree-vectorizer.h (EXTRACT_LAST_REDUCTION): New vect_reduction_type.
* tree-vect-loop.c (vect_model_reduction_cost): Handle
EXTRACT_LAST_REDUCTION.
(get_initial_def_for_reduction): Do not create an initial vector
for EXTRACT_LAST_REDUCTION reductions.
(vectorizable_reduction): Leave the scalar phi in place for
EXTRACT_LAST_REDUCTIONs.  Try using EXTRACT_LAST_REDUCTION
ahead of INTEGER_INDUC_COND_REDUCTION.  Do not check for an
epilogue code for EXTRACT_LAST_REDUCTION and defer the
transform phase to vectorizable_condition.
* tree-vect-stmts.c (vect_finish_stmt_generation_1): New function,
split out from...
(vect_finish_stmt_generation): ...here.
(vect_finish_replace_stmt): New function.
(vectorizable_condition): Handle EXTRACT_LAST_REDUCTION.
* config/aarch64/aarch64-sve.md (fold_extract_last_<mode>): New
pattern.
* config/aarch64/aarch64.md (UNSPEC_CLASTB): New unspec.

gcc/testsuite/
* lib/target-supports.exp
(check_effective_target_vect_fold_extract_last): New proc.
* gcc.dg/vect/pr65947-1.c: Update dump messages.  Add markup
for fold_extract_last.
* gcc.dg/vect/pr65947-2.c: Likewise.
* gcc.dg/vect/pr65947-3.c: Likewise.
* gcc.dg/vect/pr65947-4.c: Likewise.
* gcc.dg/vect/pr65947-5.c: Likewise.
* gcc.dg/vect/pr65947-6.c: Likewise.
* gcc.dg/vect/pr65947-9.c: Likewise.
* gcc.dg/vect/pr65947-10.c: Likewise.
* gcc.dg/vect/pr65947-12.c: Likewise.
* gcc.dg/vect/pr65947-14.c: Likewise.
* gcc.dg/vect/pr80631-1.c: Likewise.
* gcc.target/aarch64/sve/clastb_1.c: New test.
* gcc.target/aarch64/sve/clastb_1_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_2.c: Likewise.
* gcc.target/aarch64/sve/clastb_2_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_3.c: Likewise.
* gcc.target/aarch64/sve/clastb_3_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_4.c: Likewise.
* gcc.target/aarch64/sve/clastb_4_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_5.c: Likewise.
* gcc.target/aarch64/sve/clastb_5_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_6.c: Likewise.
* gcc.target/aarch64/sve/clastb_6_run.c: Likewise.
* gcc.target/aarch64/sve/clastb_7.c: Likewise.
* gcc.target/aarch64/sve/clastb_7_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256633

6 years agoAdd support for vectorising live-out values using SVE LASTB
Richard Sandiford [Sat, 13 Jan 2018 17:59:50 +0000 (17:59 +0000)]
Add support for vectorising live-out values using SVE LASTB

This patch uses the SVE LASTB instruction to optimise cases in which
a value produced by the final scalar iteration of a vectorised loop is
live outside the loop.  Previously this situation would stop us from
using a fully-masked loop.
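
A minimal illustrative example of such a live-out value:

  int
  f (int *a, int n)
  {
    int x = 0;
    for (int i = 0; i < n; ++i)
      x = a[i] * 2;
    return x;  /* only the final iteration's value is needed  */
  }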

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/md.texi (extract_last_@var{m}): Document.
* optabs.def (extract_last_optab): New optab.
* internal-fn.def (EXTRACT_LAST): New internal function.
* internal-fn.c (cond_unary_direct): New macro.
(expand_cond_unary_optab_fn): Likewise.
(direct_cond_unary_optab_supported_p): Likewise.
* tree-vect-loop.c (vectorizable_live_operation): Allow fully-masked
loops using EXTRACT_LAST.
* config/aarch64/aarch64-sve.md (aarch64_sve_lastb<mode>): Rename to...
(extract_last_<mode>): ...this optab.
(vec_extract<mode><Vel>): Update accordingly.

gcc/testsuite/
* gcc.target/aarch64/sve/live_1.c: New test.
* gcc.target/aarch64/sve/live_1_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256632

6 years agoAdd an empty_mask_is_expensive hook
Richard Sandiford [Sat, 13 Jan 2018 17:59:40 +0000 (17:59 +0000)]
Add an empty_mask_is_expensive hook

This patch adds a hook to control whether we avoid executing masked
(predicated) stores when the mask is all false.  We don't want to do
that by default for SVE.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* target.def (empty_mask_is_expensive): New hook.
* doc/tm.texi.in (TARGET_VECTORIZE_EMPTY_MASK_IS_EXPENSIVE): New hook.
* doc/tm.texi: Regenerate.
* targhooks.h (default_empty_mask_is_expensive): Declare.
* targhooks.c (default_empty_mask_is_expensive): New function.
* tree-vectorizer.c (vectorize_loops): Only call optimize_mask_stores
if the target says that empty masks are expensive.
* config/aarch64/aarch64.c (aarch64_empty_mask_is_expensive):
New function.
(TARGET_VECTORIZE_EMPTY_MASK_IS_EXPENSIVE): Redefine.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256631

6 years agoHandle peeling for alignment with masking
Richard Sandiford [Sat, 13 Jan 2018 17:59:32 +0000 (17:59 +0000)]
Handle peeling for alignment with masking

This patch adds support for aligning vectors by using a partial
first iteration.  E.g. if the start pointer is 3 elements beyond
an aligned address, the first iteration will have a mask in which
the first three elements are false.

On SVE, the optimisation is only useful for vector-length-specific
code.  Vector-length-agnostic code doesn't try to align vectors
since the vector length might not be a power of 2.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-vectorizer.h (_loop_vec_info::mask_skip_niters): New field.
(LOOP_VINFO_MASK_SKIP_NITERS): New macro.
(vect_use_loop_mask_for_alignment_p): New function.
(vect_prepare_for_masked_peels, vect_gen_while_not): Declare.
* tree-vect-loop-manip.c (vect_set_loop_masks_directly): Add an
niters_skip argument.  Make sure that the first niters_skip elements
of the first iteration are inactive.
(vect_set_loop_condition_masked): Handle LOOP_VINFO_MASK_SKIP_NITERS.
Update call to vect_set_loop_masks_directly.
(get_misalign_in_elems): New function, split out from...
(vect_gen_prolog_loop_niters): ...here.
(vect_update_init_of_dr): Take a code argument that specifies whether
the adjustment should be added or subtracted.
(vect_update_init_of_drs): Likewise.
(vect_prepare_for_masked_peels): New function.
(vect_do_peeling): Skip prologue peeling if we're using a mask
instead.  Update call to vect_update_inits_of_drs.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Initialize
mask_skip_niters.
(vect_analyze_loop_2): Allow fully-masked loops with peeling for
alignment.  Do not include the number of peeled iterations in
the minimum threshold in that case.
(vectorizable_induction): Adjust the start value down by
LOOP_VINFO_MASK_SKIP_NITERS iterations.
(vect_transform_loop): Call vect_prepare_for_masked_peels.
Take the number of skipped iterations into account when calculating
the loop bounds.
* tree-vect-stmts.c (vect_gen_while_not): New function.

gcc/testsuite/
* gcc.target/aarch64/sve/nopeel_1.c: New test.
* gcc.target/aarch64/sve/peel_ind_1.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_1_run.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_2.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_2_run.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_3.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_3_run.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_4.c: Likewise.
* gcc.target/aarch64/sve/peel_ind_4_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256630

6 years agoAllow the number of iterations to be smaller than VF
Richard Sandiford [Sat, 13 Jan 2018 17:59:23 +0000 (17:59 +0000)]
Allow the number of iterations to be smaller than VF

Fully-masked loops can be profitable even if the iteration
count is smaller than the vectorisation factor.  In this case
we're effectively doing a complete unroll followed by SLP.

The documentation for min-vect-loop-bound says that the
default value was 0, but actually the default and minimum
were 1.  We need it to be 0 for this case since the parameter
counts a whole number of vector iterations.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/sourcebuild.texi (vect_fully_masked): Document.
* params.def (PARAM_MIN_VECT_LOOP_BOUND): Change minimum and
default value to 0.
* tree-vect-loop.c (vect_analyze_loop_costing): New function,
split out from...
(vect_analyze_loop_2): ...here.  Don't check the vectorization
factor against the number of loop iterations if the loop is
fully-masked.

gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_fully_masked):
New proc.
* gcc.dg/vect/slp-3.c: Expect all loops to be vectorized if
vect_fully_masked.
* gcc.target/aarch64/sve/loop_add_4.c: New test.
* gcc.target/aarch64/sve/loop_add_4_run.c: Likewise.
* gcc.target/aarch64/sve/loop_add_5.c: Likewise.
* gcc.target/aarch64/sve/loop_add_5_run.c: Likewise.
* gcc.target/aarch64/sve/miniloop_1.c: Likewise.
* gcc.target/aarch64/sve/miniloop_2.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256629

6 years agoMake ivopts handle calls to internal functions
Richard Sandiford [Sat, 13 Jan 2018 17:59:15 +0000 (17:59 +0000)]
Make ivopts handle calls to internal functions

ivopts previously treated pointer arguments to internal functions
like IFN_MASK_LOAD and IFN_MASK_STORE as normal gimple values.
This patch makes it treat them as addresses instead.  This makes
a significant difference to the code quality for SVE loops,
since we can then use loads and stores with scaled indices.

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* tree-ssa-loop-ivopts.c (USE_ADDRESS): Split into...
(USE_REF_ADDRESS, USE_PTR_ADDRESS): ...these new use types.
(dump_groups): Update accordingly.
(iv_use::mem_type): New member variable.
(address_p): New function.
(record_use): Add a mem_type argument and initialize the new
mem_type field.
(record_group_use): Add a mem_type argument.  Use address_p.
Remove obsolete null checks of base_object.  Update call to record_use.
(find_interesting_uses_op): Update call to record_group_use.
(find_interesting_uses_cond): Likewise.
(find_interesting_uses_address): Likewise.
(get_mem_type_for_internal_fn): New function.
(find_address_like_use): Likewise.
(find_interesting_uses_stmt): Try find_address_like_use before
calling find_interesting_uses_op.
(addr_offset_valid_p): Use the iv mem_type field as the type
of the addressed memory.
(add_autoinc_candidates): Likewise.
(get_address_cost): Likewise.
(split_small_address_groups_p): Use address_p.
(split_address_groups): Likewise.
(add_iv_candidate_for_use): Likewise.
(autoinc_possible_for_pair): Likewise.
(rewrite_groups): Likewise.
(get_use_type): Check for USE_REF_ADDRESS instead of USE_ADDRESS.
(determine_group_iv_cost): Update after split of USE_ADDRESS.
(get_alias_ptr_type_for_ptr_address): New function.
(rewrite_use_address): Rewrite address uses in calls that were
identified by find_address_like_use.

gcc/testsuite/
* gcc.dg/tree-ssa/scev-9.c: Expect REFERENCE ADDRESS
instead of just ADDRESS.
* gcc.dg/tree-ssa/scev-10.c: Likewise.
* gcc.dg/tree-ssa/scev-11.c: Likewise.
* gcc.dg/tree-ssa/scev-12.c: Likewise.
* gcc.target/aarch64/sve/index_offset_1.c: New test.
* gcc.target/aarch64/sve/index_offset_1_run.c: Likewise.
* gcc.target/aarch64/sve/loop_add_2.c: Likewise.
* gcc.target/aarch64/sve/loop_add_3.c: Likewise.
* gcc.target/aarch64/sve/while_1.c: Check for indexed addressing modes.
* gcc.target/aarch64/sve/while_2.c: Likewise.
* gcc.target/aarch64/sve/while_3.c: Likewise.
* gcc.target/aarch64/sve/while_4.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256628

6 years agoAllow ADDR_EXPRs of TARGET_MEM_REFs
Richard Sandiford [Sat, 13 Jan 2018 17:59:08 +0000 (17:59 +0000)]
Allow ADDR_EXPRs of TARGET_MEM_REFs

This patch allows ADDR_EXPR <TARGET_MEM_REF ...>, which is useful
when calling internal functions that take pointers to memory that
is conditionally loaded or stored.  This is a prerequisite to the
following ivopts patch.
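
Schematically (a sketch of the shape only, not an actual gimple dump),
a masked store whose address has been strength-reduced can now be
represented as:

   .MASK_STORE (&TARGET_MEM_REF (...), align, mask, value);

i.e. the pointer argument is an ADDR_EXPR wrapping a TARGET_MEM_REF
rather than a plain SSA pointer.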

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* expr.c (expand_expr_addr_expr_1): Handle ADDR_EXPRs of
TARGET_MEM_REFs.
* gimple-expr.h (is_gimple_addressable): Likewise.
* gimple-expr.c (is_gimple_address): Likewise.
* internal-fn.c (expand_call_mem_ref): New function.
(expand_mask_load_optab_fn): Use it.
(expand_mask_store_optab_fn): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256627

6 years agoAdd support for reductions in fully-masked loops
Richard Sandiford [Sat, 13 Jan 2018 17:59:00 +0000 (17:59 +0000)]
Add support for reductions in fully-masked loops

This patch removes the restriction that fully-masked loops cannot
have reductions.  The key thing here is to make sure that the
reduction accumulator doesn't include any values associated with
inactive lanes; the patch adds a bunch of conditional binary
operations for doing that.
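
A scalar sketch of how a conditional addition keeps inactive lanes
out of the accumulator (illustrative only; the exact operand
convention is defined by the md.texi entries below):

   /* Accumulate VAL into ACC only in lanes where MASK is true;
      inactive lanes keep the old accumulator value.  */
   for (unsigned int i = 0; i < nelts; ++i)
     acc[i] = mask[i] ? acc[i] + val[i] : acc[i];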

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* doc/md.texi (cond_add@var{mode}, cond_sub@var{mode})
(cond_and@var{mode}, cond_ior@var{mode}, cond_xor@var{mode})
(cond_smin@var{mode}, cond_smax@var{mode}, cond_umin@var{mode})
(cond_umax@var{mode}): Document.
* optabs.def (cond_add_optab, cond_sub_optab, cond_and_optab)
(cond_ior_optab, cond_xor_optab, cond_smin_optab, cond_smax_optab)
(cond_umin_optab, cond_umax_optab): New optabs.
* internal-fn.def (COND_ADD, COND_SUB, COND_MIN, COND_MAX, COND_AND)
(COND_IOR, COND_XOR): New internal functions.
* internal-fn.h (get_conditional_internal_fn): Declare.
* internal-fn.c (cond_binary_direct): New macro.
(expand_cond_binary_optab_fn): Likewise.
(direct_cond_binary_optab_supported_p): Likewise.
(get_conditional_internal_fn): New function.
* tree-vect-loop.c (vectorizable_reduction): Handle fully-masked loops.
Cope with reduction statements that are vectorized as calls rather
than assignments.
* config/aarch64/aarch64-sve.md (cond_<optab><mode>): New insns.
* config/aarch64/iterators.md (UNSPEC_COND_ADD, UNSPEC_COND_SUB)
(UNSPEC_COND_SMAX, UNSPEC_COND_UMAX, UNSPEC_COND_SMIN)
(UNSPEC_COND_UMIN, UNSPEC_COND_AND, UNSPEC_COND_ORR)
(UNSPEC_COND_EOR): New unspecs.
(optab): Add mappings for them.
(SVE_COND_INT_OP, SVE_COND_FP_OP): New int iterators.
(sve_int_op, sve_fp_op): New int attributes.

gcc/testsuite/
* gcc.dg/vect/pr60482.c: Remove XFAIL for variable-length vectors.
* gcc.target/aarch64/sve/reduc_1.c: Expect the loop operations
to be predicated.
* gcc.target/aarch64/sve/slp_5.c: Check for a fully-masked loop.
* gcc.target/aarch64/sve/slp_7.c: Likewise.
* gcc.target/aarch64/sve/reduc_5.c: New test.
* gcc.target/aarch64/sve/slp_13.c: Likewise.
* gcc.target/aarch64/sve/slp_13_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256626

6 years agoAdd support for fully-predicated loops
Richard Sandiford [Sat, 13 Jan 2018 17:58:52 +0000 (17:58 +0000)]
Add support for fully-predicated loops

This patch adds support for using a single fully-predicated loop instead
of a vector loop and a scalar tail.  An SVE WHILELO instruction generates
the predicate for each iteration of the loop, given the current scalar
iv value and the loop bound.  This operation is wrapped up in a new internal
function called WHILE_ULT.  E.g.:

   WHILE_ULT (0, 3, { 0, 0, 0, 0 }) -> { 1, 1, 1, 0 }
   WHILE_ULT (UINT_MAX - 1, UINT_MAX, { 0, 0, 0, 0 }) -> { 1, 0, 0, 0 }

The third WHILE_ULT argument is needed to make the operation
unambiguous: without it, WHILE_ULT (0, 3) for one vector type would
seem equivalent to WHILE_ULT (0, 3) for another, even if the types have
different numbers of elements.
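
A scalar model of the semantics (a sketch for the 32-bit case; the
function name and parameters are illustrative, with "nelts" standing
for the number of lanes implied by the third argument's type):

   /* Lane I of the result is active iff BASE + I < LIMIT, with the
      comparison done without wrap-around.  */
   void
   while_ult_model (unsigned int base, unsigned int limit,
                    unsigned char *mask, unsigned int nelts)
   {
     for (unsigned int i = 0; i < nelts; ++i)
       mask[i] = ((unsigned long long) base + i < limit);
   }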

Note that the patch uses "mask" and "fully-masked" instead of
"predicate" and "fully-predicated", to follow existing GCC terminology.

This patch just handles the simple cases, punting for things like
reductions and live-out values.  Later patches remove most of these
restrictions.
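
As a hypothetical example (not one of the new tests), a simple loop
like:

   void
   f (int *restrict a, int *restrict b, int n)
   {
     for (int i = 0; i < n; ++i)
       a[i] = b[i] + 1;
   }

can be emitted for SVE as one vector loop whose governing predicate
is regenerated by WHILELO on each iteration, with no scalar tail.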

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* optabs.def (while_ult_optab): New optab.
* doc/md.texi (while_ult@var{m}@var{n}): Document.
* internal-fn.def (WHILE_ULT): New internal function.
* internal-fn.h (direct_internal_fn_supported_p): New override
that takes two types as arguments.
* internal-fn.c (while_direct): New macro.
(expand_while_optab_fn): New function.
(convert_optab_supported_p): Likewise.
(direct_while_optab_supported_p): New macro.
* wide-int.h (wi::udiv_ceil): New function.
* tree-vectorizer.h (rgroup_masks): New structure.
(vec_loop_masks): New typedef.
(_loop_vec_info): Add masks, mask_compare_type, can_fully_mask_p
and fully_masked_p.
(LOOP_VINFO_CAN_FULLY_MASK_P, LOOP_VINFO_FULLY_MASKED_P)
(LOOP_VINFO_MASKS, LOOP_VINFO_MASK_COMPARE_TYPE): New macros.
(vect_max_vf): New function.
(slpeel_make_loop_iterate_ntimes): Delete.
(vect_set_loop_condition, vect_get_loop_mask_type, vect_gen_while)
(vect_halve_mask_nunits, vect_double_mask_nunits): Declare.
(vect_record_loop_mask, vect_get_loop_mask): Likewise.
* tree-vect-loop-manip.c: Include tree-ssa-loop-niter.h,
internal-fn.h, stor-layout.h and optabs-query.h.
(vect_set_loop_mask): New function.
(add_preheader_seq): Likewise.
(add_header_seq): Likewise.
(interleave_supported_p): Likewise.
(vect_maybe_permute_loop_masks): Likewise.
(vect_set_loop_masks_directly): Likewise.
(vect_set_loop_condition_masked): Likewise.
(vect_set_loop_condition_unmasked): New function, split out from
slpeel_make_loop_iterate_ntimes.
(slpeel_make_loop_iterate_ntimes): Rename to...
(vect_set_loop_condition): ...this.  Use vect_set_loop_condition_masked
for fully-masked loops and vect_set_loop_condition_unmasked otherwise.
(vect_do_peeling): Update call accordingly.
(vect_gen_vector_loop_niters): Use VF as the step for fully-masked
loops.
* tree-vect-loop.c (_loop_vec_info::_loop_vec_info): Initialize
mask_compare_type, can_fully_mask_p and fully_masked_p.
(release_vec_loop_masks): New function.
(_loop_vec_info): Use it to free the loop masks.
(can_produce_all_loop_masks_p): New function.
(vect_get_max_nscalars_per_iter): Likewise.
(vect_verify_full_masking): Likewise.
(vect_analyze_loop_2): Save LOOP_VINFO_CAN_FULLY_MASK_P around
retries, and free the mask rgroups before retrying.  Check loop-wide
reasons for disallowing fully-masked loops.  Make the final decision
about whether to use a fully-masked loop or not.
(vect_estimate_min_profitable_iters): Do not assume that peeling
for the number of iterations will be needed for fully-masked loops.
(vectorizable_reduction): Disable fully-masked loops.
(vectorizable_live_operation): Likewise.
(vect_halve_mask_nunits): New function.
(vect_double_mask_nunits): Likewise.
(vect_record_loop_mask): Likewise.
(vect_get_loop_mask): Likewise.
(vect_transform_loop): Handle the case in which the final loop
iteration might handle a partial vector.  Call vect_set_loop_condition
instead of slpeel_make_loop_iterate_ntimes.
* tree-vect-stmts.c: Include tree-ssa-loop-niter.h and gimple-fold.h.
(check_load_store_masking): New function.
(prepare_load_store_mask): Likewise.
(vectorizable_store): Handle fully-masked loops.
(vectorizable_load): Likewise.
(supportable_widening_operation): Use vect_halve_mask_nunits for
booleans.
(supportable_narrowing_operation): Likewise, using vect_double_mask_nunits.
(vect_gen_while): New function.
* config/aarch64/aarch64.md (umax<mode>3): New expander.
(aarch64_uqdec<mode>): New insn.

gcc/testsuite/
* gcc.dg/tree-ssa/cunroll-10.c: Disable vectorization.
* gcc.dg/tree-ssa/peel1.c: Likewise.
* gcc.dg/vect/vect-load-lanes-peeling-1.c: Remove XFAIL for
variable-length vectors.
* gcc.target/aarch64/sve/vcond_6.c: XFAIL test for AND.
* gcc.target/aarch64/sve/vec_bool_cmp_1.c: Expect BIC instead of NOT.
* gcc.target/aarch64/sve/slp_1.c: Check for a fully-masked loop.
* gcc.target/aarch64/sve/slp_2.c: Likewise.
* gcc.target/aarch64/sve/slp_3.c: Likewise.
* gcc.target/aarch64/sve/slp_4.c: Likewise.
* gcc.target/aarch64/sve/slp_6.c: Likewise.
* gcc.target/aarch64/sve/slp_8.c: New test.
* gcc.target/aarch64/sve/slp_8_run.c: Likewise.
* gcc.target/aarch64/sve/slp_9.c: Likewise.
* gcc.target/aarch64/sve/slp_9_run.c: Likewise.
* gcc.target/aarch64/sve/slp_10.c: Likewise.
* gcc.target/aarch64/sve/slp_10_run.c: Likewise.
* gcc.target/aarch64/sve/slp_11.c: Likewise.
* gcc.target/aarch64/sve/slp_11_run.c: Likewise.
* gcc.target/aarch64/sve/slp_12.c: Likewise.
* gcc.target/aarch64/sve/slp_12_run.c: Likewise.
* gcc.target/aarch64/sve/ld1r_2.c: Likewise.
* gcc.target/aarch64/sve/ld1r_2_run.c: Likewise.
* gcc.target/aarch64/sve/while_1.c: Likewise.
* gcc.target/aarch64/sve/while_2.c: Likewise.
* gcc.target/aarch64/sve/while_3.c: Likewise.
* gcc.target/aarch64/sve/while_4.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256625

6 years agoAdd support for bitwise reductions
Richard Sandiford [Sat, 13 Jan 2018 17:58:42 +0000 (17:58 +0000)]
Add support for bitwise reductions

This patch adds support for the SVE bitwise reduction instructions
(ANDV, ORV and EORV).  It's a fairly mechanical extension of existing
REDUC_* operators.
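
For example (an illustrative loop in the style of the new tests, not
copied from them), an AND reduction such as:

   unsigned int
   f (unsigned int *a, int n)
   {
     unsigned int res = ~0U;
     for (int i = 0; i < n; ++i)
       res &= a[i];
     return res;
   }

can now be vectorized, with the vector accumulator collapsed to a
scalar by a single ANDV at the end of the loop.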

2018-01-13  Richard Sandiford  <richard.sandiford@linaro.org>
    Alan Hayward  <alan.hayward@arm.com>
    David Sherwood  <david.sherwood@arm.com>

gcc/
* optabs.def (reduc_and_scal_optab, reduc_ior_scal_optab)
(reduc_xor_scal_optab): New optabs.
* doc/md.texi (reduc_and_scal_@var{m}, reduc_ior_scal_@var{m})
(reduc_xor_scal_@var{m}): Document.
* doc/sourcebuild.texi (vect_logical_reduc): Likewise.
* internal-fn.def (IFN_REDUC_AND, IFN_REDUC_IOR, IFN_REDUC_XOR): New
internal functions.
* fold-const-call.c (fold_const_call): Handle them.
* tree-vect-loop.c (reduction_fn_for_scalar_code): Return the new
internal functions for BIT_AND_EXPR, BIT_IOR_EXPR and BIT_XOR_EXPR.
* config/aarch64/aarch64-sve.md (reduc_<bit_reduc>_scal_<mode>)
(*reduc_<bit_reduc>_scal_<mode>): New patterns.
* config/aarch64/iterators.md (UNSPEC_ANDV, UNSPEC_ORV)
(UNSPEC_XORV): New unspecs.
(optab): Add entries for them.
(BITWISEV): New int iterator.
(bit_reduc_op): New int attribute.

gcc/testsuite/
* lib/target-supports.exp (check_effective_target_vect_logical_reduc):
New proc.
* gcc.dg/vect/vect-reduc-or_1.c: Also run for vect_logical_reduc
and add an associated scan-dump test.  Prevent vectorization
of the first two loops.
* gcc.dg/vect/vect-reduc-or_2.c: Likewise.
* gcc.target/aarch64/sve/reduc_1.c: Add AND, IOR and XOR reductions.
* gcc.target/aarch64/sve/reduc_2.c: Likewise.
* gcc.target/aarch64/sve/reduc_1_run.c: Likewise.
(INIT_VECTOR): Tweak initial value so that some bits are always set.
* gcc.target/aarch64/sve/reduc_2_run.c: Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>
From-SVN: r256624