+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+ Ilya Verbin <ilya.verbin@intel.com>
+
+ * builtin-types.def
+ (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR): Remove.
+ (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_INT_INT): New.
+ * cgraph.h (enum cgraph_simd_clone_arg_type): Add
+ SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP,
+ SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP and
+ SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP.
+ (struct cgraph_simd_clone_arg): Adjust comment.
+ * omp-builtins.def (BUILT_IN_GOMP_TARGET): Rename GOMP_target_41
+ to GOMP_target_ext. Add num_teams and thread_limit arguments.
+ (BUILT_IN_GOMP_TARGET_DATA): Rename GOMP_target_data_41
+ to GOMP_target_data_ext.
+ (BUILT_IN_GOMP_TARGET_UPDATE): Rename GOMP_target_update_41
+ to GOMP_target_update_ext.
+ (BUILT_IN_GOMP_LOOP_NONMONOTONIC_DYNAMIC_START,
+ BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_START,
+ BUILT_IN_GOMP_LOOP_NONMONOTONIC_DYNAMIC_NEXT,
+ BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_NEXT,
+ BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_DYNAMIC_START,
+ BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_START,
+ BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_DYNAMIC_NEXT,
+ BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_NEXT,
+ BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_DYNAMIC,
+ BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_GUIDED): New built-ins.
+ * tree-core.h (enum omp_clause_schedule_kind): Add
+ OMP_CLAUSE_SCHEDULE_MASK, OMP_CLAUSE_SCHEDULE_MONOTONIC,
+ OMP_CLAUSE_SCHEDULE_NONMONOTONIC and change
+ OMP_CLAUSE_SCHEDULE_LAST value.
+ * tree.def (OMP_SIMD, CILK_SIMD, CILK_FOR, OMP_DISTRIBUTE,
+ OMP_TASKLOOP, OACC_LOOP): Add OMP_FOR_ORIG_DECLS argument.
+ * tree.h (OMP_FOR_ORIG_DECLS): Use OMP_LOOP_CHECK instead of
+ OMP_FOR_CHECK. Remove comment.
+ * tree-pretty-print.c (dump_omp_clause): Handle
+ GOMP_MAP_FIRSTPRIVATE_REFERENCE and GOMP_MAP_ALWAYS_POINTER.
+ Simplify. Print schedule clause modifiers.
+ * tree-vect-stmts.c (vectorizable_simd_clone_call): Add
+ SIMD_CLONE_ARG_TYPE_LINEAR_{REF,VAL,UVAL}_VARIABLE_STEP
+ cases.
+ * gimplify.c (enum gimplify_omp_var_data): Add GOVD_MAP_ALWAYS_TO.
+ (omp_default_clause): Tweak for
+ private/firstprivate/is_device_ptr variables on target
+ construct and use_device_ptr on target data.
+ (omp_check_private): Likewise.
+ (omp_notice_variable): For references, check whether what they
+ refer to has mappable type, rather than the reference itself.
+ (omp_is_private): Diagnose linear iteration variables on non-simd
+ constructs.
+ (omp_no_lastprivate): Return true only for Fortran.
+ (gimplify_scan_omp_clauses): Or in GOVD_MAP_ALWAYS_TO for
+ GOMP_MAP_ALWAYS_TO or GOMP_MAP_ALWAYS_TOFROM kinds.
+ Add support for GOMP_MAP_FIRSTPRIVATE_REFERENCE and
+ GOMP_MAP_ALWAYS_POINTER, remove old handling of structure element
+ based array sections. Use GOMP_MAP_ALWAYS_P. Fix up handling of
+ lastprivate and linear when combined with distribute. Gimplify
+ variable low-bound for array reduction. Look through
+ POINTER_PLUS_EXPR when looking for ADDR_EXPR for array section
+ reductions.
+ (gimplify_adjust_omp_clauses_1): For implicit references to
+ variables with reference type and when not ref to scalar or
+ ref to pointer, map what they refer to using tofrom and
+ use GOMP_MAP_FIRSTPRIVATE_REFERENCE for the reference.
+ (gimplify_adjust_omp_clauses): Remove GOMP_MAP_ALWAYS_POINTER
+ from target exit data. Handle GOMP_MAP_FIRSTPRIVATE_REFERENCE.
+ Drop OMP_CLAUSE_MAP_PRIVATE support. Use GOMP_MAP_ALWAYS_P.
+ Diagnose the same var on both firstprivate and lastprivate on
+ distribute construct.
+ (gimplify_omp_for): Fix up handling of predetermined
+ lastprivate or linear iter vars when combined with distribute.
+ (find_omp_teams, computable_teams_clause, optimize_target_teams): New
+ functions.
+ (gimplify_omp_workshare): Call optimize_target_teams.
+ * omp-low.c (struct omp_region): Add sched_modifiers field.
+ (struct omp_for_data): Likewise.
+ (omp_any_child_fn_dumped): New variable.
+ (extract_omp_for_data): Fill in sched_modifiers, and mask out
+ OMP_CLAUSE_SCHEDULE_KIND bits outside of OMP_CLAUSE_SCHEDULE_MASK
+ from sched_kind.
+ (determine_parallel_type): Use only OMP_CLAUSE_SCHEDULE_MASK
+ bits of OMP_CLAUSE_SCHED_KIND.
+ (scan_sharing_clauses): Handle GOMP_MAP_FIRSTPRIVATE_REFERENCE,
+ drop OMP_CLAUSE_MAP_PRIVATE support. Look through POINTER_PLUS_EXPR
+ for array section reductions.
+ (add_taskreg_looptemp_clauses): Add one extra _looptemp_ clause even
+ for distribute parallel for, if there are lastprivate clauses on the
+ for.
+ (lower_rec_input_clauses): Handle non-zero low-bound on array
+ section reductions.
+ (lower_reduction_clauses): Likewise.
+ (lower_send_clauses): Look through POINTER_PLUS_EXPR
+ for array section reductions.
+ (expand_parallel_call): Use nonmonotonic entrypoints for
+ nonmonotonic: dynamic/guided.
+ (expand_omp_taskreg): Call assign_assembler_name_if_neeeded on
+ child_fn if current_function_decl has assembler name set, but child_fn
+ does not. Dump the header and IL of the child function when not in SSA
+ form.
+ (expand_omp_target): Likewise. Pass num_teams and thread_limit
+ arguments to BUILT_IN_GOMP_TARGET.
+ (expand_omp_for_static_nochunk, expand_omp_for_static_chunk):
+ Initialize the extra _looptemp_ clause to fd->loop.n2.
+ (expand_omp_for): Use nonmonotonic entrypoints for
+ nonmonotonic: dynamic/guided. Initialize region->sched_modifiers.
+ (expand_omp): Clear omp_any_child_fn_dumped. Dump function header
+ again if we have dumped any child functions.
+ (lower_omp_for_lastprivate): Determine the right count variable
+ for distribute simd, or distribute parallel for{, simd}.
+ (lower_omp_target): Handle GOMP_MAP_FIRSTPRIVATE_REFERENCE
+ and GOMP_MAP_ALWAYS_POINTER. Drop OMP_CLAUSE_MAP_PRIVATE
+ support.
+ (simd_clone_clauses_extract): Handle variable step
+ for references and arguments passed by reference.
+ (simd_clone_mangle): Mangle ref/uval/val variable steps.
+ (simd_clone_adjust_argument_types): Handle
+ SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP like
+ SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP and
+ SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP like
+ SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP.
+ (simd_clone_linear_addend): New function.
+ (simd_clone_adjust): Handle variable step similarly to constant
+ step, use simd_clone_linear_addend to determine the actual step
+ at runtime.
+
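As a quick illustration of the user-visible OpenMP 4.5 surface the entry above
wires up (example only, not part of the patch; all names below are made up):
the nonmonotonic schedule modifier and host-side evaluation of non-constant
num_teams/thread_limit expressions on a combined target teams construct.

  #include <stdio.h>

  int
  main (void)
  {
    int a[1024], n = 4;

    /* The nonmonotonic modifier is lowered via the new
       *_NONMONOTONIC_* built-ins declared in omp-builtins.def.  */
    #pragma omp parallel for schedule(nonmonotonic: dynamic, 16)
    for (int i = 0; i < 1024; i++)
      a[i] = i;

    /* Non-constant num_teams/thread_limit expressions are now evaluated
       on the host and passed as extra arguments to GOMP_target_ext.  */
    #pragma omp target teams distribute parallel for \
        num_teams(n + 1) thread_limit(n * 8) map(tofrom: a)
    for (int i = 0; i < 1024; i++)
      a[i] += 1;

    printf ("%d\n", a[1023]);
    return 0;
  }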
2015-11-05 Nathan Sidwell <nathan@codesourcery.com>
* target.def (goacc.dim_limit): New hook.
BT_VOID, BT_INT, BT_SIZE, BT_PTR, BT_PTR, BT_PTR, BT_UINT,
BT_PTR)
-DEF_FUNCTION_TYPE_8 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR,
- BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
- BT_PTR, BT_PTR, BT_UINT, BT_PTR)
DEF_FUNCTION_TYPE_8 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_UINT)
BT_PTR_FN_VOID_PTR_PTR, BT_LONG, BT_LONG,
BT_BOOL, BT_UINT, BT_PTR, BT_INT)
+DEF_FUNCTION_TYPE_10 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_INT_INT,
+ BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
+ BT_PTR, BT_PTR, BT_UINT, BT_PTR, BT_INT, BT_INT)
+
DEF_FUNCTION_TYPE_11 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,
BT_PTR_FN_VOID_PTR_PTR, BT_LONG, BT_LONG,
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+
+ * c-common.h (c_finish_omp_atomic): Add TEST argument.
+ (c_omp_check_loop_iv, c_omp_check_loop_iv_exprs): New prototypes.
+ * c-omp.c (c_finish_omp_atomic): Add TEST argument. Don't call
+ save_expr or create_tmp_var* if TEST is true.
+ (c_finish_omp_for): Store OMP_FOR_ORIG_DECLS always.
+ Don't call add_stmt here.
+ (struct c_omp_check_loop_iv_data): New type.
+ (c_omp_check_loop_iv_r, c_omp_check_loop_iv,
+ c_omp_check_loop_iv_exprs): New functions.
+ (c_omp_split_clauses): Adjust for lastprivate being allowed on
+ distribute.
+ (c_omp_declare_simd_clauses_to_numbers): For
+ OMP_CLAUSE_LINEAR_VARIABLE_STRIDE clauses, change
+ OMP_CLAUSE_LINEAR_STEP into numbers.
+ (c_omp_declare_simd_clauses_to_decls): Similarly change those
+ from numbers to PARM_DECLs.
+
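For illustration (not part of the patch or its testsuite), the kind of loop the
new c_omp_check_loop_iv* routines reject -- a collapsed nest whose inner bound
refers to an outer iteration variable, so it deliberately does not compile:

  void
  f (int *a, int n)
  {
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < n; i++)
      /* Now rejected: "condition expression refers to iteration
         variable 'i'".  */
      for (int j = 0; j < i; j++)
        a[i * n + j] = 0;
  }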
2015-11-04 Mikhail Maltsev <maltsevm@gmail.com>
* c-omp.c (c_omp_split_clauses): Remove conditional compilation. Use
extern tree c_finish_omp_ordered (location_t, tree, tree);
extern void c_finish_omp_barrier (location_t);
extern tree c_finish_omp_atomic (location_t, enum tree_code, enum tree_code,
- tree, tree, tree, tree, tree, bool, bool);
+ tree, tree, tree, tree, tree, bool, bool,
+ bool = false);
extern void c_finish_omp_flush (location_t);
extern void c_finish_omp_taskwait (location_t);
extern void c_finish_omp_taskyield (location_t);
extern tree c_finish_omp_for (location_t, enum tree_code, tree, tree, tree,
tree, tree, tree, tree);
+extern bool c_omp_check_loop_iv (tree, tree, walk_tree_lh);
+extern bool c_omp_check_loop_iv_exprs (location_t, tree, tree, tree, tree,
+ walk_tree_lh);
extern tree c_finish_oacc_wait (location_t, tree, tree);
extern tree c_oacc_split_loop_clauses (tree, tree *);
extern void c_omp_split_clauses (location_t, enum tree_code, omp_clause_mask,
LOC is the location of the atomic statement. The value returned
is either error_mark_node (if the construct was erroneous) or an
OMP_ATOMIC* node which should be added to the current statement
- tree with add_stmt. */
+ tree with add_stmt. If TEST is set, avoid calling save_expr
+ or create_tmp_var*. */
tree
c_finish_omp_atomic (location_t loc, enum tree_code code,
enum tree_code opcode, tree lhs, tree rhs,
- tree v, tree lhs1, tree rhs1, bool swapped, bool seq_cst)
+ tree v, tree lhs1, tree rhs1, bool swapped, bool seq_cst,
+ bool test)
{
tree x, type, addr, pre = NULL_TREE;
addr = build_unary_op (loc, ADDR_EXPR, lhs, 0);
if (addr == error_mark_node)
return error_mark_node;
- addr = save_expr (addr);
- if (TREE_CODE (addr) != SAVE_EXPR
+ if (!test)
+ addr = save_expr (addr);
+ if (!test
+ && TREE_CODE (addr) != SAVE_EXPR
&& (TREE_CODE (addr) != ADDR_EXPR
|| !VAR_P (TREE_OPERAND (addr, 0))))
{
if (rhs1
&& VAR_P (rhs1)
&& VAR_P (lhs)
- && rhs1 != lhs)
+ && rhs1 != lhs
+ && !test)
{
if (code == OMP_ATOMIC)
- error_at (loc, "%<#pragma omp atomic update%> uses two different variables for memory");
+ error_at (loc, "%<#pragma omp atomic update%> uses two different "
+ "variables for memory");
else
- error_at (loc, "%<#pragma omp atomic capture%> uses two different variables for memory");
+ error_at (loc, "%<#pragma omp atomic capture%> uses two different "
+ "variables for memory");
return error_mark_node;
}
location, just diagnose different variables. */
if (lhs1 && VAR_P (lhs1) && VAR_P (lhs))
{
- if (lhs1 != lhs)
+ if (lhs1 != lhs && !test)
{
- error_at (loc, "%<#pragma omp atomic capture%> uses two different variables for memory");
+ error_at (loc, "%<#pragma omp atomic capture%> uses two "
+ "different variables for memory");
return error_mark_node;
}
}
x = omit_one_operand_loc (loc, type, x, lhs1addr);
else
{
- x = save_expr (x);
+ if (!test)
+ x = save_expr (x);
x = omit_two_operands_loc (loc, type, x, x, lhs1addr);
}
}
OMP_FOR_INCR (t) = incrv;
OMP_FOR_BODY (t) = body;
OMP_FOR_PRE_BODY (t) = pre_body;
- if (code == OMP_FOR)
- OMP_FOR_ORIG_DECLS (t) = orig_declv;
+ OMP_FOR_ORIG_DECLS (t) = orig_declv;
SET_EXPR_LOCATION (t, locus);
- return add_stmt (t);
+ return t;
}
}
+/* Type for passing data in between c_omp_check_loop_iv and
+ c_omp_check_loop_iv_r. */
+
+struct c_omp_check_loop_iv_data
+{
+ tree declv;
+ bool fail;
+ location_t stmt_loc;
+ location_t expr_loc;
+ int kind;
+ walk_tree_lh lh;
+ hash_set<tree> *ppset;
+};
+
+/* Helper function called via walk_tree, to diagnose uses
+ of associated loop IVs inside of lb, b and incr expressions
+ of OpenMP loops. */
+
+static tree
+c_omp_check_loop_iv_r (tree *tp, int *walk_subtrees, void *data)
+{
+ struct c_omp_check_loop_iv_data *d
+ = (struct c_omp_check_loop_iv_data *) data;
+ if (DECL_P (*tp))
+ {
+ int i;
+ for (i = 0; i < TREE_VEC_LENGTH (d->declv); i++)
+ if (*tp == TREE_VEC_ELT (d->declv, i))
+ {
+ location_t loc = d->expr_loc;
+ if (loc == UNKNOWN_LOCATION)
+ loc = d->stmt_loc;
+ switch (d->kind)
+ {
+ case 0:
+ error_at (loc, "initializer expression refers to "
+ "iteration variable %qD", *tp);
+ break;
+ case 1:
+ error_at (loc, "condition expression refers to "
+ "iteration variable %qD", *tp);
+ break;
+ case 2:
+ error_at (loc, "increment expression refers to "
+ "iteration variable %qD", *tp);
+ break;
+ }
+ d->fail = true;
+ }
+ }
+ /* Don't walk dtors added by C++ wrap_cleanups_r. */
+ else if (TREE_CODE (*tp) == TRY_CATCH_EXPR
+ && TRY_CATCH_IS_CLEANUP (*tp))
+ {
+ *walk_subtrees = 0;
+ return walk_tree_1 (&TREE_OPERAND (*tp, 0), c_omp_check_loop_iv_r, data,
+ d->ppset, d->lh);
+ }
+
+ return NULL_TREE;
+}
+
+/* Diagnose invalid references to loop iterators in lb, b and incr
+ expressions. */
+
+bool
+c_omp_check_loop_iv (tree stmt, tree declv, walk_tree_lh lh)
+{
+ hash_set<tree> pset;
+ struct c_omp_check_loop_iv_data data;
+ int i;
+
+ data.declv = declv;
+ data.fail = false;
+ data.stmt_loc = EXPR_LOCATION (stmt);
+ data.lh = lh;
+ data.ppset = &pset;
+ for (i = 0; i < TREE_VEC_LENGTH (OMP_FOR_INIT (stmt)); i++)
+ {
+ tree init = TREE_VEC_ELT (OMP_FOR_INIT (stmt), i);
+ gcc_assert (TREE_CODE (init) == MODIFY_EXPR);
+ tree decl = TREE_OPERAND (init, 0);
+ tree cond = TREE_VEC_ELT (OMP_FOR_COND (stmt), i);
+ gcc_assert (COMPARISON_CLASS_P (cond));
+ gcc_assert (TREE_OPERAND (cond, 0) == decl);
+ tree incr = TREE_VEC_ELT (OMP_FOR_INCR (stmt), i);
+ data.expr_loc = EXPR_LOCATION (TREE_OPERAND (init, 1));
+ data.kind = 0;
+ walk_tree_1 (&TREE_OPERAND (init, 1),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ /* Don't warn for C++ random access iterators here, the
+ expression then involves the subtraction and always refers
+ to the original value. The C++ FE needs to warn on those
+ earlier. */
+ if (decl == TREE_VEC_ELT (declv, i))
+ {
+ data.expr_loc = EXPR_LOCATION (cond);
+ data.kind = 1;
+ walk_tree_1 (&TREE_OPERAND (cond, 1),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ }
+ if (TREE_CODE (incr) == MODIFY_EXPR)
+ {
+ gcc_assert (TREE_OPERAND (incr, 0) == decl);
+ incr = TREE_OPERAND (incr, 1);
+ data.kind = 2;
+ if (TREE_CODE (incr) == PLUS_EXPR
+ && TREE_OPERAND (incr, 1) == decl)
+ {
+ data.expr_loc = EXPR_LOCATION (TREE_OPERAND (incr, 0));
+ walk_tree_1 (&TREE_OPERAND (incr, 0),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ }
+ else
+ {
+ data.expr_loc = EXPR_LOCATION (TREE_OPERAND (incr, 1));
+ walk_tree_1 (&TREE_OPERAND (incr, 1),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ }
+ }
+ }
+ return !data.fail;
+}
+
+/* Similar, but allows checking the init or cond expressions individually.  */
+
+bool
+c_omp_check_loop_iv_exprs (location_t stmt_loc, tree declv, tree decl,
+ tree init, tree cond, walk_tree_lh lh)
+{
+ hash_set<tree> pset;
+ struct c_omp_check_loop_iv_data data;
+
+ data.declv = declv;
+ data.fail = false;
+ data.stmt_loc = stmt_loc;
+ data.lh = lh;
+ data.ppset = &pset;
+ if (init)
+ {
+ data.expr_loc = EXPR_LOCATION (init);
+ data.kind = 0;
+ walk_tree_1 (&init,
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ }
+ if (cond)
+ {
+ gcc_assert (COMPARISON_CLASS_P (cond));
+ data.expr_loc = EXPR_LOCATION (cond);
+ data.kind = 1;
+ if (TREE_OPERAND (cond, 0) == decl)
+ walk_tree_1 (&TREE_OPERAND (cond, 1),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ else
+ walk_tree_1 (&TREE_OPERAND (cond, 0),
+ c_omp_check_loop_iv_r, &data, &pset, lh);
+ }
+ return !data.fail;
+}
+
/* This function splits clauses for OpenACC combined loop
constructs. OpenACC combined loop constructs are:
#pragma acc kernels loop
- #pragma acc parallel loop
-*/
+ #pragma acc parallel loop */
tree
c_oacc_split_loop_clauses (tree clauses, tree *not_loop_clauses)
s = C_OMP_CLAUSE_SPLIT_FOR;
}
break;
- /* Lastprivate is allowed on for, sections and simd. In
+ /* Lastprivate is allowed on distribute, for, sections and simd. In
parallel {for{, simd},sections} we actually want to put it on
parallel rather than for or sections. */
case OMP_CLAUSE_LASTPRIVATE:
+ if (code == OMP_DISTRIBUTE)
+ {
+ s = C_OMP_CLAUSE_SPLIT_DISTRIBUTE;
+ break;
+ }
+ if ((mask & (OMP_CLAUSE_MASK_1
+ << PRAGMA_OMP_CLAUSE_DIST_SCHEDULE)) != 0)
+ {
+ c = build_omp_clause (OMP_CLAUSE_LOCATION (clauses),
+ OMP_CLAUSE_LASTPRIVATE);
+ OMP_CLAUSE_DECL (c) = OMP_CLAUSE_DECL (clauses);
+ OMP_CLAUSE_CHAIN (c) = cclauses[C_OMP_CLAUSE_SPLIT_DISTRIBUTE];
+ cclauses[C_OMP_CLAUSE_SPLIT_DISTRIBUTE] = c;
+ }
if (code == OMP_FOR || code == OMP_SECTIONS)
{
if ((mask & (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_NUM_THREADS))
continue;
}
OMP_CLAUSE_DECL (c) = build_int_cst (integer_type_node, idx);
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_LINEAR
+ && OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c))
+ {
+ decl = OMP_CLAUSE_LINEAR_STEP (c);
+ for (arg = parms, idx = 0; arg;
+ arg = TREE_CHAIN (arg), idx++)
+ if (arg == decl)
+ break;
+ if (arg == NULL_TREE)
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qD is not an function argument", decl);
+ continue;
+ }
+ OMP_CLAUSE_LINEAR_STEP (c)
+ = build_int_cst (integer_type_node, idx);
+ }
}
clvec.safe_push (c);
}
break;
gcc_assert (arg);
OMP_CLAUSE_DECL (c) = arg;
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_LINEAR
+ && OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c))
+ {
+ idx = tree_to_shwi (OMP_CLAUSE_LINEAR_STEP (c));
+ for (arg = DECL_ARGUMENTS (fndecl), i = 0; arg;
+ arg = TREE_CHAIN (arg), i++)
+ if (i == idx)
+ break;
+ gcc_assert (arg);
+ OMP_CLAUSE_LINEAR_STEP (c) = arg;
+ }
}
}
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+ Ilya Verbin <ilya.verbin@intel.com>
+
+ * c-parser.c: Include context.h and gimple-expr.h.
+ (c_parser_omp_clause_schedule): Parse schedule modifiers, diagnose
+ monotonic together with nonmonotonic.
+ (c_parser_omp_for_loop): Call c_omp_check_loop_iv. Call add_stmt here.
+ (OMP_DISTRIBUTE_CLAUSE_MASK): Add lastprivate clause.
+ (c_parser_omp_target_data, c_parser_omp_target_enter_data,
+ c_parser_omp_target_exit_data): Allow GOMP_MAP_ALWAYS_POINTER.
+ (c_parser_omp_target): Likewise. Evaluate num_teams and thread_limit
+ expressions on combined target teams before the target.
+ (c_parser_omp_declare_target): If decl has "omp declare target" or
+ "omp declare target link" attribute, and cgraph or varpool node already
+ exists, then set corresponding flags. Call c_finish_omp_clauses
+ in the parenthesized extended-list syntax case.
+ * c-decl.c (c_decl_attributes): Don't diagnose block scope vars inside
+ declare target.
+ * c-typeck.c (handle_omp_array_sections_1): Allow non-zero low-bound
+ on OMP_CLAUSE_REDUCTION array sections.
+ (handle_omp_array_sections): Encode low-bound into the MEM_REF, either
+ into the constant offset, or for variable low-bound using
+ POINTER_PLUS_EXPR. For structure element based array sections use
+ GOMP_MAP_ALWAYS_POINTER instead of GOMP_MAP_FIRSTPRIVATE_POINTER.
+ (c_finish_omp_clauses): Drop generic_field_head, structure
+ elements are now always mapped even as array section bases,
+ diagnose same var in data sharing and mapping clauses. Diagnose if
+ linear step on declare simd is neither a constant nor a uniform
+ parameter. Look through POINTER_PLUS_EXPR for array section
+ reductions. Diagnose the same var or function appearing multiple
+ times on the same directive. Fix up wording for the to clause if t
+ is neither a FUNCTION_DECL nor a VAR_DECL. Diagnose nonmonotonic
+ modifier on kinds other than dynamic or guided or nonmonotonic
+ modifier together with ordered clause.
+
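A sketch (not from the patch) of the reduction array-section handling described
above; the names g, a and lb are illustrative:

  void
  g (int *a, int lb)
  {
    /* Constant low bound: folded into the MEM_REF offset.  */
    #pragma omp parallel for reduction(+: a[2:8])
    for (int i = 0; i < 64; i++)
      a[2 + (i % 8)]++;

    /* Variable low bound: encoded via POINTER_PLUS_EXPR.  */
    #pragma omp parallel for reduction(+: a[lb:8])
    for (int i = 0; i < 64; i++)
      a[lb + (i % 8)]++;
  }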
2015-11-03 Thomas Schwinge <thomas@codesourcery.com>
Chung-Lin Tang <cltang@codesourcery.com>
|| TREE_CODE (*node) == FUNCTION_DECL))
{
if (VAR_P (*node)
- && ((DECL_CONTEXT (*node)
- && TREE_CODE (DECL_CONTEXT (*node)) == FUNCTION_DECL)
- || (current_function_decl && !DECL_EXTERNAL (*node))))
- error ("%q+D in block scope inside of declare target directive",
- *node);
- else if (VAR_P (*node)
- && !lang_hooks.types.omp_mappable_type (TREE_TYPE (*node)))
+ && !lang_hooks.types.omp_mappable_type (TREE_TYPE (*node)))
error ("%q+D in declare target directive does not have mappable type",
*node);
else
#include "builtins.h"
#include "gomp-constants.h"
#include "c-family/c-indentation.h"
+#include "gimple-expr.h"
+#include "context.h"
\f
/* Initialization routine for this file. */
OpenMP 4.5:
schedule ( schedule-modifier : schedule-kind )
- schedule ( schedule-modifier : schedule-kind , expression )
+ schedule ( schedule-modifier [ , schedule-modifier ] : schedule-kind , expression )
schedule-modifier:
- simd */
+ simd
+ monotonic
+ nonmonotonic */
static tree
c_parser_omp_clause_schedule (c_parser *parser, tree list)
{
tree c, t;
location_t loc = c_parser_peek_token (parser)->location;
+ int modifiers = 0, nmodifiers = 0;
if (!c_parser_require (parser, CPP_OPEN_PAREN, "expected %<(%>"))
return list;
c = build_omp_clause (loc, OMP_CLAUSE_SCHEDULE);
- if (c_parser_next_token_is (parser, CPP_NAME))
+ while (c_parser_next_token_is (parser, CPP_NAME))
{
tree kind = c_parser_peek_token (parser)->value;
const char *p = IDENTIFIER_POINTER (kind);
- if (strcmp ("simd", p) == 0
- && c_parser_peek_2nd_token (parser)->type == CPP_COLON)
+ if (strcmp ("simd", p) == 0)
+ OMP_CLAUSE_SCHEDULE_SIMD (c) = 1;
+ else if (strcmp ("monotonic", p) == 0)
+ modifiers |= OMP_CLAUSE_SCHEDULE_MONOTONIC;
+ else if (strcmp ("nonmonotonic", p) == 0)
+ modifiers |= OMP_CLAUSE_SCHEDULE_NONMONOTONIC;
+ else
+ break;
+ c_parser_consume_token (parser);
+ if (nmodifiers++ == 0
+ && c_parser_next_token_is (parser, CPP_COMMA))
+ c_parser_consume_token (parser);
+ else
{
- OMP_CLAUSE_SCHEDULE_SIMD (c) = 1;
- c_parser_consume_token (parser);
- c_parser_consume_token (parser);
+ c_parser_require (parser, CPP_COLON, "expected %<:%>");
+ break;
}
}
+ if ((modifiers & (OMP_CLAUSE_SCHEDULE_MONOTONIC
+ | OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ == (OMP_CLAUSE_SCHEDULE_MONOTONIC
+ | OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ {
+ error_at (loc, "both %<monotonic%> and %<nonmonotonic%> modifiers "
+ "specified");
+ modifiers = 0;
+ }
+
if (c_parser_next_token_is (parser, CPP_NAME))
{
tree kind = c_parser_peek_token (parser)->value;
c_parser_skip_until_found (parser, CPP_CLOSE_PAREN,
"expected %<,%> or %<)%>");
+ OMP_CLAUSE_SCHEDULE_KIND (c)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (c) | modifiers);
+
check_no_duplicate_clause (list, OMP_CLAUSE_SCHEDULE, "schedule");
OMP_CLAUSE_CHAIN (c) = list;
return c;
{
stmt = c_finish_omp_for (loc, code, declv, NULL, initv, condv,
incrv, body, pre_body);
+
+ /* Check for iterators appearing in lb, b or incr expressions. */
+ if (stmt && !c_omp_check_loop_iv (stmt, declv, NULL))
+ stmt = NULL_TREE;
+
if (stmt)
{
+ add_stmt (stmt);
+
if (cclauses != NULL
&& cclauses[C_OMP_CLAUSE_SPLIT_PARALLEL] != NULL)
{
#define OMP_DISTRIBUTE_CLAUSE_MASK \
( (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_PRIVATE) \
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_FIRSTPRIVATE) \
+ | (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_LASTPRIVATE) \
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_DIST_SCHEDULE)\
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_COLLAPSE))
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
block = c_end_compound_stmt (loc, block, true);
if (ret == NULL_TREE)
return false;
+ if (ccode == OMP_TEAMS)
+ {
+ /* For combined target teams, ensure the num_teams and
+ thread_limit clause expressions are evaluated on the host,
+ before entering the target construct. */
+ tree c;
+ for (c = cclauses[C_OMP_CLAUSE_SPLIT_TEAMS];
+ c; c = OMP_CLAUSE_CHAIN (c))
+ if ((OMP_CLAUSE_CODE (c) == OMP_CLAUSE_NUM_TEAMS
+ || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_THREAD_LIMIT)
+ && TREE_CODE (OMP_CLAUSE_OPERAND (c, 0)) != INTEGER_CST)
+ {
+ tree expr = OMP_CLAUSE_OPERAND (c, 0);
+ tree tmp = create_tmp_var_raw (TREE_TYPE (expr));
+ expr = build4 (TARGET_EXPR, TREE_TYPE (expr), tmp,
+ expr, NULL_TREE, NULL_TREE);
+ add_stmt (expr);
+ OMP_CLAUSE_OPERAND (c, 0) = expr;
+ tree tc = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_FIRSTPRIVATE);
+ OMP_CLAUSE_DECL (tc) = tmp;
+ OMP_CLAUSE_CHAIN (tc)
+ = cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
+ cclauses[C_OMP_CLAUSE_SPLIT_TARGET] = tc;
+ }
+ }
tree stmt = make_node (OMP_TARGET);
TREE_TYPE (stmt) = void_type_node;
OMP_TARGET_CLAUSES (stmt) = cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
case GOMP_MAP_ALWAYS_TOFROM:
case GOMP_MAP_ALLOC:
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
error_at (OMP_CLAUSE_LOCATION (*pc),
{
clauses = c_parser_omp_var_list_parens (parser, OMP_CLAUSE_TO_DECLARE,
clauses);
+ clauses = c_finish_omp_clauses (clauses, true);
c_parser_skip_to_pragma_eol (parser);
}
else
continue;
}
if (!at1)
- DECL_ATTRIBUTES (t) = tree_cons (id, NULL_TREE, DECL_ATTRIBUTES (t));
+ {
+ symtab_node *node = symtab_node::get (t);
+ DECL_ATTRIBUTES (t) = tree_cons (id, NULL_TREE, DECL_ATTRIBUTES (t));
+ if (node != NULL)
+ {
+ node->offloadable = 1;
+#ifdef ENABLE_OFFLOADING
+ g->have_offload = true;
+ if (is_a <varpool_node *> (node))
+ {
+ vec_safe_push (offload_vars, t);
+ node->force_output = 1;
+ }
+#endif
+ }
+ }
}
}
&& (TREE_CODE (length) != INTEGER_CST || integer_onep (length)))
first_non_one++;
}
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
- && !integer_zerop (low_bound))
- {
- error_at (OMP_CLAUSE_LOCATION (c),
- "%<reduction%> array section has to be zero-based");
- return error_mark_node;
- }
if (TREE_CODE (type) == ARRAY_TYPE)
{
if (length == NULL_TREE
tree ptype = build_pointer_type (eltype);
if (TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)
t = build_fold_addr_expr (t);
- t = build2 (MEM_REF, type, t, build_int_cst (ptype, 0));
+ tree t2 = build_fold_addr_expr (first);
+ t2 = fold_convert_loc (OMP_CLAUSE_LOCATION (c),
+ ptrdiff_type_node, t2);
+ t2 = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR,
+ ptrdiff_type_node, t2,
+ fold_convert_loc (OMP_CLAUSE_LOCATION (c),
+ ptrdiff_type_node, t));
+ t2 = c_fully_fold (t2, false, NULL);
+ if (tree_fits_shwi_p (t2))
+ t = build2 (MEM_REF, type, t,
+ build_int_cst (ptype, tree_to_shwi (t2)));
+ else
+ {
+ t2 = fold_convert_loc (OMP_CLAUSE_LOCATION (c), sizetype, t2);
+ t = build2_loc (OMP_CLAUSE_LOCATION (c), POINTER_PLUS_EXPR,
+ TREE_TYPE (t), t, t2);
+ t = build2 (MEM_REF, type, t, build_int_cst (ptype, 0));
+ }
OMP_CLAUSE_DECL (c) = t;
return false;
}
break;
}
tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c), OMP_CLAUSE_MAP);
- OMP_CLAUSE_SET_MAP_KIND (c2, is_omp
- ? GOMP_MAP_FIRSTPRIVATE_POINTER
- : GOMP_MAP_POINTER);
- if (!is_omp && !c_mark_addressable (t))
+ if (!is_omp)
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_POINTER);
+ else if (TREE_CODE (t) == COMPONENT_REF)
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
+ else
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
+ if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && !c_mark_addressable (t))
return false;
OMP_CLAUSE_DECL (c2) = t;
t = build_fold_addr_expr (first);
c_finish_omp_clauses (tree clauses, bool is_omp, bool declare_simd)
{
bitmap_head generic_head, firstprivate_head, lastprivate_head;
- bitmap_head aligned_head, map_head, map_field_head, generic_field_head;
+ bitmap_head aligned_head, map_head, map_field_head;
tree c, t, type, *pc;
tree simdlen = NULL_TREE, safelen = NULL_TREE;
bool branch_seen = false;
bool copyprivate_seen = false;
+ bool linear_variable_step_check = false;
tree *nowait_clause = NULL;
+ bool ordered_seen = false;
+ tree schedule_clause = NULL_TREE;
bitmap_obstack_initialize (NULL);
bitmap_initialize (&generic_head, &bitmap_default_obstack);
bitmap_initialize (&aligned_head, &bitmap_default_obstack);
bitmap_initialize (&map_head, &bitmap_default_obstack);
bitmap_initialize (&map_field_head, &bitmap_default_obstack);
- bitmap_initialize (&generic_field_head, &bitmap_default_obstack);
for (pc = &clauses, c = clauses; c ; c = *pc)
{
break;
}
t = TREE_OPERAND (t, 0);
+ if (TREE_CODE (t) == POINTER_PLUS_EXPR)
+ t = TREE_OPERAND (t, 0);
if (TREE_CODE (t) == ADDR_EXPR)
t = TREE_OPERAND (t, 0);
}
remove = true;
break;
}
+ if (declare_simd)
+ {
+ tree s = OMP_CLAUSE_LINEAR_STEP (c);
+ if (TREE_CODE (s) == PARM_DECL)
+ {
+ OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c) = 1;
+ /* map_head bitmap is used as uniform_head if
+ declare_simd. */
+ if (!bitmap_bit_p (&map_head, DECL_UID (s)))
+ linear_variable_step_check = true;
+ goto check_dup_generic;
+ }
+ if (TREE_CODE (s) != INTEGER_CST)
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<linear%> clause step %qE is neither constant "
+ "nor a parameter", s);
+ remove = true;
+ break;
+ }
+ }
if (TREE_CODE (TREE_TYPE (OMP_CLAUSE_DECL (c))) == POINTER_TYPE)
{
tree s = OMP_CLAUSE_LINEAR_STEP (c);
"%qE appears more than once in data clauses", t);
remove = true;
}
+ else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_PRIVATE
+ && bitmap_bit_p (&map_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
bitmap_set_bit (&generic_head, DECL_UID (t));
break;
"%qE appears more than once in data clauses", t);
remove = true;
}
+ else if (bitmap_bit_p (&map_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
bitmap_set_bit (&firstprivate_head, DECL_UID (t));
break;
break;
if (VAR_P (t) || TREE_CODE (t) == PARM_DECL)
{
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && (OMP_CLAUSE_MAP_KIND (c)
- == GOMP_MAP_FIRSTPRIVATE_POINTER))
- {
- if (bitmap_bit_p (&generic_field_head, DECL_UID (t)))
- break;
- }
- else if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
+ if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
break;
}
}
error ("%qD appears more than once in data clauses", t);
remove = true;
}
- else
+ else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
- bitmap_set_bit (&generic_head, DECL_UID (t));
- if (t != OMP_CLAUSE_DECL (c)
- && TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF)
- bitmap_set_bit (&generic_field_head, DECL_UID (t));
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
}
+ else
+ bitmap_set_bit (&generic_head, DECL_UID (t));
}
else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
error ("%qD appears more than once in map clauses", t);
remove = true;
}
+ else if (bitmap_bit_p (&generic_head, DECL_UID (t))
+ || bitmap_bit_p (&firstprivate_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
{
bitmap_set_bit (&map_head, DECL_UID (t));
break;
case OMP_CLAUSE_TO_DECLARE:
- t = OMP_CLAUSE_DECL (c);
- if (TREE_CODE (t) == FUNCTION_DECL)
- break;
- /* FALLTHRU */
case OMP_CLAUSE_LINK:
t = OMP_CLAUSE_DECL (c);
- if (!VAR_P (t))
+ if (TREE_CODE (t) == FUNCTION_DECL
+ && OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO_DECLARE)
+ ;
+ else if (!VAR_P (t))
{
- error_at (OMP_CLAUSE_LOCATION (c),
- "%qE is not a variable in clause %qs", t,
- omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO_DECLARE)
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE is neither a variable nor a function name in "
+ "clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ else
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE is not a variable in clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
}
else if (DECL_THREAD_LOCAL_P (t))
omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
}
+ if (remove)
+ break;
+ if (bitmap_bit_p (&generic_head, DECL_UID (t)))
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE appears more than once on the same "
+ "%<declare target%> directive", t);
+ remove = true;
+ }
+ else
+ bitmap_set_bit (&generic_head, DECL_UID (t));
break;
case OMP_CLAUSE_UNIFORM:
remove = true;
break;
}
+ /* map_head bitmap is used as uniform_head if declare_simd. */
+ bitmap_set_bit (&map_head, DECL_UID (t));
goto check_dup_generic;
case OMP_CLAUSE_IS_DEVICE_PTR:
case OMP_CLAUSE_NUM_THREADS:
case OMP_CLAUSE_NUM_TEAMS:
case OMP_CLAUSE_THREAD_LIMIT:
- case OMP_CLAUSE_SCHEDULE:
- case OMP_CLAUSE_ORDERED:
case OMP_CLAUSE_DEFAULT:
case OMP_CLAUSE_UNTIED:
case OMP_CLAUSE_COLLAPSE:
pc = &OMP_CLAUSE_CHAIN (c);
continue;
+ case OMP_CLAUSE_SCHEDULE:
+ if (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
+ {
+ const char *p = NULL;
+ switch (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_MASK)
+ {
+ case OMP_CLAUSE_SCHEDULE_STATIC: p = "static"; break;
+ case OMP_CLAUSE_SCHEDULE_DYNAMIC: break;
+ case OMP_CLAUSE_SCHEDULE_GUIDED: break;
+ case OMP_CLAUSE_SCHEDULE_AUTO: p = "auto"; break;
+ case OMP_CLAUSE_SCHEDULE_RUNTIME: p = "runtime"; break;
+ default: gcc_unreachable ();
+ }
+ if (p)
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<nonmonotonic%> modifier specified for %qs "
+ "schedule kind", p);
+ OMP_CLAUSE_SCHEDULE_KIND (c)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (c)
+ & ~OMP_CLAUSE_SCHEDULE_NONMONOTONIC);
+ }
+ }
+ schedule_clause = c;
+ pc = &OMP_CLAUSE_CHAIN (c);
+ continue;
+
+ case OMP_CLAUSE_ORDERED:
+ ordered_seen = true;
+ pc = &OMP_CLAUSE_CHAIN (c);
+ continue;
+
case OMP_CLAUSE_SAFELEN:
safelen = c;
pc = &OMP_CLAUSE_CHAIN (c);
= OMP_CLAUSE_SAFELEN_EXPR (safelen);
}
+ if (ordered_seen
+ && schedule_clause
+ && (OMP_CLAUSE_SCHEDULE_KIND (schedule_clause)
+ & OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ {
+ error_at (OMP_CLAUSE_LOCATION (schedule_clause),
+ "%<nonmonotonic%> schedule modifier specified together "
+ "with %<ordered%> clause");
+ OMP_CLAUSE_SCHEDULE_KIND (schedule_clause)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (schedule_clause)
+ & ~OMP_CLAUSE_SCHEDULE_NONMONOTONIC);
+ }
+
+ if (linear_variable_step_check)
+ for (pc = &clauses, c = clauses; c ; c = *pc)
+ {
+ bool remove = false;
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_LINEAR
+ && OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c)
+ && !bitmap_bit_p (&map_head,
+ DECL_UID (OMP_CLAUSE_LINEAR_STEP (c))))
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<linear%> clause step is a parameter %qD not "
+ "specified in %<uniform%> clause",
+ OMP_CLAUSE_LINEAR_STEP (c));
+ remove = true;
+ }
+
+ if (remove)
+ *pc = OMP_CLAUSE_CHAIN (c);
+ else
+ pc = &OMP_CLAUSE_CHAIN (c);
+ }
+
bitmap_obstack_release (NULL);
return clauses;
}
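To illustrate the declare simd linear handling just above (example only, not
part of the patch): the linear step may now be a uniform parameter rather than
a constant, and a step parameter that is not uniform gets the new diagnostic.

  #pragma omp declare simd uniform(s) linear(p: s) notinbranch
  int
  h (int *p, int s)
  {
    return *p;
  }

  /* Without uniform(s) the step parameter is now diagnosed:
     "'linear' clause step is a parameter 's' not specified in
     'uniform' clause".  */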
/* These are only for integer/pointer arguments passed by value. */
SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP,
SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP,
- /* These 3 are only for reference type arguments or arguments passed
+ /* These 6 are only for reference type arguments or arguments passed
by reference. */
SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP,
+ SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP,
SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP,
+ SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP,
SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP,
+ SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP,
SIMD_CLONE_ARG_TYPE_MASK
};
/* For arg_type SIMD_CLONE_ARG_TYPE_LINEAR_*CONSTANT_STEP this is
the constant linear step, if arg_type is
- SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP, this is index of
+ SIMD_CLONE_ARG_TYPE_LINEAR_*VARIABLE_STEP, this is the index of
the uniform argument holding the step, otherwise 0. */
HOST_WIDE_INT linear_step;
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+ Ilya Verbin <ilya.verbin@intel.com>
+
+ * cp-tree.h (finish_omp_for): Add ORIG_INITS argument.
+ (omp_privatize_field): Add SHARED argument.
+ * parser.c: Include context.h.
+ (cp_parser_omp_clause_schedule): Parse schedule
+ modifiers, diagnose monotonic together with nonmonotonic.
+ (cp_parser_omp_clause_linear): Add DECLARE_SIMD argument. Parse a
+ parameter name used as the linear step as an id-expression rather
+ than an expression.
+ (cp_parser_omp_all_clauses): Adjust caller.
+ (cp_parser_omp_for_loop_init): Add ORIG_INIT argument,
+ initialize it. Adjust omp_privatize_field caller.
+ (cp_parser_omp_for_loop): Compute orig_inits, pass its address
+ to finish_omp_for.
+ (OMP_DISTRIBUTE_CLAUSE_MASK): Add lastprivate clause.
+ (cp_parser_omp_target_data,
+ cp_parser_omp_target_enter_data,
+ cp_parser_omp_target_exit_data): Allow GOMP_MAP_ALWAYS_POINTER
+ and GOMP_MAP_FIRSTPRIVATE_REFERENCE.
+ (cp_parser_omp_target): Likewise. Evaluate num_teams and
+ thread_limit expressions on combined target teams before the target.
+ (cp_parser_omp_declare_target): If decl has "omp declare target" or
+ "omp declare target link" attribute, and cgraph or varpool node already
+ exists, then set corresponding flags. Call finish_omp_clauses
+ in the parenthesized extended-list syntax case. Call
+ cp_parser_require_pragma_eol instead of cp_parser_skip_to_pragma_eol.
+ (cp_parser_omp_end_declare_target): Call cp_parser_require_pragma_eol
+ instead of cp_parser_skip_to_pragma_eol.
+ * decl2.c (cplus_decl_attributes): Don't diagnose block scope vars inside
+ declare target.
+ * pt.c (tsubst_omp_clauses): If OMP_CLAUSE_LINEAR_VARIABLE_STRIDE,
+ use tsubst_omp_clause_decl instead of tsubst_expr on
+ OMP_CLAUSE_LINEAR_STEP. Handle non-static data members in shared
+ clauses.
+ (tsubst_omp_for_iterator): Adjust omp_privatize_field caller.
+ (tsubst_find_omp_teams): New function.
+ (tsubst_expr): Evaluate num_teams and thread_limit expressions on
+ combined target teams before the target. Use OMP_FOR_ORIG_DECLS for
+ all OpenMP/OpenACC/Cilk+ looping constructs. Adjust finish_omp_for
+ caller.
+ * semantics.c (omp_privatize_field): Add SHARED argument, if true,
+ always create artificial var and never put it into the hash table
+ or vector.
+ (handle_omp_array_sections_1): Adjust omp_privatize_field caller.
+ Allow non-zero low-bound on OMP_CLAUSE_REDUCTION array sections.
+ (handle_omp_array_sections): For structure element
+ based array sections use GOMP_MAP_ALWAYS_POINTER instead of
+ GOMP_MAP_FIRSTPRIVATE_POINTER. Encode low-bound into the MEM_REF,
+ either into the constant offset, or for variable low-bound using
+ POINTER_PLUS_EXPR.
+ (finish_omp_clauses): Adjust omp_privatize_field caller. Drop
+ generic_field_head, structure elements are now always mapped even
+ as array section bases, diagnose same var in data sharing and
+ mapping clauses. For references map what they refer to using
+ GOMP_MAP_ALWAYS_POINTER for structure elements and
+ GOMP_MAP_FIRSTPRIVATE_REFERENCE otherwise. Diagnose if linear step
+ on declare simd is neither a constant nor a uniform parameter.
+ Allow non-static data members on shared clauses. Look through
+ POINTER_PLUS_EXPR for array section reductions. Diagnose nonmonotonic
+ modifier on kinds other than dynamic or guided or nonmonotonic
+ modifier together with ordered clause. Diagnose the same var or
+ function appearing multiple times on the same directive. Fix up
+ wording for the to clause if t is neither a FUNCTION_DECL nor a
+ VAR_DECL, use special wording for OVERLOADs and TEMPLATE_ID_EXPR.
+ (handle_omp_for_class_iterator): Add ORIG_DECLS argument. Call
+ c_omp_check_loop_iv_exprs on cond.
+ (finish_omp_for): Add ORIG_INITS argument. Call
+ c_omp_check_loop_iv_exprs on ORIG_INITS elements. Adjust
+ handle_omp_for_class_iterator caller. Call c_omp_check_loop_iv.
+ Call add_stmt.
+ (finish_omp_atomic): Adjust c_finish_omp_atomic caller.
+
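A small C++ sketch (illustrative only, not part of the patch) of two of the
C++ front-end changes listed above: a non-static data member named in a shared
clause, and a reference parameter whose referent is implicitly mapped on a
target construct, with GOMP_MAP_FIRSTPRIVATE_REFERENCE used for the reference
itself:

  struct S
  {
    int sum = 0;

    void
    accumulate (const int (&a)[64])
    {
      /* OpenMP 4.5: a non-static data member may be named in a
         data-sharing clause of a member function.  */
      #pragma omp parallel for shared(sum)
      for (int i = 0; i < 64; i++)
        {
          #pragma omp atomic
          sum += a[i];
        }
    }
  };

  void
  scale (int (&a)[64])
  {
    /* 'a' is a reference: the array it refers to is mapped tofrom and
       the reference itself via GOMP_MAP_FIRSTPRIVATE_REFERENCE.  */
    #pragma omp target teams distribute parallel for
    for (int i = 0; i < 64; i++)
      a[i] *= 2;
  }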
2015-11-04 Cesar Philippidis <cesar@codesourcery.com>
* (cp_parser_oacc_single_int_clause): New function.
extern tree finish_omp_task (tree, tree);
extern tree finish_omp_for (location_t, enum tree_code,
tree, tree, tree, tree, tree,
- tree, tree, tree);
+ tree, tree, vec<tree> *, tree);
extern void finish_omp_atomic (enum tree_code, enum tree_code,
tree, tree, tree, tree, tree,
bool);
extern void finish_omp_taskyield (void);
extern void finish_omp_cancel (tree);
extern void finish_omp_cancellation_point (tree);
-extern tree omp_privatize_field (tree);
+extern tree omp_privatize_field (tree, bool);
extern tree begin_transaction_stmt (location_t, tree *, int);
extern void finish_transaction_stmt (tree, tree, int, tree);
extern tree build_transaction_expr (location_t, tree, int, tree);
&& DECL_CLASS_SCOPE_P (*decl))
error ("%q+D static data member inside of declare target directive",
*decl);
- else if (VAR_P (*decl)
- && (DECL_FUNCTION_SCOPE_P (*decl)
- || (current_function_decl && !DECL_EXTERNAL (*decl))))
- error ("%q+D in block scope inside of declare target directive",
- *decl);
else if (!processing_template_decl
&& VAR_P (*decl)
&& !cp_omp_mappable_type (TREE_TYPE (*decl)))
#include "omp-low.h"
#include "gomp-constants.h"
#include "c-family/c-indentation.h"
+#include "context.h"
\f
/* The lexer. */
OpenMP 4.5:
schedule ( schedule-modifier : schedule-kind )
- schedule ( schedule-modifier : schedule-kind , expression )
+ schedule ( schedule-modifier [ , schedule-modifier ] : schedule-kind , expression )
schedule-modifier:
- simd */
+ simd
+ monotonic
+ nonmonotonic */
static tree
cp_parser_omp_clause_schedule (cp_parser *parser, tree list, location_t location)
{
tree c, t;
+ int modifiers = 0, nmodifiers = 0;
if (!cp_parser_require (parser, CPP_OPEN_PAREN, RT_OPEN_PAREN))
return list;
c = build_omp_clause (location, OMP_CLAUSE_SCHEDULE);
- if (cp_lexer_next_token_is (parser->lexer, CPP_NAME))
+ while (cp_lexer_next_token_is (parser->lexer, CPP_NAME))
{
tree id = cp_lexer_peek_token (parser->lexer)->u.value;
const char *p = IDENTIFIER_POINTER (id);
- if (strcmp ("simd", p) == 0
- && cp_lexer_nth_token_is (parser->lexer, 2, CPP_COLON))
+ if (strcmp ("simd", p) == 0)
+ OMP_CLAUSE_SCHEDULE_SIMD (c) = 1;
+ else if (strcmp ("monotonic", p) == 0)
+ modifiers |= OMP_CLAUSE_SCHEDULE_MONOTONIC;
+ else if (strcmp ("nonmonotonic", p) == 0)
+ modifiers |= OMP_CLAUSE_SCHEDULE_NONMONOTONIC;
+ else
+ break;
+ cp_lexer_consume_token (parser->lexer);
+ if (nmodifiers++ == 0
+ && cp_lexer_next_token_is (parser->lexer, CPP_COMMA))
+ cp_lexer_consume_token (parser->lexer);
+ else
{
- OMP_CLAUSE_SCHEDULE_SIMD (c) = 1;
- cp_lexer_consume_token (parser->lexer);
- cp_lexer_consume_token (parser->lexer);
+ cp_parser_require (parser, CPP_COLON, RT_COLON);
+ break;
}
}
goto invalid_kind;
cp_lexer_consume_token (parser->lexer);
+ if ((modifiers & (OMP_CLAUSE_SCHEDULE_MONOTONIC
+ | OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ == (OMP_CLAUSE_SCHEDULE_MONOTONIC
+ | OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ {
+ error_at (location, "both %<monotonic%> and %<nonmonotonic%> modifiers "
+ "specified");
+ modifiers = 0;
+ }
+
if (cp_lexer_next_token_is (parser->lexer, CPP_COMMA))
{
cp_token *token;
else if (!cp_parser_require (parser, CPP_CLOSE_PAREN, RT_COMMA_CLOSE_PAREN))
goto resync_fail;
+ OMP_CLAUSE_SCHEDULE_KIND (c)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (c) | modifiers);
+
check_no_duplicate_clause (list, OMP_CLAUSE_SCHEDULE, "schedule", location);
OMP_CLAUSE_CHAIN (c) = list;
return c;
static tree
cp_parser_omp_clause_linear (cp_parser *parser, tree list,
- bool is_cilk_simd_fn)
+ bool is_cilk_simd_fn, bool declare_simd)
{
tree nlist, c, step = integer_one_node;
bool colon;
if (colon)
{
- step = cp_parser_expression (parser);
+ step = NULL_TREE;
+ if (declare_simd
+ && cp_lexer_next_token_is (parser->lexer, CPP_NAME)
+ && cp_lexer_nth_token_is (parser->lexer, 2, CPP_CLOSE_PAREN))
+ {
+ cp_token *token = cp_lexer_peek_token (parser->lexer);
+ cp_parser_parse_tentatively (parser);
+ step = cp_parser_id_expression (parser, /*template_p=*/false,
+ /*check_dependency_p=*/true,
+ /*template_p=*/NULL,
+ /*declarator_p=*/false,
+ /*optional_p=*/false);
+ if (step != error_mark_node)
+ step = cp_parser_lookup_name_simple (parser, step, token->location);
+ if (step == error_mark_node)
+ {
+ step = NULL_TREE;
+ cp_parser_abort_tentative_parse (parser);
+ }
+ else if (!cp_parser_parse_definitely (parser))
+ step = NULL_TREE;
+ }
+ if (!step)
+ step = cp_parser_expression (parser);
if (is_cilk_simd_fn && TREE_CODE (step) == PARM_DECL)
{
tree clauses = NULL;
bool first = true;
cp_token *token = NULL;
- bool cilk_simd_fn = false;
while (cp_lexer_next_token_is_not (parser->lexer, CPP_PRAGMA_EOL))
{
c_name = "aligned";
break;
case PRAGMA_OMP_CLAUSE_LINEAR:
- if (((mask >> PRAGMA_CILK_CLAUSE_VECTORLENGTH) & 1) != 0)
- cilk_simd_fn = true;
- clauses = cp_parser_omp_clause_linear (parser, clauses, cilk_simd_fn);
+ {
+ bool cilk_simd_fn = false, declare_simd = false;
+ if (((mask >> PRAGMA_CILK_CLAUSE_VECTORLENGTH) & 1) != 0)
+ cilk_simd_fn = true;
+ else if (((mask >> PRAGMA_OMP_CLAUSE_UNIFORM) & 1) != 0)
+ declare_simd = true;
+ clauses = cp_parser_omp_clause_linear (parser, clauses,
+ cilk_simd_fn, declare_simd);
+ }
c_name = "linear";
break;
case PRAGMA_OMP_CLAUSE_DEPEND:
tree &this_pre_body,
vec<tree, va_gc> *for_block,
tree &init,
+ tree &orig_init,
tree &decl,
tree &real_decl)
{
cp_finish_decl (decl, init, !is_non_constant_init,
asm_specification,
LOOKUP_ONLYCONVERTING);
+ orig_init = init;
if (CLASS_TYPE_P (TREE_TYPE (decl)))
{
vec_safe_push (for_block, this_pre_body);
decl = cp_parser_lookup_name_simple (parser, name,
token->location);
if (TREE_CODE (decl) == FIELD_DECL)
- add_private_clause = omp_privatize_field (decl);
+ add_private_clause = omp_privatize_field (decl, false);
}
cp_parser_abort_tentative_parse (parser);
cp_parser_parse_tentatively (parser);
cp_parser_parse_definitely (parser);
cp_parser_require (parser, CPP_EQ, RT_EQ);
rhs = cp_parser_assignment_expression (parser);
+ orig_init = rhs;
finish_expr_stmt (build_x_modify_expr (EXPR_LOCATION (rhs),
decl, NOP_EXPR,
rhs,
cp_parser_omp_for_loop (cp_parser *parser, enum tree_code code, tree clauses,
tree *cclauses)
{
- tree init, cond, incr, body, decl, pre_body = NULL_TREE, ret;
+ tree init, orig_init, cond, incr, body, decl, pre_body = NULL_TREE, ret;
tree real_decl, initv, condv, incrv, declv;
tree this_pre_body, cl, ordered_cl = NULL_TREE;
location_t loc_first;
bool collapse_err = false;
int i, collapse = 1, ordered = 0, count, nbraces = 0;
vec<tree, va_gc> *for_block = make_tree_vector ();
+ auto_vec<tree, 4> orig_inits;
for (cl = clauses; cl; cl = OMP_CLAUSE_CHAIN (cl))
if (OMP_CLAUSE_CODE (cl) == OMP_CLAUSE_COLLAPSE)
if (!cp_parser_require (parser, CPP_OPEN_PAREN, RT_OPEN_PAREN))
return NULL;
- init = decl = real_decl = NULL;
+ init = orig_init = decl = real_decl = NULL;
this_pre_body = push_stmt_list ();
add_private_clause
= cp_parser_omp_for_loop_init (parser, code,
this_pre_body, for_block,
- init, decl, real_decl);
+ init, orig_init, decl, real_decl);
cp_parser_require (parser, CPP_SEMICOLON, RT_SEMICOLON);
if (this_pre_body)
TREE_VEC_ELT (initv, i) = init;
TREE_VEC_ELT (condv, i) = cond;
TREE_VEC_ELT (incrv, i) = incr;
+ if (orig_init)
+ {
+ orig_inits.safe_grow_cleared (i + 1);
+ orig_inits[i] = orig_init;
+ }
if (i == count - 1)
break;
ret = NULL_TREE;
else
ret = finish_omp_for (loc_first, code, declv, NULL, initv, condv, incrv,
- body, pre_body, clauses);
+ body, pre_body, &orig_inits, clauses);
while (nbraces)
{
#define OMP_DISTRIBUTE_CLAUSE_MASK \
( (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_PRIVATE) \
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_FIRSTPRIVATE) \
+ | (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_LASTPRIVATE) \
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_DIST_SCHEDULE)\
| (OMP_CLAUSE_MASK_1 << PRAGMA_OMP_CLAUSE_COLLAPSE))
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
map_seen = 3;
break;
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
map_seen |= 1;
tree body = finish_omp_structured_block (sb);
if (ret == NULL_TREE)
return false;
+ if (ccode == OMP_TEAMS && !processing_template_decl)
+ {
+ /* For combined target teams, ensure the num_teams and
+ thread_limit clause expressions are evaluated on the host,
+ before entering the target construct. */
+ tree c;
+ for (c = cclauses[C_OMP_CLAUSE_SPLIT_TEAMS];
+ c; c = OMP_CLAUSE_CHAIN (c))
+ if ((OMP_CLAUSE_CODE (c) == OMP_CLAUSE_NUM_TEAMS
+ || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_THREAD_LIMIT)
+ && TREE_CODE (OMP_CLAUSE_OPERAND (c, 0)) != INTEGER_CST)
+ {
+ tree expr = OMP_CLAUSE_OPERAND (c, 0);
+ expr = force_target_expr (TREE_TYPE (expr), expr, tf_none);
+ if (expr == error_mark_node)
+ continue;
+ tree tmp = TARGET_EXPR_SLOT (expr);
+ add_stmt (expr);
+ OMP_CLAUSE_OPERAND (c, 0) = expr;
+ tree tc = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_FIRSTPRIVATE);
+ OMP_CLAUSE_DECL (tc) = tmp;
+ OMP_CLAUSE_CHAIN (tc)
+ = cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
+ cclauses[C_OMP_CLAUSE_SPLIT_TARGET] = tc;
+ }
+ }
tree stmt = make_node (OMP_TARGET);
TREE_TYPE (stmt) = void_type_node;
OMP_TARGET_CLAUSES (stmt) = cclauses[C_OMP_CLAUSE_SPLIT_TARGET];
case GOMP_MAP_ALWAYS_TOFROM:
case GOMP_MAP_ALLOC:
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
default:
error_at (OMP_CLAUSE_LOCATION (*pc),
{
clauses = cp_parser_omp_var_list (parser, OMP_CLAUSE_TO_DECLARE,
clauses);
- cp_parser_skip_to_pragma_eol (parser, pragma_tok);
+ clauses = finish_omp_clauses (clauses, true);
+ cp_parser_require_pragma_eol (parser, pragma_tok);
}
else
{
- cp_parser_skip_to_pragma_eol (parser, pragma_tok);
+ cp_parser_require_pragma_eol (parser, pragma_tok);
scope_chain->omp_declare_target_attribute++;
return;
}
continue;
}
if (!at1)
- DECL_ATTRIBUTES (t) = tree_cons (id, NULL_TREE, DECL_ATTRIBUTES (t));
+ {
+ symtab_node *node = symtab_node::get (t);
+ DECL_ATTRIBUTES (t) = tree_cons (id, NULL_TREE, DECL_ATTRIBUTES (t));
+ if (node != NULL)
+ {
+ node->offloadable = 1;
+#ifdef ENABLE_OFFLOADING
+ g->have_offload = true;
+ if (is_a <varpool_node *> (node))
+ {
+ vec_safe_push (offload_vars, t);
+ node->force_output = 1;
+ }
+#endif
+ }
+ }
}
}
cp_parser_skip_to_pragma_eol (parser, pragma_tok);
return;
}
- cp_parser_skip_to_pragma_eol (parser, pragma_tok);
+ cp_parser_require_pragma_eol (parser, pragma_tok);
if (!scope_chain->omp_declare_target_attribute)
error_at (pragma_tok->location,
"%<#pragma omp end declare target%> without corresponding "
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
in_decl);
break;
- case OMP_CLAUSE_LINEAR:
case OMP_CLAUSE_ALIGNED:
OMP_CLAUSE_DECL (nc)
= tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
OMP_CLAUSE_OPERAND (nc, 1)
= tsubst_expr (OMP_CLAUSE_OPERAND (oc, 1), args, complain,
in_decl, /*integral_constant_expression_p=*/false);
- if (OMP_CLAUSE_CODE (oc) == OMP_CLAUSE_LINEAR
- && OMP_CLAUSE_LINEAR_STEP (oc) == NULL_TREE)
+ break;
+ case OMP_CLAUSE_LINEAR:
+ OMP_CLAUSE_DECL (nc)
+ = tsubst_omp_clause_decl (OMP_CLAUSE_DECL (oc), args, complain,
+ in_decl);
+ if (OMP_CLAUSE_LINEAR_STEP (oc) == NULL_TREE)
{
gcc_assert (!linear_no_step);
linear_no_step = nc;
}
+ else if (OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (oc))
+ OMP_CLAUSE_LINEAR_STEP (nc)
+ = tsubst_omp_clause_decl (OMP_CLAUSE_LINEAR_STEP (oc), args,
+ complain, in_decl);
+ else
+ OMP_CLAUSE_LINEAR_STEP (nc)
+ = tsubst_expr (OMP_CLAUSE_LINEAR_STEP (oc), args, complain,
+ in_decl,
+ /*integral_constant_expression_p=*/false);
break;
case OMP_CLAUSE_NOWAIT:
case OMP_CLAUSE_DEFAULT:
if (allow_fields)
switch (OMP_CLAUSE_CODE (nc))
{
+ case OMP_CLAUSE_SHARED:
case OMP_CLAUSE_PRIVATE:
case OMP_CLAUSE_FIRSTPRIVATE:
case OMP_CLAUSE_LASTPRIVATE:
&& DECL_NAME (v) == this_identifier)
{
decl = TREE_OPERAND (decl, 1);
- decl = omp_privatize_field (decl);
+ decl = omp_privatize_field (decl, false);
}
/* FALLTHRU */
default:
#undef RECUR
}
+/* Helper function of tsubst_expr, find OMP_TEAMS inside
+ of OMP_TARGET's body. */
+
+static tree
+tsubst_find_omp_teams (tree *tp, int *walk_subtrees, void *)
+{
+ *walk_subtrees = 0;
+ switch (TREE_CODE (*tp))
+ {
+ case OMP_TEAMS:
+ return *tp;
+ case BIND_EXPR:
+ case STATEMENT_LIST:
+ *walk_subtrees = 1;
+ break;
+ default:
+ break;
+ }
+ return NULL_TREE;
+}
+
/* Like tsubst_copy for expressions, etc. but also does semantic
processing. */
if (OMP_FOR_INIT (t) != NULL_TREE)
{
declv = make_tree_vec (TREE_VEC_LENGTH (OMP_FOR_INIT (t)));
- if (TREE_CODE (t) == OMP_FOR && OMP_FOR_ORIG_DECLS (t))
+ if (OMP_FOR_ORIG_DECLS (t))
orig_declv = make_tree_vec (TREE_VEC_LENGTH (OMP_FOR_INIT (t)));
initv = make_tree_vec (TREE_VEC_LENGTH (OMP_FOR_INIT (t)));
condv = make_tree_vec (TREE_VEC_LENGTH (OMP_FOR_INIT (t)));
if (OMP_FOR_INIT (t) != NULL_TREE)
t = finish_omp_for (EXPR_LOCATION (t), TREE_CODE (t), declv,
orig_declv, initv, condv, incrv, body, pre_body,
- clauses);
+ NULL, clauses);
else
{
t = make_node (TREE_CODE (t));
t = copy_node (t);
OMP_BODY (t) = stmt;
OMP_CLAUSES (t) = tmp;
+ if (TREE_CODE (t) == OMP_TARGET && OMP_TARGET_COMBINED (t))
+ {
+ tree teams = cp_walk_tree (&stmt, tsubst_find_omp_teams, NULL, NULL);
+ if (teams)
+ {
+ /* For combined target teams, ensure the num_teams and
+ thread_limit clause expressions are evaluated on the host,
+ before entering the target construct. */
+ tree c;
+ for (c = OMP_TEAMS_CLAUSES (teams);
+ c; c = OMP_CLAUSE_CHAIN (c))
+ if ((OMP_CLAUSE_CODE (c) == OMP_CLAUSE_NUM_TEAMS
+ || OMP_CLAUSE_CODE (c) == OMP_CLAUSE_THREAD_LIMIT)
+ && TREE_CODE (OMP_CLAUSE_OPERAND (c, 0)) != INTEGER_CST)
+ {
+ tree expr = OMP_CLAUSE_OPERAND (c, 0);
+ expr = force_target_expr (TREE_TYPE (expr), expr, tf_none);
+ if (expr == error_mark_node)
+ continue;
+ tmp = TARGET_EXPR_SLOT (expr);
+ add_stmt (expr);
+ OMP_CLAUSE_OPERAND (c, 0) = expr;
+ tree tc = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_FIRSTPRIVATE);
+ OMP_CLAUSE_DECL (tc) = tmp;
+ OMP_CLAUSE_CHAIN (tc) = OMP_TARGET_CLAUSES (t);
+ OMP_TARGET_CLAUSES (t) = tc;
+ }
+ }
+ }
add_stmt (t);
break;
dummy VAR_DECL. */
tree
-omp_privatize_field (tree t)
+omp_privatize_field (tree t, bool shared)
{
tree m = finish_non_static_data_member (t, NULL_TREE, NULL_TREE);
if (m == error_mark_node)
return error_mark_node;
- if (!omp_private_member_map)
+ if (!omp_private_member_map && !shared)
omp_private_member_map = new hash_map<tree, tree>;
if (TREE_CODE (TREE_TYPE (t)) == REFERENCE_TYPE)
{
gcc_assert (TREE_CODE (m) == INDIRECT_REF);
m = TREE_OPERAND (m, 0);
}
- tree &v = omp_private_member_map->get_or_insert (t);
+ tree vb = NULL_TREE;
+ tree &v = shared ? vb : omp_private_member_map->get_or_insert (t);
if (v == NULL_TREE)
{
v = create_temporary_var (TREE_TYPE (m));
DECL_OMP_PRIVATIZED_MEMBER (v) = 1;
SET_DECL_VALUE_EXPR (v, m);
DECL_HAS_VALUE_EXPR_P (v) = 1;
- omp_private_member_vec.safe_push (t);
+ if (!shared)
+ omp_private_member_vec.safe_push (t);
}
return v;
}
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
&& TREE_CODE (TREE_CHAIN (t)) == FIELD_DECL)
- TREE_CHAIN (t) = omp_privatize_field (TREE_CHAIN (t));
+ TREE_CHAIN (t) = omp_privatize_field (TREE_CHAIN (t), false);
ret = handle_omp_array_sections_1 (c, TREE_CHAIN (t), types,
maybe_zero_len, first_non_one, is_omp);
if (ret == error_mark_node || ret == NULL_TREE)
&& (TREE_CODE (length) != INTEGER_CST || integer_onep (length)))
first_non_one++;
}
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_REDUCTION
- && !integer_zerop (low_bound))
- {
- error_at (OMP_CLAUSE_LOCATION (c),
- "%<reduction%> array section has to be zero-based");
- return error_mark_node;
- }
if (TREE_CODE (type) == ARRAY_TYPE)
{
if (length == NULL_TREE
t = convert_from_reference (t);
else if (TREE_CODE (TREE_TYPE (t)) == ARRAY_TYPE)
t = build_fold_addr_expr (t);
- t = build2 (MEM_REF, type, t, build_int_cst (ptype, 0));
+ tree t2 = build_fold_addr_expr (first);
+ t2 = fold_convert_loc (OMP_CLAUSE_LOCATION (c),
+ ptrdiff_type_node, t2);
+ t2 = fold_build2_loc (OMP_CLAUSE_LOCATION (c), MINUS_EXPR,
+ ptrdiff_type_node, t2,
+ fold_convert_loc (OMP_CLAUSE_LOCATION (c),
+ ptrdiff_type_node, t));
+ if (tree_fits_shwi_p (t2))
+ t = build2 (MEM_REF, type, t,
+ build_int_cst (ptype, tree_to_shwi (t2)));
+ else
+ {
+ t2 = fold_convert_loc (OMP_CLAUSE_LOCATION (c),
+ sizetype, t2);
+ t = build2_loc (OMP_CLAUSE_LOCATION (c), POINTER_PLUS_EXPR,
+ TREE_TYPE (t), t, t2);
+ t = build2 (MEM_REF, type, t, build_int_cst (ptype, 0));
+ }
OMP_CLAUSE_DECL (c) = t;
return false;
}
}
tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
OMP_CLAUSE_MAP);
- OMP_CLAUSE_SET_MAP_KIND (c2, is_omp ? GOMP_MAP_FIRSTPRIVATE_POINTER
- : GOMP_MAP_POINTER);
- if (!is_omp && !cxx_mark_addressable (t))
+ if (!is_omp)
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_POINTER);
+ else if (TREE_CODE (t) == COMPONENT_REF)
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
+ else if (REFERENCE_REF_P (t)
+ && TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF)
+ {
+ t = TREE_OPERAND (t, 0);
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
+ }
+ else
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_FIRSTPRIVATE_POINTER);
+ if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && !cxx_mark_addressable (t))
return false;
OMP_CLAUSE_DECL (c2) = t;
t = build_fold_addr_expr (first);
OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
OMP_CLAUSE_CHAIN (c) = c2;
ptr = OMP_CLAUSE_DECL (c2);
- if (!is_omp
+ if (OMP_CLAUSE_MAP_KIND (c2) != GOMP_MAP_FIRSTPRIVATE_POINTER
&& TREE_CODE (TREE_TYPE (ptr)) == REFERENCE_TYPE
&& POINTER_TYPE_P (TREE_TYPE (TREE_TYPE (ptr))))
{
tree c3 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
OMP_CLAUSE_MAP);
- OMP_CLAUSE_SET_MAP_KIND (c3, GOMP_MAP_POINTER);
+ OMP_CLAUSE_SET_MAP_KIND (c3, OMP_CLAUSE_MAP_KIND (c2));
OMP_CLAUSE_DECL (c3) = ptr;
- OMP_CLAUSE_DECL (c2) = convert_from_reference (ptr);
+ if (OMP_CLAUSE_MAP_KIND (c2) == GOMP_MAP_ALWAYS_POINTER)
+ OMP_CLAUSE_DECL (c2) = build_simple_mem_ref (ptr);
+ else
+ OMP_CLAUSE_DECL (c2) = convert_from_reference (ptr);
OMP_CLAUSE_SIZE (c3) = size_zero_node;
OMP_CLAUSE_CHAIN (c3) = OMP_CLAUSE_CHAIN (c2);
OMP_CLAUSE_CHAIN (c2) = c3;
finish_omp_clauses (tree clauses, bool allow_fields, bool declare_simd)
{
bitmap_head generic_head, firstprivate_head, lastprivate_head;
- bitmap_head aligned_head, map_head, map_field_head, generic_field_head;
+ bitmap_head aligned_head, map_head, map_field_head;
tree c, t, *pc;
tree safelen = NULL_TREE;
bool branch_seen = false;
bool copyprivate_seen = false;
+ bool ordered_seen = false;
bitmap_obstack_initialize (NULL);
bitmap_initialize (&generic_head, &bitmap_default_obstack);
bitmap_initialize (&aligned_head, &bitmap_default_obstack);
bitmap_initialize (&map_head, &bitmap_default_obstack);
bitmap_initialize (&map_field_head, &bitmap_default_obstack);
- bitmap_initialize (&generic_field_head, &bitmap_default_obstack);
for (pc = &clauses, c = clauses; c ; c = *pc)
{
switch (OMP_CLAUSE_CODE (c))
{
case OMP_CLAUSE_SHARED:
+ field_ok = allow_fields;
goto check_dup_generic;
case OMP_CLAUSE_PRIVATE:
field_ok = allow_fields;
{
gcc_assert (TREE_CODE (t) == MEM_REF);
t = TREE_OPERAND (t, 0);
+ if (TREE_CODE (t) == POINTER_PLUS_EXPR)
+ t = TREE_OPERAND (t, 0);
if (TREE_CODE (t) == ADDR_EXPR
|| TREE_CODE (t) == INDIRECT_REF)
t = TREE_OPERAND (t, 0);
break;
}
else if (!type_dependent_expression_p (t)
- && !INTEGRAL_TYPE_P (TREE_TYPE (t)))
+ && !INTEGRAL_TYPE_P (TREE_TYPE (t))
+ && (!declare_simd
+ || TREE_CODE (t) != PARM_DECL
+ || TREE_CODE (TREE_TYPE (t)) != REFERENCE_TYPE
+ || !INTEGRAL_TYPE_P (TREE_TYPE (TREE_TYPE (t)))))
{
error ("linear step expression must be integral");
remove = true;
else
{
t = mark_rvalue_use (t);
+ if (declare_simd && TREE_CODE (t) == PARM_DECL)
+ {
+ OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c) = 1;
+ goto check_dup_generic;
+ }
if (!processing_template_decl
&& (VAR_P (OMP_CLAUSE_DECL (c))
|| TREE_CODE (OMP_CLAUSE_DECL (c)) == PARM_DECL))
{
- if (TREE_CODE (OMP_CLAUSE_DECL (c)) == PARM_DECL)
- t = maybe_constant_value (t);
+ if (declare_simd)
+ {
+ t = maybe_constant_value (t);
+ if (TREE_CODE (t) != INTEGER_CST)
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<linear%> clause step %qE is neither "
+ "constant nor a parameter", t);
+ remove = true;
+ break;
+ }
+ }
t = fold_build_cleanup_point_expr (TREE_TYPE (t), t);
tree type = TREE_TYPE (OMP_CLAUSE_DECL (c));
if (TREE_CODE (type) == REFERENCE_TYPE)
t = omp_clause_decl_field (OMP_CLAUSE_DECL (c));
if (t)
{
- if (!remove)
+ if (!remove && OMP_CLAUSE_CODE (c) != OMP_CLAUSE_SHARED)
omp_note_field_privatization (t, OMP_CLAUSE_DECL (c));
}
else
error ("%qD appears more than once in data clauses", t);
remove = true;
}
+ else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_PRIVATE
+ && bitmap_bit_p (&map_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
bitmap_set_bit (&generic_head, DECL_UID (t));
if (!field_ok)
&& TREE_CODE (t) == FIELD_DECL
&& t == OMP_CLAUSE_DECL (c))
{
- OMP_CLAUSE_DECL (c) = omp_privatize_field (t);
+ OMP_CLAUSE_DECL (c)
+ = omp_privatize_field (t, (OMP_CLAUSE_CODE (c)
+ == OMP_CLAUSE_SHARED));
if (OMP_CLAUSE_DECL (c) == error_mark_node)
remove = true;
}
error ("%qD appears more than once in data clauses", t);
remove = true;
}
+ else if (bitmap_bit_p (&map_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
bitmap_set_bit (&firstprivate_head, DECL_UID (t));
goto handle_field_decl;
break;
case OMP_CLAUSE_SCHEDULE:
+ if (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
+ {
+ const char *p = NULL;
+ switch (OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_MASK)
+ {
+ case OMP_CLAUSE_SCHEDULE_STATIC: p = "static"; break;
+ case OMP_CLAUSE_SCHEDULE_DYNAMIC: break;
+ case OMP_CLAUSE_SCHEDULE_GUIDED: break;
+ case OMP_CLAUSE_SCHEDULE_AUTO: p = "auto"; break;
+ case OMP_CLAUSE_SCHEDULE_RUNTIME: p = "runtime"; break;
+ default: gcc_unreachable ();
+ }
+ if (p)
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<nonmonotonic%> modifier specified for %qs "
+ "schedule kind", p);
+ OMP_CLAUSE_SCHEDULE_KIND (c)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (c)
+ & ~OMP_CLAUSE_SCHEDULE_NONMONOTONIC);
+ }
+ }
+
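A short sketch (invented identifiers) of what the new check above diagnoses: the nonmonotonic modifier is only meaningful for dynamic and guided schedules, so pairing it with, say, static is rejected and the modifier bit is cleared for error recovery:

    void
    bad_schedule (int n, int *a)
    {
      /* Rejected: "nonmonotonic modifier specified for static schedule kind".  */
      #pragma omp parallel for schedule (nonmonotonic: static, 4)
      for (int i = 0; i < n; i++)
        a[i] = i;
    }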
t = OMP_CLAUSE_SCHEDULE_CHUNK_EXPR (c);
if (t == NULL)
;
}
if (REFERENCE_REF_P (t)
&& TREE_CODE (TREE_OPERAND (t, 0)) == COMPONENT_REF)
- t = TREE_OPERAND (t, 0);
+ {
+ t = TREE_OPERAND (t, 0);
+ OMP_CLAUSE_DECL (c) = t;
+ }
if (TREE_CODE (t) == COMPONENT_REF
&& allow_fields
&& OMP_CLAUSE_CODE (c) != OMP_CLAUSE__CACHE_)
break;
if (VAR_P (t) || TREE_CODE (t) == PARM_DECL)
{
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && (OMP_CLAUSE_MAP_KIND (c)
- == GOMP_MAP_FIRSTPRIVATE_POINTER))
- {
- if (bitmap_bit_p (&generic_field_head, DECL_UID (t)))
- break;
- }
- else if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
- break;
+ if (bitmap_bit_p (&map_field_head, DECL_UID (t)))
+ goto handle_map_references;
}
}
if (!VAR_P (t) && TREE_CODE (t) != PARM_DECL)
if (processing_template_decl)
break;
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER)
+ && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER
+ || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_POINTER))
break;
if (DECL_P (t))
error ("%qD is not a variable in %qs clause", t,
error ("%qD appears more than once in data clauses", t);
remove = true;
}
- else
+ else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
- bitmap_set_bit (&generic_head, DECL_UID (t));
- if (t != OMP_CLAUSE_DECL (c)
- && (TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF
- || (REFERENCE_REF_P (OMP_CLAUSE_DECL (c))
- && (TREE_CODE (TREE_OPERAND (OMP_CLAUSE_DECL (c),
- 0))
- == COMPONENT_REF))))
- bitmap_set_bit (&generic_field_head, DECL_UID (t));
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
}
+ else
+ bitmap_set_bit (&generic_head, DECL_UID (t));
}
else if (bitmap_bit_p (&map_head, DECL_UID (t)))
{
error ("%qD appears more than once in map clauses", t);
remove = true;
}
+ else if (bitmap_bit_p (&generic_head, DECL_UID (t))
+ || bitmap_bit_p (&firstprivate_head, DECL_UID (t)))
+ {
+ error ("%qD appears both in data and map clauses", t);
+ remove = true;
+ }
else
{
bitmap_set_bit (&map_head, DECL_UID (t));
&& TREE_CODE (OMP_CLAUSE_DECL (c)) == COMPONENT_REF)
bitmap_set_bit (&map_field_head, DECL_UID (t));
}
+ handle_map_references:
+ if (!remove
+ && !processing_template_decl
+ && allow_fields
+ && TREE_CODE (TREE_TYPE (OMP_CLAUSE_DECL (c))) == REFERENCE_TYPE)
+ {
+ t = OMP_CLAUSE_DECL (c);
+ if (OMP_CLAUSE_CODE (c) != OMP_CLAUSE_MAP)
+ {
+ OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t);
+ if (OMP_CLAUSE_SIZE (c) == NULL_TREE)
+ OMP_CLAUSE_SIZE (c)
+ = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t)));
+ }
+ else if (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_FIRSTPRIVATE_REFERENCE)
+ && (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_ALWAYS_POINTER))
+ {
+ tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_MAP);
+ if (TREE_CODE (t) == COMPONENT_REF)
+ OMP_CLAUSE_SET_MAP_KIND (c2, GOMP_MAP_ALWAYS_POINTER);
+ else
+ OMP_CLAUSE_SET_MAP_KIND (c2,
+ GOMP_MAP_FIRSTPRIVATE_REFERENCE);
+ OMP_CLAUSE_DECL (c2) = t;
+ OMP_CLAUSE_SIZE (c2) = size_zero_node;
+ OMP_CLAUSE_CHAIN (c2) = OMP_CLAUSE_CHAIN (c);
+ OMP_CLAUSE_CHAIN (c) = c2;
+ OMP_CLAUSE_DECL (c) = build_simple_mem_ref (t);
+ if (OMP_CLAUSE_SIZE (c) == NULL_TREE)
+ OMP_CLAUSE_SIZE (c)
+ = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (t)));
+ c = c2;
+ }
+ }
break;
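A hedged C++ sketch (names invented) of the reference handling added above: when a map clause names a variable of reference type, the clause is rewritten to map what the reference refers to, and a GOMP_MAP_FIRSTPRIVATE_REFERENCE clause (or GOMP_MAP_ALWAYS_POINTER for members) is chained after it for the reference itself:

    void
    map_ref (int &r)
    {
      #pragma omp target map (tofrom: r)
      r++;
    }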
case OMP_CLAUSE_TO_DECLARE:
- t = OMP_CLAUSE_DECL (c);
- if (TREE_CODE (t) == FUNCTION_DECL)
- break;
- /* FALLTHRU */
case OMP_CLAUSE_LINK:
t = OMP_CLAUSE_DECL (c);
- if (!VAR_P (t))
+ if (TREE_CODE (t) == FUNCTION_DECL
+ && OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO_DECLARE)
+ ;
+ else if (!VAR_P (t))
{
- error_at (OMP_CLAUSE_LOCATION (c),
- "%qE is not a variable in clause %qs", t,
- omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_TO_DECLARE)
+ {
+ if (TREE_CODE (t) == OVERLOAD && OVL_CHAIN (t))
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "overloaded function name %qE in clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ else if (TREE_CODE (t) == TEMPLATE_ID_EXPR)
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "template %qE in clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ else
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE is neither a variable nor a function name "
+ "in clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
+ }
+ else
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE is not a variable in clause %qs", t,
+ omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
}
else if (DECL_THREAD_LOCAL_P (t))
omp_clause_code_name[OMP_CLAUSE_CODE (c)]);
remove = true;
}
+ if (remove)
+ break;
+ if (bitmap_bit_p (&generic_head, DECL_UID (t)))
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%qE appears more than once on the same "
+ "%<declare target%> directive", t);
+ remove = true;
+ }
+ else
+ bitmap_set_bit (&generic_head, DECL_UID (t));
break;
case OMP_CLAUSE_UNIFORM:
remove = true;
break;
}
+ /* map_head bitmap is used as uniform_head if declare_simd. */
+ bitmap_set_bit (&map_head, DECL_UID (t));
goto check_dup_generic;
case OMP_CLAUSE_GRAINSIZE:
goto check_dup_generic;
case OMP_CLAUSE_NOWAIT:
- case OMP_CLAUSE_ORDERED:
case OMP_CLAUSE_DEFAULT:
case OMP_CLAUSE_UNTIED:
case OMP_CLAUSE_COLLAPSE:
case OMP_CLAUSE_SEQ:
break;
+ case OMP_CLAUSE_ORDERED:
+ ordered_seen = true;
+ break;
+
case OMP_CLAUSE_INBRANCH:
case OMP_CLAUSE_NOTINBRANCH:
if (branch_seen)
case OMP_CLAUSE_LINEAR:
if (!declare_simd)
need_implicitly_determined = true;
+ else if (OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (c)
+ && !bitmap_bit_p (&map_head,
+ DECL_UID (OMP_CLAUSE_LINEAR_STEP (c))))
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<linear%> clause step is a parameter %qD not "
+ "specified in %<uniform%> clause",
+ OMP_CLAUSE_LINEAR_STEP (c));
+ *pc = OMP_CLAUSE_CHAIN (c);
+ continue;
+ }
break;
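A small declare simd sketch (invented names) of the variable-stride linear support wired up above: the linear step may now be another parameter, provided that parameter is listed in a uniform clause; otherwise the new diagnostic removes the clause:

    #pragma omp declare simd uniform (s) linear (p: s) notinbranch
    int
    load_strided (int *p, int s)
    {
      return *p;
    }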
case OMP_CLAUSE_COPYPRIVATE:
need_copy_assignment = true;
}
pc = &OMP_CLAUSE_CHAIN (c);
continue;
+ case OMP_CLAUSE_SCHEDULE:
+ if (ordered_seen
+ && (OMP_CLAUSE_SCHEDULE_KIND (c)
+ & OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ {
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "%<nonmonotonic%> schedule modifier specified "
+ "together with %<ordered%> clause");
+ OMP_CLAUSE_SCHEDULE_KIND (c)
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (c)
+ & ~OMP_CLAUSE_SCHEDULE_NONMONOTONIC);
+ }
+ pc = &OMP_CLAUSE_CHAIN (c);
+ continue;
case OMP_CLAUSE_NOWAIT:
if (copyprivate_seen)
{
need_dtor))
remove = true;
+ if (!remove
+ && c_kind == OMP_CLAUSE_SHARED
+ && processing_template_decl)
+ {
+ t = omp_clause_decl_field (OMP_CLAUSE_DECL (c));
+ if (t)
+ OMP_CLAUSE_DECL (c) = t;
+ }
+
if (remove)
*pc = OMP_CLAUSE_CHAIN (c);
else
static bool
handle_omp_for_class_iterator (int i, location_t locus, enum tree_code code,
- tree declv, tree initv, tree condv, tree incrv,
- tree *body, tree *pre_body, tree &clauses,
- tree *lastp, int collapse, int ordered)
+ tree declv, tree orig_declv, tree initv,
+ tree condv, tree incrv, tree *body,
+ tree *pre_body, tree &clauses, tree *lastp,
+ int collapse, int ordered)
{
tree diff, iter_init, iter_incr = NULL, last;
tree incr_var = NULL, orig_pre_body, orig_body, c;
TREE_OPERAND (cond, 1), iter);
return true;
}
+ if (!c_omp_check_loop_iv_exprs (locus, orig_declv,
+ TREE_VEC_ELT (declv, i), NULL_TREE,
+ cond, cp_walk_subtrees))
+ return true;
switch (TREE_CODE (incr))
{
tree
finish_omp_for (location_t locus, enum tree_code code, tree declv,
tree orig_declv, tree initv, tree condv, tree incrv,
- tree body, tree pre_body, tree clauses)
+ tree body, tree pre_body, vec<tree> *orig_inits, tree clauses)
{
tree omp_for = NULL, orig_incr = NULL;
tree decl = NULL, init, cond, incr, orig_decl = NULL_TREE, block = NULL_TREE;
TREE_VEC_ELT (initv, i) = init;
}
+ if (orig_inits)
+ {
+ bool fail = false;
+ tree orig_init;
+ FOR_EACH_VEC_ELT (*orig_inits, i, orig_init)
+ if (orig_init
+ && !c_omp_check_loop_iv_exprs (locus, declv,
+ TREE_VEC_ELT (declv, i), orig_init,
+ NULL_TREE, cp_walk_subtrees))
+ fail = true;
+ if (fail)
+ return NULL;
+ }
+
if (dependent_omp_for_p (declv, initv, condv, incrv))
{
tree stmt;
}
if (code == CILK_FOR && i == 0)
orig_decl = decl;
- if (handle_omp_for_class_iterator (i, locus, code, declv, initv,
- condv, incrv, &body, &pre_body,
- clauses, &last, collapse,
- ordered))
+ if (handle_omp_for_class_iterator (i, locus, code, declv, orig_declv,
+ initv, condv, incrv, &body,
+ &pre_body, clauses, &last,
+ collapse, ordered))
return NULL;
continue;
}
omp_for = c_finish_omp_for (locus, code, declv, orig_declv, initv, condv,
incrv, body, pre_body);
+ /* Check for iterators appearing in lb, b or incr expressions. */
+ if (omp_for && !c_omp_check_loop_iv (omp_for, orig_declv, cp_walk_subtrees))
+ omp_for = NULL_TREE;
+
if (omp_for == NULL)
{
if (block)
return NULL;
}
+ add_stmt (omp_for);
+
for (i = 0; i < TREE_VEC_LENGTH (OMP_FOR_INCR (omp_for)); i++)
{
decl = TREE_OPERAND (TREE_VEC_ELT (OMP_FOR_INIT (omp_for), i), 0);
return;
}
stmt = c_finish_omp_atomic (input_location, code, opcode, lhs, rhs,
- v, lhs1, rhs1, swapped, seq_cst);
+ v, lhs1, rhs1, swapped, seq_cst,
+ processing_template_decl != 0);
if (stmt == error_mark_node)
return;
}
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+
+ * types.def (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR): Remove.
+ (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_INT_INT): New.
+
2015-11-03 Thomas Schwinge <thomas@codesourcery.com>
Chung-Lin Tang <cltang@codesourcery.com>
DEF_FUNCTION_TYPE_8 (BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR, BT_UINT,
BT_LONG, BT_LONG, BT_LONG, BT_LONG, BT_UINT)
-DEF_FUNCTION_TYPE_8 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR,
- BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
- BT_PTR, BT_PTR, BT_UINT, BT_PTR)
DEF_FUNCTION_TYPE_9 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_BOOL_UINT_PTR_INT,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,
BT_PTR_FN_VOID_PTR_PTR, BT_LONG, BT_LONG,
BT_BOOL, BT_UINT, BT_PTR, BT_INT)
+DEF_FUNCTION_TYPE_10 (BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_INT_INT,
+ BT_VOID, BT_INT, BT_PTR_FN_VOID_PTR, BT_SIZE, BT_PTR,
+ BT_PTR, BT_PTR, BT_UINT, BT_PTR, BT_INT, BT_INT)
+
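Expanding the BT_ macros, the new 10-argument type corresponds to a host entry point shaped roughly as below (parameter names are invented here, and libgomp's actual declaration may use more precise pointer types for the sizes/kinds/depend arguments):

    #include <stddef.h>

    void GOMP_target_ext (int device, void (*fn) (void *), size_t mapnum,
                          void *hostaddrs, void *sizes, void *kinds,
                          unsigned int flags, void *depend,
                          int num_teams, int thread_limit);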
DEF_FUNCTION_TYPE_11 (BT_FN_VOID_OMPFN_PTR_OMPCPYFN_LONG_LONG_UINT_LONG_INT_LONG_LONG_LONG,
BT_VOID, BT_PTR_FN_VOID_PTR, BT_PTR,
BT_PTR_FN_VOID_PTR_PTR, BT_LONG, BT_LONG,
GOVD_MAP_0LEN_ARRAY = 32768,
+ /* Flag for GOVD_MAP, if it is an always,to or always,tofrom mapping.  */
+ GOVD_MAP_ALWAYS_TO = 65536,
+
GOVD_DATA_SHARE_CLASS = (GOVD_SHARED | GOVD_PRIVATE | GOVD_FIRSTPRIVATE
| GOVD_LASTPRIVATE | GOVD_REDUCTION | GOVD_LINEAR
| GOVD_LOCAL)
{
splay_tree_node n2;
- if ((octx->region_type & (ORT_TARGET_DATA | ORT_TARGET)) != 0)
- continue;
n2 = splay_tree_lookup (octx->variables, (splay_tree_key) decl);
+ if ((octx->region_type & (ORT_TARGET_DATA | ORT_TARGET)) != 0
+ && (n2 == NULL || (n2->value & GOVD_DATA_SHARE_CLASS) == 0))
+ continue;
if (n2 && (n2->value & GOVD_DATA_SHARE_CLASS) != GOVD_SHARED)
{
flags |= GOVD_FIRSTPRIVATE;
else if (is_scalar)
nflags |= GOVD_FIRSTPRIVATE;
}
+ tree type = TREE_TYPE (decl);
if (nflags == flags
- && !lang_hooks.types.omp_mappable_type (TREE_TYPE (decl)))
+ && gimplify_omp_ctxp->target_firstprivatize_array_bases
+ && lang_hooks.decls.omp_privatize_by_reference (decl))
+ type = TREE_TYPE (type);
+ if (nflags == flags
+ && !lang_hooks.types.omp_mappable_type (type))
{
error ("%qD referenced in target region does not have "
"a mappable type", decl);
else if ((n->value & GOVD_REDUCTION) != 0)
error ("iteration variable %qE should not be reduction",
DECL_NAME (decl));
+ else if (simd == 0 && (n->value & GOVD_LINEAR) != 0)
+ error ("iteration variable %qE should not be linear",
+ DECL_NAME (decl));
else if (simd == 1 && (n->value & GOVD_LASTPRIVATE) != 0)
error ("iteration variable %qE should not be lastprivate",
DECL_NAME (decl));
return true;
}
- if ((ctx->region_type & (ORT_TARGET | ORT_TARGET_DATA)) != 0)
+ n = splay_tree_lookup (ctx->variables, (splay_tree_key) decl);
+
+ if ((ctx->region_type & (ORT_TARGET | ORT_TARGET_DATA)) != 0
+ && (n == NULL || (n->value & GOVD_DATA_SHARE_CLASS) == 0))
continue;
- n = splay_tree_lookup (ctx->variables, (splay_tree_key) decl);
if (n != NULL)
{
if ((n->value & GOVD_LOCAL) != 0
if (!ctx->combined_loop)
return false;
if (ctx->distribute)
- return true;
+ return lang_GNU_Fortran ();
break;
case ORT_COMBINED_PARALLEL:
break;
case ORT_COMBINED_TEAMS:
- return true;
+ return lang_GNU_Fortran ();
default:
return false;
}
struct gimplify_omp_ctx *ctx, *outer_ctx;
tree c;
hash_map<tree, tree> *struct_map_to_clause = NULL;
- tree *orig_list_p = list_p;
+ tree *prev_list_p = NULL;
ctx = new_omp_context (region_type);
outer_ctx = ctx->outer_context;
else if (error_operand_p (decl))
goto do_add;
else if (outer_ctx
- && outer_ctx->region_type == ORT_COMBINED_PARALLEL
+ && (outer_ctx->region_type == ORT_COMBINED_PARALLEL
+ || outer_ctx->region_type == ORT_COMBINED_TEAMS)
&& splay_tree_lookup (outer_ctx->variables,
(splay_tree_key) decl) == NULL)
- omp_add_variable (outer_ctx, decl, GOVD_SHARED | GOVD_SEEN);
+ {
+ omp_add_variable (outer_ctx, decl, GOVD_SHARED | GOVD_SEEN);
+ if (outer_ctx->outer_context)
+ omp_notice_variable (outer_ctx->outer_context, decl, true);
+ }
else if (outer_ctx
&& (outer_ctx->region_type & ORT_TASK) != 0
&& outer_ctx->combined_loop
&& splay_tree_lookup (outer_ctx->variables,
(splay_tree_key) decl) == NULL)
- omp_add_variable (outer_ctx, decl, GOVD_LASTPRIVATE | GOVD_SEEN);
+ {
+ omp_add_variable (outer_ctx, decl, GOVD_LASTPRIVATE | GOVD_SEEN);
+ if (outer_ctx->outer_context)
+ omp_notice_variable (outer_ctx->outer_context, decl, true);
+ }
else if (outer_ctx
&& outer_ctx->region_type == ORT_WORKSHARE
&& outer_ctx->combined_loop
== ORT_COMBINED_PARALLEL)
&& splay_tree_lookup (outer_ctx->outer_context->variables,
(splay_tree_key) decl) == NULL)
- omp_add_variable (outer_ctx->outer_context, decl,
- GOVD_SHARED | GOVD_SEEN);
+ {
+ struct gimplify_omp_ctx *octx = outer_ctx->outer_context;
+ omp_add_variable (octx, decl, GOVD_SHARED | GOVD_SEEN);
+ if (octx->outer_context)
+ omp_notice_variable (octx->outer_context, decl, true);
+ }
+ else if (outer_ctx->outer_context)
+ omp_notice_variable (outer_ctx->outer_context, decl, true);
}
goto do_add;
case OMP_CLAUSE_REDUCTION:
omp_notice_variable (ctx, v, true);
}
decl = TREE_OPERAND (decl, 0);
+ if (TREE_CODE (decl) == POINTER_PLUS_EXPR)
+ {
+ if (gimplify_expr (&TREE_OPERAND (decl, 1), pre_p,
+ NULL, is_gimple_val, fb_rvalue)
+ == GS_ERROR)
+ {
+ remove = true;
+ break;
+ }
+ v = TREE_OPERAND (decl, 1);
+ if (DECL_P (v))
+ {
+ omp_firstprivatize_variable (ctx, v);
+ omp_notice_variable (ctx, v, true);
+ }
+ decl = TREE_OPERAND (decl, 0);
+ }
if (TREE_CODE (decl) == ADDR_EXPR
|| TREE_CODE (decl) == INDIRECT_REF)
decl = TREE_OPERAND (decl, 0);
{
if (octx->outer_context
&& (octx->outer_context->region_type
- == ORT_COMBINED_PARALLEL
- || (octx->outer_context->region_type
- == ORT_COMBINED_TEAMS)))
+ == ORT_COMBINED_PARALLEL))
octx = octx->outer_context;
else if (omp_check_private (octx, decl, false))
break;
&& ctx->region_type == ORT_WORKSHARE
&& octx == outer_ctx)
flags = GOVD_SEEN | GOVD_SHARED;
+ else if (octx
+ && octx->region_type == ORT_COMBINED_TEAMS)
+ flags = GOVD_SEEN | GOVD_SHARED;
else if (octx
&& octx->region_type == ORT_COMBINED_TARGET)
- flags &= ~GOVD_LASTPRIVATE;
+ {
+ flags &= ~GOVD_LASTPRIVATE;
+ if (flags == GOVD_SEEN)
+ break;
+ }
else
break;
splay_tree_node on
case OMP_TARGET_DATA:
case OMP_TARGET_ENTER_DATA:
case OMP_TARGET_EXIT_DATA:
- if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || (OMP_CLAUSE_MAP_KIND (c)
+ == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
/* For target {,enter ,exit }data only the array slice is
mapped, but not the pointer to it. */
remove = true;
remove = true;
break;
}
- else if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ else if ((OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || (OMP_CLAUSE_MAP_KIND (c)
+ == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
&& TREE_CODE (OMP_CLAUSE_SIZE (c)) != INTEGER_CST)
{
OMP_CLAUSE_SIZE (c)
break;
}
+ if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_POINTER)
+ {
+ /* Error recovery. */
+ if (prev_list_p == NULL)
+ {
+ remove = true;
+ break;
+ }
+ if (OMP_CLAUSE_CHAIN (*prev_list_p) != c)
+ {
+ tree ch = OMP_CLAUSE_CHAIN (*prev_list_p);
+ if (ch == NULL_TREE || OMP_CLAUSE_CHAIN (ch) != c)
+ {
+ remove = true;
+ break;
+ }
+ }
+ }
+
tree offset;
HOST_WIDE_INT bitsize, bitpos;
machine_mode mode;
splay_tree_node n
= splay_tree_lookup (ctx->variables, (splay_tree_key)decl);
bool ptr = (OMP_CLAUSE_MAP_KIND (c)
- == GOMP_MAP_FIRSTPRIVATE_POINTER);
- if (n == NULL || (n->value & (ptr ? GOVD_PRIVATE
- : GOVD_MAP)) == 0)
+ == GOMP_MAP_ALWAYS_POINTER);
+ if (n == NULL || (n->value & GOVD_MAP) == 0)
{
+ tree l = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_MAP);
+ OMP_CLAUSE_SET_MAP_KIND (l, GOMP_MAP_STRUCT);
+ OMP_CLAUSE_DECL (l) = decl;
+ OMP_CLAUSE_SIZE (l) = size_int (1);
+ if (struct_map_to_clause == NULL)
+ struct_map_to_clause = new hash_map<tree, tree>;
+ struct_map_to_clause->put (decl, l);
if (ptr)
{
+ enum gomp_map_kind mkind
+ = code == OMP_TARGET_EXIT_DATA
+ ? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC;
tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
- OMP_CLAUSE_PRIVATE);
- OMP_CLAUSE_DECL (c2) = decl;
- OMP_CLAUSE_CHAIN (c2) = *orig_list_p;
- *orig_list_p = c2;
- if (struct_map_to_clause == NULL)
- struct_map_to_clause = new hash_map<tree, tree>;
- tree *osc;
- if (n == NULL || (n->value & GOVD_MAP) == 0)
- osc = NULL;
- else
- osc = struct_map_to_clause->get (decl);
- if (osc == NULL)
- struct_map_to_clause->put (decl,
- tree_cons (NULL_TREE,
- c,
- NULL_TREE));
- else
- *osc = tree_cons (*osc, c, NULL_TREE);
- flags = GOVD_PRIVATE | GOVD_EXPLICIT;
- goto do_add_decl;
+ OMP_CLAUSE_MAP);
+ OMP_CLAUSE_SET_MAP_KIND (c2, mkind);
+ OMP_CLAUSE_DECL (c2)
+ = unshare_expr (OMP_CLAUSE_DECL (c));
+ OMP_CLAUSE_CHAIN (c2) = *prev_list_p;
+ OMP_CLAUSE_SIZE (c2)
+ = TYPE_SIZE_UNIT (ptr_type_node);
+ OMP_CLAUSE_CHAIN (l) = c2;
+ if (OMP_CLAUSE_CHAIN (*prev_list_p) != c)
+ {
+ tree c4 = OMP_CLAUSE_CHAIN (*prev_list_p);
+ tree c3
+ = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_MAP);
+ OMP_CLAUSE_SET_MAP_KIND (c3, mkind);
+ OMP_CLAUSE_DECL (c3)
+ = unshare_expr (OMP_CLAUSE_DECL (c4));
+ OMP_CLAUSE_SIZE (c3)
+ = TYPE_SIZE_UNIT (ptr_type_node);
+ OMP_CLAUSE_CHAIN (c3) = *prev_list_p;
+ OMP_CLAUSE_CHAIN (c2) = c3;
+ }
+ *prev_list_p = l;
+ prev_list_p = NULL;
+ }
+ else
+ {
+ OMP_CLAUSE_CHAIN (l) = c;
+ *list_p = l;
+ list_p = &OMP_CLAUSE_CHAIN (l);
}
- *list_p = build_omp_clause (OMP_CLAUSE_LOCATION (c),
- OMP_CLAUSE_MAP);
- OMP_CLAUSE_SET_MAP_KIND (*list_p, GOMP_MAP_STRUCT);
- OMP_CLAUSE_DECL (*list_p) = decl;
- OMP_CLAUSE_SIZE (*list_p) = size_int (1);
- OMP_CLAUSE_CHAIN (*list_p) = c;
- if (struct_map_to_clause == NULL)
- struct_map_to_clause = new hash_map<tree, tree>;
- struct_map_to_clause->put (decl, *list_p);
- list_p = &OMP_CLAUSE_CHAIN (*list_p);
flags = GOVD_MAP | GOVD_EXPLICIT;
- if (OMP_CLAUSE_MAP_KIND (c) & GOMP_MAP_FLAG_ALWAYS)
+ if (GOMP_MAP_ALWAYS_P (OMP_CLAUSE_MAP_KIND (c)) || ptr)
flags |= GOVD_SEEN;
goto do_add_decl;
}
else
{
tree *osc = struct_map_to_clause->get (decl);
- tree *sc = NULL, *pt = NULL;
- if (!ptr && TREE_CODE (*osc) == TREE_LIST)
- osc = &TREE_PURPOSE (*osc);
- if (OMP_CLAUSE_MAP_KIND (c) & GOMP_MAP_FLAG_ALWAYS)
+ tree *sc = NULL, *scp = NULL;
+ if (GOMP_MAP_ALWAYS_P (OMP_CLAUSE_MAP_KIND (c)) || ptr)
n->value |= GOVD_SEEN;
offset_int o1, o2;
if (offset)
o1 = 0;
if (bitpos)
o1 = o1 + bitpos / BITS_PER_UNIT;
- if (ptr)
- pt = osc;
- else
- sc = &OMP_CLAUSE_CHAIN (*osc);
- for (; ptr ? (*pt && (sc = &TREE_VALUE (*pt)))
- : *sc != c;
- ptr ? (pt = &TREE_CHAIN (*pt))
- : (sc = &OMP_CLAUSE_CHAIN (*sc)))
- if (TREE_CODE (OMP_CLAUSE_DECL (*sc)) != COMPONENT_REF
- && (TREE_CODE (OMP_CLAUSE_DECL (*sc))
- != INDIRECT_REF)
- && TREE_CODE (OMP_CLAUSE_DECL (*sc)) != ARRAY_REF)
+ for (sc = &OMP_CLAUSE_CHAIN (*osc);
+ *sc != c; sc = &OMP_CLAUSE_CHAIN (*sc))
+ if (ptr && sc == prev_list_p)
+ break;
+ else if (TREE_CODE (OMP_CLAUSE_DECL (*sc))
+ != COMPONENT_REF
+ && (TREE_CODE (OMP_CLAUSE_DECL (*sc))
+ != INDIRECT_REF)
+ && (TREE_CODE (OMP_CLAUSE_DECL (*sc))
+ != ARRAY_REF))
break;
else
{
&volatilep, false);
if (base != decl)
break;
+ if (scp)
+ continue;
gcc_assert (offset == NULL_TREE
|| TREE_CODE (offset) == INTEGER_CST);
tree d1 = OMP_CLAUSE_DECL (*sc);
o2 = o2 + bitpos2 / BITS_PER_UNIT;
if (wi::ltu_p (o1, o2)
|| (wi::eq_p (o1, o2) && bitpos < bitpos2))
- break;
+ {
+ if (ptr)
+ scp = sc;
+ else
+ break;
+ }
}
+ if (remove)
+ break;
+ OMP_CLAUSE_SIZE (*osc)
+ = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc),
+ size_one_node);
if (ptr)
{
- if (!remove)
- *pt = tree_cons (TREE_PURPOSE (*osc), c, *pt);
- break;
+ tree c2 = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_MAP);
+ tree cl = NULL_TREE;
+ enum gomp_map_kind mkind
+ = code == OMP_TARGET_EXIT_DATA
+ ? GOMP_MAP_RELEASE : GOMP_MAP_ALLOC;
+ OMP_CLAUSE_SET_MAP_KIND (c2, mkind);
+ OMP_CLAUSE_DECL (c2)
+ = unshare_expr (OMP_CLAUSE_DECL (c));
+ OMP_CLAUSE_CHAIN (c2) = scp ? *scp : *prev_list_p;
+ OMP_CLAUSE_SIZE (c2)
+ = TYPE_SIZE_UNIT (ptr_type_node);
+ cl = scp ? *prev_list_p : c2;
+ if (OMP_CLAUSE_CHAIN (*prev_list_p) != c)
+ {
+ tree c4 = OMP_CLAUSE_CHAIN (*prev_list_p);
+ tree c3
+ = build_omp_clause (OMP_CLAUSE_LOCATION (c),
+ OMP_CLAUSE_MAP);
+ OMP_CLAUSE_SET_MAP_KIND (c3, mkind);
+ OMP_CLAUSE_DECL (c3)
+ = unshare_expr (OMP_CLAUSE_DECL (c4));
+ OMP_CLAUSE_SIZE (c3)
+ = TYPE_SIZE_UNIT (ptr_type_node);
+ OMP_CLAUSE_CHAIN (c3) = *prev_list_p;
+ if (!scp)
+ OMP_CLAUSE_CHAIN (c2) = c3;
+ else
+ cl = c3;
+ }
+ if (scp)
+ *scp = c2;
+ if (sc == prev_list_p)
+ {
+ *sc = cl;
+ prev_list_p = NULL;
+ }
+ else
+ {
+ *prev_list_p = OMP_CLAUSE_CHAIN (c);
+ list_p = prev_list_p;
+ prev_list_p = NULL;
+ OMP_CLAUSE_CHAIN (c) = *sc;
+ *sc = cl;
+ continue;
+ }
}
- if (!remove)
- OMP_CLAUSE_SIZE (*osc)
- = size_binop (PLUS_EXPR, OMP_CLAUSE_SIZE (*osc),
- size_one_node);
- if (!remove && *sc != c)
+ else if (*sc != c)
{
*list_p = OMP_CLAUSE_CHAIN (c);
OMP_CLAUSE_CHAIN (c) = *sc;
}
}
}
+ if (!remove
+ && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_ALWAYS_POINTER
+ && OMP_CLAUSE_CHAIN (c)
+ && OMP_CLAUSE_CODE (OMP_CLAUSE_CHAIN (c)) == OMP_CLAUSE_MAP
+ && (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
+ == GOMP_MAP_ALWAYS_POINTER))
+ prev_list_p = list_p;
break;
}
flags = GOVD_MAP | GOVD_EXPLICIT;
+ if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_TO
+ || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_TOFROM)
+ flags |= GOVD_MAP_ALWAYS_TO;
goto do_add;
case OMP_CLAUSE_DEPEND:
|| decl == OMP_CLAUSE_DECL (c)
|| (TREE_CODE (OMP_CLAUSE_DECL (c)) == MEM_REF
&& (TREE_CODE (TREE_OPERAND (OMP_CLAUSE_DECL (c), 0))
- == ADDR_EXPR)))
+ == ADDR_EXPR
+ || (TREE_CODE (TREE_OPERAND (OMP_CLAUSE_DECL (c), 0))
+ == POINTER_PLUS_EXPR
+ && (TREE_CODE (TREE_OPERAND (TREE_OPERAND
+ (OMP_CLAUSE_DECL (c), 0), 0))
+ == ADDR_EXPR)))))
&& omp_check_private (ctx, decl, false))
{
error ("%s variable %qE is private in outer context",
OMP_CLAUSE_CHAIN (nc) = OMP_CLAUSE_CHAIN (clause);
OMP_CLAUSE_CHAIN (clause) = nc;
}
+ else if (gimplify_omp_ctxp->target_firstprivatize_array_bases
+ && lang_hooks.decls.omp_privatize_by_reference (decl))
+ {
+ OMP_CLAUSE_DECL (clause) = build_simple_mem_ref (decl);
+ OMP_CLAUSE_SIZE (clause)
+ = unshare_expr (TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (decl))));
+ struct gimplify_omp_ctx *ctx = gimplify_omp_ctxp;
+ gimplify_omp_ctxp = ctx->outer_context;
+ gimplify_expr (&OMP_CLAUSE_SIZE (clause),
+ pre_p, NULL, is_gimple_val, fb_rvalue);
+ gimplify_omp_ctxp = ctx;
+ tree nc = build_omp_clause (OMP_CLAUSE_LOCATION (clause),
+ OMP_CLAUSE_MAP);
+ OMP_CLAUSE_DECL (nc) = decl;
+ OMP_CLAUSE_SIZE (nc) = size_zero_node;
+ OMP_CLAUSE_SET_MAP_KIND (nc, GOMP_MAP_FIRSTPRIVATE_REFERENCE);
+ OMP_CLAUSE_CHAIN (nc) = OMP_CLAUSE_CHAIN (clause);
+ OMP_CLAUSE_CHAIN (clause) = nc;
+ }
else
OMP_CLAUSE_SIZE (clause) = DECL_SIZE_UNIT (decl);
}
else
OMP_CLAUSE_CODE (c) = OMP_CLAUSE_PRIVATE;
}
+ else if (code == OMP_DISTRIBUTE
+ && OMP_CLAUSE_LASTPRIVATE_FIRSTPRIVATE (c))
+ {
+ remove = true;
+ error_at (OMP_CLAUSE_LOCATION (c),
+ "same variable used in %<firstprivate%> and "
+ "%<lastprivate%> clauses on %<distribute%> "
+ "construct");
+ }
break;
case OMP_CLAUSE_ALIGNED:
break;
case OMP_CLAUSE_MAP:
+ if (code == OMP_TARGET_EXIT_DATA
+ && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_ALWAYS_POINTER)
+ {
+ remove = true;
+ break;
+ }
decl = OMP_CLAUSE_DECL (c);
if (!DECL_P (decl))
{
n = splay_tree_lookup (ctx->variables, (splay_tree_key) decl);
if ((ctx->region_type & ORT_TARGET) != 0
&& !(n->value & GOVD_SEEN)
- && ((OMP_CLAUSE_MAP_KIND (c) & GOMP_MAP_FLAG_ALWAYS) == 0
- || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_STRUCT))
+ && GOMP_MAP_ALWAYS_P (OMP_CLAUSE_MAP_KIND (c)) == 0)
{
remove = true;
/* For struct element mapping, if struct is never referenced
else if (DECL_SIZE (decl)
&& TREE_CODE (DECL_SIZE (decl)) != INTEGER_CST
&& OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_POINTER
- && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER)
+ && OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_FIRSTPRIVATE_REFERENCE))
{
/* For GOMP_MAP_FORCE_DEVICEPTR, we'll never enter here, because
for these, TREE_CODE (DECL_SIZE (decl)) will always be
{
if (OMP_CLAUSE_SIZE (c) == NULL_TREE)
OMP_CLAUSE_SIZE (c) = DECL_SIZE_UNIT (decl);
- if ((n->value & GOVD_SEEN)
- && (n->value & (GOVD_PRIVATE | GOVD_FIRSTPRIVATE)))
- OMP_CLAUSE_MAP_PRIVATE (c) = 1;
+ gcc_assert ((n->value & GOVD_SEEN) == 0
+ || ((n->value & (GOVD_PRIVATE | GOVD_FIRSTPRIVATE))
+ == 0));
}
break;
OMP_CLAUSE_LINEAR_NO_COPYOUT (c) = 1;
flags |= GOVD_LINEAR_LASTPRIVATE_NO_OUTER;
}
+ else
+ {
+ struct gimplify_omp_ctx *octx = outer->outer_context;
+ if (octx
+ && octx->region_type == ORT_COMBINED_PARALLEL
+ && octx->outer_context
+ && (octx->outer_context->region_type
+ == ORT_WORKSHARE)
+ && octx->outer_context->combined_loop)
+ {
+ octx = octx->outer_context;
+ n = splay_tree_lookup (octx->variables,
+ (splay_tree_key)decl);
+ if (n != NULL && (n->value & GOVD_LOCAL) != 0)
+ {
+ OMP_CLAUSE_LINEAR_NO_COPYOUT (c) = 1;
+ flags |= GOVD_LINEAR_LASTPRIVATE_NO_OUTER;
+ }
+ }
+ }
}
}
{
omp_add_variable (outer, decl,
GOVD_LASTPRIVATE | GOVD_SEEN);
- if (outer->outer_context)
+ if (outer->region_type == ORT_COMBINED_PARALLEL
+ && outer->outer_context
+ && (outer->outer_context->region_type
+ == ORT_WORKSHARE)
+ && outer->outer_context->combined_loop)
+ {
+ outer = outer->outer_context;
+ n = splay_tree_lookup (outer->variables,
+ (splay_tree_key)decl);
+ if (omp_check_private (outer, decl, false))
+ outer = NULL;
+ else if (n == NULL
+ || ((n->value & GOVD_DATA_SHARE_CLASS)
+ == 0))
+ omp_add_variable (outer, decl,
+ GOVD_LASTPRIVATE
+ | GOVD_SEEN);
+ else
+ outer = NULL;
+ }
+ if (outer && outer->outer_context
+ && (outer->outer_context->region_type
+ == ORT_COMBINED_TEAMS))
+ {
+ outer = outer->outer_context;
+ n = splay_tree_lookup (outer->variables,
+ (splay_tree_key)decl);
+ if (n == NULL
+ || (n->value & GOVD_DATA_SHARE_CLASS) == 0)
+ omp_add_variable (outer, decl,
+ GOVD_SHARED | GOVD_SEEN);
+ else
+ outer = NULL;
+ }
+ if (outer && outer->outer_context)
omp_notice_variable (outer->outer_context, decl,
true);
}
{
omp_add_variable (outer, decl,
GOVD_LASTPRIVATE | GOVD_SEEN);
- if (outer->outer_context)
+ if (outer->region_type == ORT_COMBINED_PARALLEL
+ && outer->outer_context
+ && (outer->outer_context->region_type
+ == ORT_WORKSHARE)
+ && outer->outer_context->combined_loop)
+ {
+ outer = outer->outer_context;
+ n = splay_tree_lookup (outer->variables,
+ (splay_tree_key)decl);
+ if (omp_check_private (outer, decl, false))
+ outer = NULL;
+ else if (n == NULL
+ || ((n->value & GOVD_DATA_SHARE_CLASS)
+ == 0))
+ omp_add_variable (outer, decl,
+ GOVD_LASTPRIVATE
+ | GOVD_SEEN);
+ else
+ outer = NULL;
+ }
+ if (outer && outer->outer_context
+ && (outer->outer_context->region_type
+ == ORT_COMBINED_TEAMS))
+ {
+ outer = outer->outer_context;
+ n = splay_tree_lookup (outer->variables,
+ (splay_tree_key)decl);
+ if (n == NULL
+ || (n->value & GOVD_DATA_SHARE_CLASS) == 0)
+ omp_add_variable (outer, decl,
+ GOVD_SHARED | GOVD_SEEN);
+ else
+ outer = NULL;
+ }
+ if (outer && outer->outer_context)
omp_notice_variable (outer->outer_context, decl,
true);
}
return GS_ALL_DONE;
}
+/* Helper function of optimize_target_teams, find OMP_TEAMS inside
+ of OMP_TARGET's body. */
+
+static tree
+find_omp_teams (tree *tp, int *walk_subtrees, void *)
+{
+ *walk_subtrees = 0;
+ switch (TREE_CODE (*tp))
+ {
+ case OMP_TEAMS:
+ return *tp;
+ case BIND_EXPR:
+ case STATEMENT_LIST:
+ *walk_subtrees = 1;
+ break;
+ default:
+ break;
+ }
+ return NULL_TREE;
+}
+
+/* Helper function of optimize_target_teams, determine if the expression
+ can be computed safely before the target construct on the host. */
+
+static tree
+computable_teams_clause (tree *tp, int *walk_subtrees, void *)
+{
+ splay_tree_node n;
+
+ if (TYPE_P (*tp))
+ {
+ *walk_subtrees = 0;
+ return NULL_TREE;
+ }
+ switch (TREE_CODE (*tp))
+ {
+ case VAR_DECL:
+ case PARM_DECL:
+ case RESULT_DECL:
+ *walk_subtrees = 0;
+ if (error_operand_p (*tp)
+ || !INTEGRAL_TYPE_P (TREE_TYPE (*tp))
+ || DECL_HAS_VALUE_EXPR_P (*tp)
+ || DECL_THREAD_LOCAL_P (*tp)
+ || TREE_SIDE_EFFECTS (*tp)
+ || TREE_THIS_VOLATILE (*tp))
+ return *tp;
+ if (is_global_var (*tp)
+ && (lookup_attribute ("omp declare target", DECL_ATTRIBUTES (*tp))
+ || lookup_attribute ("omp declare target link",
+ DECL_ATTRIBUTES (*tp))))
+ return *tp;
+ n = splay_tree_lookup (gimplify_omp_ctxp->variables,
+ (splay_tree_key) *tp);
+ if (n == NULL)
+ {
+ if (gimplify_omp_ctxp->target_map_scalars_firstprivate)
+ return NULL_TREE;
+ return *tp;
+ }
+ else if (n->value & GOVD_LOCAL)
+ return *tp;
+ else if (n->value & GOVD_FIRSTPRIVATE)
+ return NULL_TREE;
+ else if ((n->value & (GOVD_MAP | GOVD_MAP_ALWAYS_TO))
+ == (GOVD_MAP | GOVD_MAP_ALWAYS_TO))
+ return NULL_TREE;
+ return *tp;
+ case INTEGER_CST:
+ if (!INTEGRAL_TYPE_P (TREE_TYPE (*tp)))
+ return *tp;
+ return NULL_TREE;
+ case TARGET_EXPR:
+ if (TARGET_EXPR_INITIAL (*tp)
+ || TREE_CODE (TARGET_EXPR_SLOT (*tp)) != VAR_DECL)
+ return *tp;
+ return computable_teams_clause (&TARGET_EXPR_SLOT (*tp),
+ walk_subtrees, NULL);
+ /* Allow some reasonable subset of integral arithmetics. */
+ case PLUS_EXPR:
+ case MINUS_EXPR:
+ case MULT_EXPR:
+ case TRUNC_DIV_EXPR:
+ case CEIL_DIV_EXPR:
+ case FLOOR_DIV_EXPR:
+ case ROUND_DIV_EXPR:
+ case TRUNC_MOD_EXPR:
+ case CEIL_MOD_EXPR:
+ case FLOOR_MOD_EXPR:
+ case ROUND_MOD_EXPR:
+ case RDIV_EXPR:
+ case EXACT_DIV_EXPR:
+ case MIN_EXPR:
+ case MAX_EXPR:
+ case LSHIFT_EXPR:
+ case RSHIFT_EXPR:
+ case BIT_IOR_EXPR:
+ case BIT_XOR_EXPR:
+ case BIT_AND_EXPR:
+ case NEGATE_EXPR:
+ case ABS_EXPR:
+ case BIT_NOT_EXPR:
+ case NON_LVALUE_EXPR:
+ CASE_CONVERT:
+ if (!INTEGRAL_TYPE_P (TREE_TYPE (*tp)))
+ return *tp;
+ return NULL_TREE;
+ /* And disallow anything else, except for comparisons. */
+ default:
+ if (COMPARISON_CLASS_P (*tp))
+ return NULL_TREE;
+ return *tp;
+ }
+}
+
+/* Try to determine if the num_teams and/or thread_limit expressions
+   can have their values determined already before entering the
+   target construct.
+   INTEGER_CSTs trivially can; so can integral decls that are
+   firstprivate (explicitly or implicitly) or explicitly mapped with
+   map(always, to:) or map(always, tofrom:) on the target region, and
+   expressions involving simple arithmetics on those.  Function calls
+   are not ok, nor is dereferencing something, etc.
+   Add NUM_TEAMS and THREAD_LIMIT clauses to the OMP_CLAUSES of
+   TARGET based on what we find:
+   0 stands for clause not specified at all, use implementation default
+   -1 stands for value that can't be determined easily before entering
+      the target construct.
+   If the teams construct is not present at all, use 1 for num_teams
+   and 0 for thread_limit (only one team is involved, and the thread
+   limit is implementation defined).  */
+
+static void
+optimize_target_teams (tree target, gimple_seq *pre_p)
+{
+ tree body = OMP_BODY (target);
+ tree teams = walk_tree (&body, find_omp_teams, NULL, NULL);
+ tree num_teams = integer_zero_node;
+ tree thread_limit = integer_zero_node;
+ location_t num_teams_loc = EXPR_LOCATION (target);
+ location_t thread_limit_loc = EXPR_LOCATION (target);
+ tree c, *p, expr;
+ struct gimplify_omp_ctx *target_ctx = gimplify_omp_ctxp;
+
+ if (teams == NULL_TREE)
+ num_teams = integer_one_node;
+ else
+ for (c = OMP_TEAMS_CLAUSES (teams); c; c = OMP_CLAUSE_CHAIN (c))
+ {
+ if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_NUM_TEAMS)
+ {
+ p = &num_teams;
+ num_teams_loc = OMP_CLAUSE_LOCATION (c);
+ }
+ else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_THREAD_LIMIT)
+ {
+ p = &thread_limit;
+ thread_limit_loc = OMP_CLAUSE_LOCATION (c);
+ }
+ else
+ continue;
+ expr = OMP_CLAUSE_OPERAND (c, 0);
+ if (TREE_CODE (expr) == INTEGER_CST)
+ {
+ *p = expr;
+ continue;
+ }
+ if (walk_tree (&expr, computable_teams_clause, NULL, NULL))
+ {
+ *p = integer_minus_one_node;
+ continue;
+ }
+ *p = expr;
+ gimplify_omp_ctxp = gimplify_omp_ctxp->outer_context;
+ if (gimplify_expr (p, pre_p, NULL, is_gimple_val, fb_rvalue)
+ == GS_ERROR)
+ {
+ gimplify_omp_ctxp = target_ctx;
+ *p = integer_minus_one_node;
+ continue;
+ }
+ gimplify_omp_ctxp = target_ctx;
+ if (!DECL_P (expr) && TREE_CODE (expr) != TARGET_EXPR)
+ OMP_CLAUSE_OPERAND (c, 0) = *p;
+ }
+ c = build_omp_clause (thread_limit_loc, OMP_CLAUSE_THREAD_LIMIT);
+ OMP_CLAUSE_THREAD_LIMIT_EXPR (c) = thread_limit;
+ OMP_CLAUSE_CHAIN (c) = OMP_TARGET_CLAUSES (target);
+ OMP_TARGET_CLAUSES (target) = c;
+ c = build_omp_clause (num_teams_loc, OMP_CLAUSE_NUM_TEAMS);
+ OMP_CLAUSE_NUM_TEAMS_EXPR (c) = num_teams;
+ OMP_CLAUSE_CHAIN (c) = OMP_TARGET_CLAUSES (target);
+ OMP_TARGET_CLAUSES (target) = c;
+}
+
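A user-level sketch (invented names) of what optimize_target_teams can and cannot precompute on the host: integer constants and implicitly firstprivate scalars qualify, whereas anything involving a call is flagged as not determinable and -1 is passed to the runtime instead:

    #include <omp.h>

    void
    use_teams (int n)
    {
      /* n is implicitly firstprivate here, so n + 1 can be evaluated
         on the host before launching the target region.  */
      #pragma omp target teams num_teams (n + 1) thread_limit (32)
      { }

      /* A call cannot be safely precomputed; -1 (unknown) is passed.  */
      #pragma omp target teams num_teams (omp_get_num_procs ())
      { }
    }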
/* Gimplify the gross structure of several OMP constructs. */
static void
}
gimplify_scan_omp_clauses (&OMP_CLAUSES (expr), pre_p, ort,
TREE_CODE (expr));
+ if (TREE_CODE (expr) == OMP_TARGET)
+ optimize_target_teams (expr, pre_p);
if ((ort & (ORT_TARGET | ORT_TARGET_DATA)) != 0)
{
push_gimplify_context ();
"GOMP_loop_runtime_start",
BT_FN_BOOL_LONG_LONG_LONG_LONGPTR_LONGPTR,
ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_DYNAMIC_START,
+ "GOMP_loop_nonmonotonic_dynamic_start",
+ BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
+ ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_START,
+ "GOMP_loop_nonmonotonic_guided_start",
+ BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
+ ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ORDERED_STATIC_START,
"GOMP_loop_ordered_static_start",
BT_FN_BOOL_LONG_LONG_LONG_LONG_LONGPTR_LONGPTR,
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_RUNTIME_NEXT, "GOMP_loop_runtime_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_DYNAMIC_NEXT,
+ "GOMP_loop_nonmonotonic_dynamic_next",
+ BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_NONMONOTONIC_GUIDED_NEXT,
+ "GOMP_loop_nonmonotonic_guided_next",
+ BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ORDERED_STATIC_NEXT,
"GOMP_loop_ordered_static_next",
BT_FN_BOOL_LONGPTR_LONGPTR, ATTR_NOTHROW_LEAF_LIST)
"GOMP_loop_ull_runtime_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_DYNAMIC_START,
+ "GOMP_loop_ull_nonmonotonic_dynamic_start",
+ BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
+ ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_START,
+ "GOMP_loop_ull_nonmonotonic_guided_start",
+ BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
+ ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_ORDERED_STATIC_START,
"GOMP_loop_ull_ordered_static_start",
BT_FN_BOOL_BOOL_ULL_ULL_ULL_ULL_ULLPTR_ULLPTR,
"GOMP_loop_ull_doacross_runtime_start",
BT_FN_BOOL_UINT_ULLPTR_ULLPTR_ULLPTR,
ATTR_NOTHROW_LEAF_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_STATIC_NEXT, "GOMP_loop_ull_static_next",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_STATIC_NEXT,
+ "GOMP_loop_ull_static_next",
+ BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_DYNAMIC_NEXT,
+ "GOMP_loop_ull_dynamic_next",
+ BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_GUIDED_NEXT,
+ "GOMP_loop_ull_guided_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_DYNAMIC_NEXT, "GOMP_loop_ull_dynamic_next",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_RUNTIME_NEXT,
+ "GOMP_loop_ull_runtime_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_GUIDED_NEXT, "GOMP_loop_ull_guided_next",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_DYNAMIC_NEXT,
+ "GOMP_loop_ull_nonmonotonic_dynamic_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_RUNTIME_NEXT, "GOMP_loop_ull_runtime_next",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_NONMONOTONIC_GUIDED_NEXT,
+ "GOMP_loop_ull_nonmonotonic_guided_next",
BT_FN_BOOL_ULONGLONGPTR_ULONGLONGPTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_ULL_ORDERED_STATIC_NEXT,
"GOMP_loop_ull_ordered_static_next",
"GOMP_parallel_loop_runtime",
BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_UINT,
ATTR_NOTHROW_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_DYNAMIC,
+ "GOMP_parallel_loop_nonmonotonic_dynamic",
+ BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
+ ATTR_NOTHROW_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL_LOOP_NONMONOTONIC_GUIDED,
+ "GOMP_parallel_loop_nonmonotonic_guided",
+ BT_FN_VOID_OMPFN_PTR_UINT_LONG_LONG_LONG_LONG_UINT,
+ ATTR_NOTHROW_LIST)
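A combined parallel loop sketch (invented identifiers) using the new schedule modifier; with nonmonotonic dynamic or guided, the expansion code later in this patch selects these entry points instead of the monotonic ones:

    void
    saxpy (int n, float a, float *x, float *y)
    {
      #pragma omp parallel for schedule (nonmonotonic: dynamic, 64)
      for (int i = 0; i < n; i++)
        y[i] += a * x[i];
    }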
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END, "GOMP_loop_end",
BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END_CANCEL, "GOMP_loop_end_cancel",
BT_FN_PTR, ATTR_NOTHROW_LEAF_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SINGLE_COPY_END, "GOMP_single_copy_end",
BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET, "GOMP_target_41",
- BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR,
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET, "GOMP_target_ext",
+ BT_FN_VOID_INT_OMPFN_SIZE_PTR_PTR_PTR_UINT_PTR_INT_INT,
ATTR_NOTHROW_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_DATA, "GOMP_target_data_41",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_DATA, "GOMP_target_data_ext",
BT_FN_VOID_INT_SIZE_PTR_PTR_PTR, ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_END_DATA, "GOMP_target_end_data",
BT_FN_VOID, ATTR_NOTHROW_LIST)
-DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_UPDATE, "GOMP_target_update_41",
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_UPDATE, "GOMP_target_update_ext",
BT_FN_VOID_INT_SIZE_PTR_PTR_PTR_UINT_PTR,
ATTR_NOTHROW_LIST)
DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TARGET_ENTER_EXIT_DATA,
/* Schedule kind, only used for GIMPLE_OMP_FOR type regions. */
enum omp_clause_schedule_kind sched_kind;
+ /* Schedule modifiers. */
+ unsigned char sched_modifiers;
+
/* True if this is a combined parallel+workshare region. */
bool is_combined_parallel;
int collapse;
int ordered;
bool have_nowait, have_ordered, simd_schedule;
+ unsigned char sched_modifiers;
enum omp_clause_schedule_kind sched_kind;
struct omp_for_data_loop *loops;
};
static struct omp_region *root_omp_region;
static bitmap task_shared_vars;
static vec<omp_context *> taskreg_contexts;
+static bool omp_any_child_fn_dumped;
static void scan_omp (gimple_seq *, omp_context *);
static tree scan_omp_1_op (tree *, int *, void *);
fd->collapse = 1;
fd->ordered = 0;
fd->sched_kind = OMP_CLAUSE_SCHEDULE_STATIC;
+ fd->sched_modifiers = 0;
fd->chunk_size = NULL_TREE;
fd->simd_schedule = false;
if (gimple_omp_for_kind (fd->for_stmt) == GF_OMP_FOR_KIND_CILKFOR)
break;
case OMP_CLAUSE_SCHEDULE:
gcc_assert (!distribute && !taskloop);
- fd->sched_kind = OMP_CLAUSE_SCHEDULE_KIND (t);
+ fd->sched_kind
+ = (enum omp_clause_schedule_kind)
+ (OMP_CLAUSE_SCHEDULE_KIND (t) & OMP_CLAUSE_SCHEDULE_MASK);
+ fd->sched_modifiers = (OMP_CLAUSE_SCHEDULE_KIND (t)
+ & ~OMP_CLAUSE_SCHEDULE_MASK);
fd->chunk_size = OMP_CLAUSE_SCHEDULE_CHUNK_EXPR (t);
fd->simd_schedule = OMP_CLAUSE_SCHEDULE_SIMD (t);
break;
tree clauses = gimple_omp_for_clauses (ws_stmt);
tree c = find_omp_clause (clauses, OMP_CLAUSE_SCHEDULE);
if (c == NULL
- || OMP_CLAUSE_SCHEDULE_KIND (c) == OMP_CLAUSE_SCHEDULE_STATIC
+ || ((OMP_CLAUSE_SCHEDULE_KIND (c) & OMP_CLAUSE_SCHEDULE_MASK)
+ == OMP_CLAUSE_SCHEDULE_STATIC)
|| find_omp_clause (clauses, OMP_CLAUSE_ORDERED))
{
region->is_combined_parallel = false;
&& TREE_CODE (decl) == MEM_REF)
{
tree t = TREE_OPERAND (decl, 0);
+ if (TREE_CODE (t) == POINTER_PLUS_EXPR)
+ t = TREE_OPERAND (t, 0);
if (TREE_CODE (t) == INDIRECT_REF
|| TREE_CODE (t) == ADDR_EXPR)
t = TREE_OPERAND (t, 0);
directly. */
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
&& DECL_P (decl)
- && (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && ((OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_FIRSTPRIVATE_REFERENCE))
|| TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE)
&& is_global_var (maybe_lookup_decl_in_outer_ctx (decl, ctx))
&& varpool_node::get_create (decl)->offloadable)
break;
}
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || (OMP_CLAUSE_MAP_KIND (c)
+ == GOMP_MAP_FIRSTPRIVATE_REFERENCE)))
{
if (TREE_CODE (decl) == COMPONENT_REF
|| (TREE_CODE (decl) == INDIRECT_REF
gcc_assert (TREE_CODE (decl2) == INDIRECT_REF);
decl2 = TREE_OPERAND (decl2, 0);
gcc_assert (DECL_P (decl2));
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_PRIVATE (c))
- install_var_field (decl2, true, 11, ctx);
- else
- install_var_field (decl2, true, 3, ctx);
+ install_var_field (decl2, true, 3, ctx);
install_var_local (decl2, ctx);
install_var_local (decl, ctx);
}
&& !OMP_CLAUSE_MAP_ZERO_BIAS_ARRAY_SECTION (c)
&& TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE)
install_var_field (decl, true, 7, ctx);
- else if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_PRIVATE (c))
- install_var_field (decl, true, 11, ctx);
else
install_var_field (decl, true, 3, ctx);
if (is_gimple_omp_offloaded (ctx->stmt))
break;
decl = OMP_CLAUSE_DECL (c);
if (DECL_P (decl)
- && (OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && ((OMP_CLAUSE_MAP_KIND (c) != GOMP_MAP_FIRSTPRIVATE_POINTER
+ && (OMP_CLAUSE_MAP_KIND (c)
+ != GOMP_MAP_FIRSTPRIVATE_REFERENCE))
|| TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE)
&& is_global_var (maybe_lookup_decl_in_outer_ctx (decl, ctx))
&& varpool_node::get_create (decl)->offloadable)
&& TREE_CODE (fd.loop.n2) != INTEGER_CST)
{
count += fd.collapse - 1;
- /* For taskloop, if there are lastprivate clauses on the inner
+ /* If there are lastprivate clauses on the inner
GIMPLE_OMP_FOR, add one more temporary for the total number
of iterations (product of count1 ... countN-1). */
- if (msk == GF_OMP_FOR_KIND_TASKLOOP
- && find_omp_clause (gimple_omp_for_clauses (for_stmt),
- OMP_CLAUSE_LASTPRIVATE))
+ if (find_omp_clause (gimple_omp_for_clauses (for_stmt),
+ OMP_CLAUSE_LASTPRIVATE))
+ count++;
+ else if (msk == GF_OMP_FOR_KIND_FOR
+ && find_omp_clause (gimple_omp_parallel_clauses (stmt),
+ OMP_CLAUSE_LASTPRIVATE))
count++;
}
for (i = 0; i < count; i++)
if (c_kind == OMP_CLAUSE_REDUCTION && TREE_CODE (var) == MEM_REF)
{
var = TREE_OPERAND (var, 0);
+ if (TREE_CODE (var) == POINTER_PLUS_EXPR)
+ var = TREE_OPERAND (var, 0);
if (TREE_CODE (var) == INDIRECT_REF
|| TREE_CODE (var) == ADDR_EXPR)
var = TREE_OPERAND (var, 0);
if (pass == 0)
continue;
+ tree bias = TREE_OPERAND (OMP_CLAUSE_DECL (c), 1);
tree orig_var = TREE_OPERAND (OMP_CLAUSE_DECL (c), 0);
+ if (TREE_CODE (orig_var) == POINTER_PLUS_EXPR)
+ {
+ tree b = TREE_OPERAND (orig_var, 1);
+ b = maybe_lookup_decl (b, ctx);
+ if (b == NULL)
+ {
+ b = TREE_OPERAND (orig_var, 1);
+ b = maybe_lookup_decl_in_outer_ctx (b, ctx);
+ }
+ if (integer_zerop (bias))
+ bias = b;
+ else
+ {
+ bias = fold_convert_loc (clause_loc,
+ TREE_TYPE (b), bias);
+ bias = fold_build2_loc (clause_loc, PLUS_EXPR,
+ TREE_TYPE (b), b, bias);
+ }
+ orig_var = TREE_OPERAND (orig_var, 0);
+ }
if (TREE_CODE (orig_var) == INDIRECT_REF
|| TREE_CODE (orig_var) == ADDR_EXPR)
orig_var = TREE_OPERAND (orig_var, 0);
tree y = create_tmp_var (ptype, name);
gimplify_assign (y, x, ilist);
x = y;
- if (TREE_CODE (TREE_OPERAND (d, 0)) == ADDR_EXPR)
+ tree yb = y;
+
+ if (!integer_zerop (bias))
+ {
+ bias = fold_convert_loc (clause_loc, sizetype, bias);
+ bias = fold_build1_loc (clause_loc, NEGATE_EXPR,
+ sizetype, bias);
+ x = fold_build2_loc (clause_loc, POINTER_PLUS_EXPR,
+ TREE_TYPE (x), x, bias);
+ yb = create_tmp_var (ptype, name);
+ gimplify_assign (yb, x, ilist);
+ x = yb;
+ }
+
+ d = TREE_OPERAND (d, 0);
+ if (TREE_CODE (d) == POINTER_PLUS_EXPR)
+ d = TREE_OPERAND (d, 0);
+ if (TREE_CODE (d) == ADDR_EXPR)
{
if (orig_var != var)
{
else
{
gcc_assert (orig_var == var);
- if (TREE_CODE (TREE_OPERAND (d, 0)) == INDIRECT_REF)
+ if (TREE_CODE (d) == INDIRECT_REF)
{
x = create_tmp_var (ptype, name);
TREE_ADDRESSABLE (x) = 1;
- gimplify_assign (x, y, ilist);
+ gimplify_assign (x, yb, ilist);
x = build_fold_addr_expr_loc (clause_loc, x);
}
x = fold_convert_loc (clause_loc, TREE_TYPE (new_var), x);
gimplify_assign (y2, y, ilist);
tree ref = build_outer_var_ref (var, ctx);
/* For ref build_outer_var_ref already performs this. */
- if (TREE_CODE (TREE_OPERAND (d, 0)) == INDIRECT_REF)
+ if (TREE_CODE (d) == INDIRECT_REF)
gcc_assert (is_reference (var));
- else if (TREE_CODE (TREE_OPERAND (d, 0)) == ADDR_EXPR)
+ else if (TREE_CODE (d) == ADDR_EXPR)
ref = build_fold_addr_expr (ref);
else if (is_reference (var))
ref = build_fold_addr_expr (ref);
if (TREE_CODE (var) == MEM_REF)
{
var = TREE_OPERAND (var, 0);
+ if (TREE_CODE (var) == POINTER_PLUS_EXPR)
+ var = TREE_OPERAND (var, 0);
if (TREE_CODE (var) == INDIRECT_REF
|| TREE_CODE (var) == ADDR_EXPR)
var = TREE_OPERAND (var, 0);
tree v = TYPE_MAX_VALUE (TYPE_DOMAIN (type));
tree i = create_tmp_var (TREE_TYPE (v), NULL);
tree ptype = build_pointer_type (TREE_TYPE (type));
+ tree bias = TREE_OPERAND (d, 1);
+ d = TREE_OPERAND (d, 0);
+ if (TREE_CODE (d) == POINTER_PLUS_EXPR)
+ {
+ tree b = TREE_OPERAND (d, 1);
+ b = maybe_lookup_decl (b, ctx);
+ if (b == NULL)
+ {
+ b = TREE_OPERAND (d, 1);
+ b = maybe_lookup_decl_in_outer_ctx (b, ctx);
+ }
+ if (integer_zerop (bias))
+ bias = b;
+ else
+ {
+ bias = fold_convert_loc (clause_loc, TREE_TYPE (b), bias);
+ bias = fold_build2_loc (clause_loc, PLUS_EXPR,
+ TREE_TYPE (b), b, bias);
+ }
+ d = TREE_OPERAND (d, 0);
+ }
/* For ref build_outer_var_ref already performs this, so
only new_var needs a dereference. */
- if (TREE_CODE (TREE_OPERAND (d, 0)) == INDIRECT_REF)
+ if (TREE_CODE (d) == INDIRECT_REF)
{
new_var = build_simple_mem_ref_loc (clause_loc, new_var);
gcc_assert (is_reference (var) && var == orig_var);
}
- else if (TREE_CODE (TREE_OPERAND (d, 0)) == ADDR_EXPR)
+ else if (TREE_CODE (d) == ADDR_EXPR)
{
if (orig_var == var)
{
v = maybe_lookup_decl_in_outer_ctx (v, ctx);
gimplify_expr (&v, stmt_seqp, NULL, is_gimple_val, fb_rvalue);
}
+ if (!integer_zerop (bias))
+ {
+ bias = fold_convert_loc (clause_loc, sizetype, bias);
+ new_var = fold_build2_loc (clause_loc, POINTER_PLUS_EXPR,
+ TREE_TYPE (new_var), new_var,
+ unshare_expr (bias));
+ ref = fold_build2_loc (clause_loc, POINTER_PLUS_EXPR,
+ TREE_TYPE (ref), ref, bias);
+ }
new_var = fold_convert_loc (clause_loc, ptype, new_var);
ref = fold_convert_loc (clause_loc, ptype, ref);
tree m = create_tmp_var (ptype, NULL);
&& TREE_CODE (val) == MEM_REF)
{
val = TREE_OPERAND (val, 0);
+ if (TREE_CODE (val) == POINTER_PLUS_EXPR)
+ val = TREE_OPERAND (val, 0);
if (TREE_CODE (val) == INDIRECT_REF
|| TREE_CODE (val) == ADDR_EXPR)
val = TREE_OPERAND (val, 0);
{
case GIMPLE_OMP_FOR:
gcc_assert (region->inner->sched_kind != OMP_CLAUSE_SCHEDULE_AUTO);
- start_ix2 = ((int)BUILT_IN_GOMP_PARALLEL_LOOP_STATIC
- + (region->inner->sched_kind
- == OMP_CLAUSE_SCHEDULE_RUNTIME
- ? 3 : region->inner->sched_kind));
- start_ix = (enum built_in_function)start_ix2;
+ switch (region->inner->sched_kind)
+ {
+ case OMP_CLAUSE_SCHEDULE_RUNTIME:
+ start_ix2 = 3;
+ break;
+ case OMP_CLAUSE_SCHEDULE_DYNAMIC:
+ case OMP_CLAUSE_SCHEDULE_GUIDED:
+ if (region->inner->sched_modifiers
+ & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
+ {
+ start_ix2 = 3 + region->inner->sched_kind;
+ break;
+ }
+ /* FALLTHRU */
+ default:
+ start_ix2 = region->inner->sched_kind;
+ break;
+ }
+ start_ix2 += (int) BUILT_IN_GOMP_PARALLEL_LOOP_STATIC;
+ start_ix = (enum built_in_function) start_ix2;
break;
case GIMPLE_OMP_SECTIONS:
start_ix = BUILT_IN_GOMP_PARALLEL_SECTIONS;
node->parallelized_function = 1;
cgraph_node::add_new_function (child_fn, true);
+ bool need_asm = DECL_ASSEMBLER_NAME_SET_P (current_function_decl)
+ && !DECL_ASSEMBLER_NAME_SET_P (child_fn);
+
/* Fix the callgraph edges for child_cfun. Those for cfun will be
fixed in a following pass. */
push_cfun (child_cfun);
+ if (need_asm)
+ assign_assembler_name_if_neeeded (child_fn);
+
if (optimize)
optimize_omp_library_calls (entry_stmt);
cgraph_edge::rebuild_edges ();
if (flag_checking && !loops_state_satisfies_p (LOOPS_NEED_FIXUP))
verify_loop_structure ();
pop_cfun ();
+
+ if (dump_file && !gimple_in_ssa_p (cfun))
+ {
+ omp_any_child_fn_dumped = true;
+ dump_function_header (dump_file, child_fn, dump_flags);
+ dump_function_to_file (child_fn, dump_file, dump_flags);
+ }
}
/* Emit a library call to launch the children threads. */
OMP_CLAUSE__LOOPTEMP_);
gcc_assert (innerc);
endvar = OMP_CLAUSE_DECL (innerc);
+ if (fd->collapse > 1 && TREE_CODE (fd->loop.n2) != INTEGER_CST
+ && gimple_omp_for_kind (fd->for_stmt) == GF_OMP_FOR_KIND_DISTRIBUTE)
+ {
+ int i;
+ for (i = 1; i < fd->collapse; i++)
+ {
+ innerc = find_omp_clause (OMP_CLAUSE_CHAIN (innerc),
+ OMP_CLAUSE__LOOPTEMP_);
+ gcc_assert (innerc);
+ }
+ innerc = find_omp_clause (OMP_CLAUSE_CHAIN (innerc),
+ OMP_CLAUSE__LOOPTEMP_);
+ if (innerc)
+ {
+ /* If needed (distribute parallel for with lastprivate),
+ propagate down the total number of iterations. */
+ tree t = fold_convert (TREE_TYPE (OMP_CLAUSE_DECL (innerc)),
+ fd->loop.n2);
+ t = force_gimple_operand_gsi (&gsi, t, false, NULL_TREE, false,
+ GSI_CONTINUE_LINKING);
+ assign_stmt = gimple_build_assign (OMP_CLAUSE_DECL (innerc), t);
+ gsi_insert_after (&gsi, assign_stmt, GSI_CONTINUE_LINKING);
+ }
+ }
}
t = fold_convert (itype, s0);
t = fold_build2 (MULT_EXPR, itype, t, step);
OMP_CLAUSE__LOOPTEMP_);
gcc_assert (innerc);
endvar = OMP_CLAUSE_DECL (innerc);
+ if (fd->collapse > 1 && TREE_CODE (fd->loop.n2) != INTEGER_CST
+ && gimple_omp_for_kind (fd->for_stmt) == GF_OMP_FOR_KIND_DISTRIBUTE)
+ {
+ int i;
+ for (i = 1; i < fd->collapse; i++)
+ {
+ innerc = find_omp_clause (OMP_CLAUSE_CHAIN (innerc),
+ OMP_CLAUSE__LOOPTEMP_);
+ gcc_assert (innerc);
+ }
+ innerc = find_omp_clause (OMP_CLAUSE_CHAIN (innerc),
+ OMP_CLAUSE__LOOPTEMP_);
+ if (innerc)
+ {
+ /* If needed (distribute parallel for with lastprivate),
+ propagate down the total number of iterations. */
+ tree t = fold_convert (TREE_TYPE (OMP_CLAUSE_DECL (innerc)),
+ fd->loop.n2);
+ t = force_gimple_operand_gsi (&gsi, t, false, NULL_TREE, false,
+ GSI_CONTINUE_LINKING);
+ assign_stmt = gimple_build_assign (OMP_CLAUSE_DECL (innerc), t);
+ gsi_insert_after (&gsi, assign_stmt, GSI_CONTINUE_LINKING);
+ }
+ }
}
t = fold_convert (itype, s0);
extract_omp_for_data (as_a <gomp_for *> (last_stmt (region->entry)),
&fd, loops);
region->sched_kind = fd.sched_kind;
+ region->sched_modifiers = fd.sched_modifiers;
gcc_assert (EDGE_COUNT (region->entry->succs) == 2);
BRANCH_EDGE (region->entry)->flags &= ~EDGE_ABNORMAL;
&& fd.sched_kind == OMP_CLAUSE_SCHEDULE_STATIC)
fd.chunk_size = integer_zero_node;
gcc_assert (fd.sched_kind != OMP_CLAUSE_SCHEDULE_AUTO);
- fn_index = (fd.sched_kind == OMP_CLAUSE_SCHEDULE_RUNTIME)
- ? 3 : fd.sched_kind;
+ switch (fd.sched_kind)
+ {
+ case OMP_CLAUSE_SCHEDULE_RUNTIME:
+ fn_index = 3;
+ break;
+ case OMP_CLAUSE_SCHEDULE_DYNAMIC:
+ case OMP_CLAUSE_SCHEDULE_GUIDED:
+ if ((fd.sched_modifiers & OMP_CLAUSE_SCHEDULE_NONMONOTONIC)
+ && !fd.ordered
+ && !fd.have_ordered)
+ {
+ fn_index = 3 + fd.sched_kind;
+ break;
+ }
+ /* FALLTHRU */
+ default:
+ fn_index = fd.sched_kind;
+ break;
+ }
if (!fd.ordered)
- fn_index += fd.have_ordered * 4;
+ fn_index += fd.have_ordered * 6;
if (fd.ordered)
start_ix = ((int)BUILT_IN_GOMP_LOOP_DOACROSS_STATIC_START) + fn_index;
else
vec_safe_push (offload_funcs, child_fn);
#endif
+ bool need_asm = DECL_ASSEMBLER_NAME_SET_P (current_function_decl)
+ && !DECL_ASSEMBLER_NAME_SET_P (child_fn);
+
/* Fix the callgraph edges for child_cfun. Those for cfun will be
fixed in a following pass. */
push_cfun (child_cfun);
+ if (need_asm)
+ assign_assembler_name_if_neeeded (child_fn);
cgraph_edge::rebuild_edges ();
#ifdef ENABLE_OFFLOADING
if (flag_checking && !loops_state_satisfies_p (LOOPS_NEED_FIXUP))
verify_loop_structure ();
pop_cfun ();
+
+ if (dump_file && !gimple_in_ssa_p (cfun))
+ {
+ omp_any_child_fn_dumped = true;
+ dump_function_header (dump_file, child_fn, dump_flags);
+ dump_function_to_file (child_fn, dump_file, dump_flags);
+ }
}
/* Emit a library call to launch the offloading region, or do data
else
depend = build_int_cst (ptr_type_node, 0);
args.quick_push (depend);
+ if (start_ix == BUILT_IN_GOMP_TARGET)
+ {
+ c = find_omp_clause (clauses, OMP_CLAUSE_NUM_TEAMS);
+ if (c)
+ {
+ t = fold_convert (integer_type_node,
+ OMP_CLAUSE_NUM_TEAMS_EXPR (c));
+ t = force_gimple_operand_gsi (&gsi, t, true, NULL,
+ true, GSI_SAME_STMT);
+ }
+ else
+ t = integer_minus_one_node;
+ args.quick_push (t);
+ c = find_omp_clause (clauses, OMP_CLAUSE_THREAD_LIMIT);
+ if (c)
+ {
+ t = fold_convert (integer_type_node,
+ OMP_CLAUSE_THREAD_LIMIT_EXPR (c));
+ t = force_gimple_operand_gsi (&gsi, t, true, NULL,
+ true, GSI_SAME_STMT);
+ }
+ else
+ t = integer_minus_one_node;
+ args.quick_push (t);
+ }
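(An illustrative sketch, not part of the patch: the two extra arguments pushed above for GOMP_target come from the num_teams and thread_limit clauses of a combined construct; when a clause is absent, -1 is passed to mark the value as unspecified. A minimal C example of source that reaches this path:)

  extern void bar (int);

  void
  launch (int a, int b)
  {
    #pragma omp target teams num_teams (a) thread_limit (b)
    bar (0);	/* a and b become the trailing launch-call arguments.  */

    #pragma omp target teams
    bar (1);	/* No clauses: both trailing arguments are -1.  */
  }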
break;
case BUILT_IN_GOACC_PARALLEL:
{
static void
expand_omp (struct omp_region *region)
{
+ omp_any_child_fn_dumped = false;
while (region)
{
location_t saved_location;
input_location = saved_location;
region = region->next;
}
+ if (omp_any_child_fn_dumped)
+ {
+ if (dump_file)
+ dump_function_header (dump_file, current_function_decl, dump_flags);
+ omp_any_child_fn_dumped = false;
+ }
}
&& TREE_CODE (n2) != INTEGER_CST
&& gimple_omp_for_combined_into_p (fd->for_stmt))
{
- struct omp_context *task_ctx = NULL;
+ struct omp_context *taskreg_ctx = NULL;
if (gimple_code (ctx->outer->stmt) == GIMPLE_OMP_FOR)
{
gomp_for *gfor = as_a <gomp_for *> (ctx->outer->stmt);
- if (gimple_omp_for_kind (gfor) == GF_OMP_FOR_KIND_FOR)
+ if (gimple_omp_for_kind (gfor) == GF_OMP_FOR_KIND_FOR
+ || gimple_omp_for_kind (gfor) == GF_OMP_FOR_KIND_DISTRIBUTE)
{
- struct omp_for_data outer_fd;
- extract_omp_for_data (gfor, &outer_fd, NULL);
- n2 = fold_convert (TREE_TYPE (n2), outer_fd.loop.n2);
+ if (gimple_omp_for_combined_into_p (gfor))
+ {
+ gcc_assert (ctx->outer->outer
+ && is_parallel_ctx (ctx->outer->outer));
+ taskreg_ctx = ctx->outer->outer;
+ }
+ else
+ {
+ struct omp_for_data outer_fd;
+ extract_omp_for_data (gfor, &outer_fd, NULL);
+ n2 = fold_convert (TREE_TYPE (n2), outer_fd.loop.n2);
+ }
}
else if (gimple_omp_for_kind (gfor) == GF_OMP_FOR_KIND_TASKLOOP)
- task_ctx = ctx->outer->outer;
+ taskreg_ctx = ctx->outer->outer;
}
- else if (is_task_ctx (ctx->outer))
- task_ctx = ctx->outer;
- if (task_ctx)
+ else if (is_taskreg_ctx (ctx->outer))
+ taskreg_ctx = ctx->outer;
+ if (taskreg_ctx)
{
int i;
tree innerc
- = find_omp_clause (gimple_omp_task_clauses (task_ctx->stmt),
+ = find_omp_clause (gimple_omp_taskreg_clauses (taskreg_ctx->stmt),
OMP_CLAUSE__LOOPTEMP_);
gcc_assert (innerc);
for (i = 0; i < fd->collapse; i++)
if (innerc)
n2 = fold_convert (TREE_TYPE (n2),
lookup_decl (OMP_CLAUSE_DECL (innerc),
- task_ctx));
+ taskreg_ctx));
}
}
cond = build2 (cond_code, boolean_type_node, fd->loop.v, n2);
case GOMP_MAP_ALWAYS_FROM:
case GOMP_MAP_ALWAYS_TOFROM:
case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
case GOMP_MAP_STRUCT:
+ case GOMP_MAP_ALWAYS_POINTER:
break;
case GOMP_MAP_FORCE_ALLOC:
case GOMP_MAP_FORCE_TO:
}
if (offloaded
- && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_REFERENCE))
{
if (TREE_CODE (TREE_TYPE (var)) == ARRAY_TYPE)
{
continue;
}
- if (offloaded && OMP_CLAUSE_MAP_PRIVATE (c))
- {
- map_cnt++;
- continue;
- }
-
if (!maybe_lookup_field (var, ctx))
continue;
nc = c;
ovar = OMP_CLAUSE_DECL (c);
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ && (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || (OMP_CLAUSE_MAP_KIND (c)
+ == GOMP_MAP_FIRSTPRIVATE_REFERENCE)))
break;
if (!DECL_P (ovar))
{
gcc_assert (DECL_P (ovar2));
ovar = ovar2;
}
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_PRIVATE (c))
- {
- if (!maybe_lookup_field ((splay_tree_key) &DECL_UID (ovar),
- ctx))
- continue;
- }
- else if (!maybe_lookup_field (ovar, ctx))
+ if (!maybe_lookup_field (ovar, ctx))
continue;
}
if (nc)
{
var = lookup_decl_in_outer_ctx (ovar, ctx);
- if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_PRIVATE (c))
- x = build_sender_ref ((splay_tree_key) &DECL_UID (ovar),
- ctx);
- else
- x = build_sender_ref (ovar, ctx);
+ x = build_sender_ref (ovar, ctx);
if (OMP_CLAUSE_CODE (c) == OMP_CLAUSE_MAP
&& OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_POINTER
}
break;
}
- /* Handle GOMP_MAP_FIRSTPRIVATE_POINTER in second pass,
+ /* Handle GOMP_MAP_FIRSTPRIVATE_{POINTER,REFERENCE} in second pass,
so that firstprivate vars holding OMP_CLAUSE_SIZE if needed
are already handled. */
for (c = clauses; c ; c = OMP_CLAUSE_CHAIN (c))
default:
break;
case OMP_CLAUSE_MAP:
- if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_REFERENCE)
{
location_t clause_loc = OMP_CLAUSE_LOCATION (c);
HOST_WIDE_INT offset = 0;
}
else
is_ref = is_reference (var);
+ if (OMP_CLAUSE_MAP_KIND (c) == GOMP_MAP_FIRSTPRIVATE_REFERENCE)
+ is_ref = false;
bool ref_to_array = false;
if (is_ref)
{
else if (OMP_CLAUSE_CHAIN (c)
&& OMP_CLAUSE_CODE (OMP_CLAUSE_CHAIN (c))
== OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
- == GOMP_MAP_FIRSTPRIVATE_POINTER)
+ && (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
+ == GOMP_MAP_FIRSTPRIVATE_POINTER
+ || (OMP_CLAUSE_MAP_KIND (OMP_CLAUSE_CHAIN (c))
+ == GOMP_MAP_FIRSTPRIVATE_REFERENCE)))
prev = c;
break;
case OMP_CLAUSE_PRIVATE:
int argno = TREE_INT_CST_LOW (decl);
if (OMP_CLAUSE_LINEAR_VARIABLE_STRIDE (t))
{
- clone_info->args[argno].arg_type
- = SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP;
+ enum cgraph_simd_clone_arg_type arg_type;
+ if (TREE_CODE (args[argno]) == REFERENCE_TYPE)
+ switch (OMP_CLAUSE_LINEAR_KIND (t))
+ {
+ case OMP_CLAUSE_LINEAR_REF:
+ arg_type
+ = SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP;
+ break;
+ case OMP_CLAUSE_LINEAR_UVAL:
+ arg_type
+ = SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP;
+ break;
+ case OMP_CLAUSE_LINEAR_VAL:
+ case OMP_CLAUSE_LINEAR_DEFAULT:
+ arg_type
+ = SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP;
+ break;
+ default:
+ gcc_unreachable ();
+ }
+ else
+ arg_type = SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP;
+ clone_info->args[argno].arg_type = arg_type;
clone_info->args[argno].linear_step = tree_to_shwi (step);
gcc_assert (clone_info->args[argno].linear_step >= 0
&& clone_info->args[argno].linear_step < n);
}
break;
case SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP:
- pp_character (&pp, 's');
+ pp_string (&pp, "ls");
+ pp_unsigned_wide_integer (&pp, arg.linear_step);
+ break;
+ case SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP:
+ pp_string (&pp, "Rs");
+ pp_unsigned_wide_integer (&pp, arg.linear_step);
+ break;
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP:
+ pp_string (&pp, "Ls");
+ pp_unsigned_wide_integer (&pp, arg.linear_step);
+ break;
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP:
+ pp_string (&pp, "Us");
pp_unsigned_wide_integer (&pp, arg.linear_step);
break;
default:
adj.op = IPA_PARM_OP_COPY;
break;
case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP:
if (node->definition)
node->simdclone->args[i].simd_array
= create_tmp_simd_array (IDENTIFIER_POINTER (DECL_NAME (parm)),
adj.op = IPA_PARM_OP_COPY;
break;
case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP:
case SIMD_CLONE_ARG_TYPE_VECTOR:
if (INTEGRAL_TYPE_P (parm_type) || POINTER_TYPE_P (parm_type))
veclen = node->simdclone->vecsize_int;
}
}
+/* Helper function of simd_clone_adjust: return the linear step addend
+ of the Ith argument. */
+
+static tree
+simd_clone_linear_addend (struct cgraph_node *node, unsigned int i,
+ tree addtype, basic_block entry_bb)
+{
+ tree ptype = NULL_TREE;
+ switch (node->simdclone->args[i].arg_type)
+ {
+ case SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP:
+ return build_int_cst (addtype, node->simdclone->args[i].linear_step);
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP:
+ ptype = TREE_TYPE (node->simdclone->args[i].orig_arg);
+ break;
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP:
+ ptype = TREE_TYPE (TREE_TYPE (node->simdclone->args[i].orig_arg));
+ break;
+ default:
+ gcc_unreachable ();
+ }
+
+ unsigned int idx = node->simdclone->args[i].linear_step;
+ tree arg = node->simdclone->args[idx].orig_arg;
+ gcc_assert (is_gimple_reg_type (TREE_TYPE (arg)));
+ gimple_stmt_iterator gsi = gsi_after_labels (entry_bb);
+ gimple *g;
+ tree ret;
+ if (is_gimple_reg (arg))
+ ret = get_or_create_ssa_default_def (cfun, arg);
+ else
+ {
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (arg)), arg);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ ret = gimple_assign_lhs (g);
+ }
+ if (TREE_CODE (TREE_TYPE (arg)) == REFERENCE_TYPE)
+ {
+ g = gimple_build_assign (make_ssa_name (TREE_TYPE (TREE_TYPE (arg))),
+ build_simple_mem_ref (ret));
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ ret = gimple_assign_lhs (g);
+ }
+ if (!useless_type_conversion_p (addtype, TREE_TYPE (ret)))
+ {
+ g = gimple_build_assign (make_ssa_name (addtype), NOP_EXPR, ret);
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ ret = gimple_assign_lhs (g);
+ }
+ if (POINTER_TYPE_P (ptype))
+ {
+ tree size = TYPE_SIZE_UNIT (TREE_TYPE (ptype));
+ if (size && TREE_CODE (size) == INTEGER_CST)
+ {
+ g = gimple_build_assign (make_ssa_name (addtype), MULT_EXPR,
+ ret, fold_convert (addtype, size));
+ gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+ ret = gimple_assign_lhs (g);
+ }
+ }
+ return ret;
+}
+
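(An illustrative sketch, not part of the patch: with a variable linear step the step is not a compile-time constant but another parameter, so linear_step holds that parameter's index and simd_clone_linear_addend loads the addend from it at runtime, multiplying by the pointee size for pointer arguments. A minimal declaration of this form, mirroring the new declare-simd tests:)

  #pragma omp declare simd linear (p : s) uniform (s) notinbranch simdlen (8)
  int
  f (int *p, int s)
  {
    return *p;
  }

(In the clone mangling such an argument is printed as "ls" followed by the index of the step parameter, with "Rs", "Ls" and "Us" for the ref, val and uval reference variants.)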
/* Adjust the argument types in NODE to their appropriate vector
counterparts. */
else if ((node->simdclone->args[i].arg_type
== SIMD_CLONE_ARG_TYPE_LINEAR_CONSTANT_STEP)
|| (node->simdclone->args[i].arg_type
- == SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP))
+ == SIMD_CLONE_ARG_TYPE_LINEAR_REF_CONSTANT_STEP)
+ || (node->simdclone->args[i].arg_type
+ == SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP)
+ || (node->simdclone->args[i].arg_type
+ == SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP))
{
tree orig_arg = node->simdclone->args[i].orig_arg;
gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (orig_arg))
? PLUS_EXPR : POINTER_PLUS_EXPR;
tree addtype = INTEGRAL_TYPE_P (TREE_TYPE (orig_arg))
? TREE_TYPE (orig_arg) : sizetype;
- tree addcst
- = build_int_cst (addtype, node->simdclone->args[i].linear_step);
- g = gimple_build_assign (iter2, code, iter1, addcst);
+ tree addcst = simd_clone_linear_addend (node, i, addtype,
+ entry_bb);
gsi = gsi_last_bb (incr_bb);
+ g = gimple_build_assign (iter2, code, iter1, addcst);
gsi_insert_before (&gsi, g, GSI_SAME_STMT);
imm_use_iterator iter;
}
}
else if (node->simdclone->args[i].arg_type
- == SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP)
+ == SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP
+ || (node->simdclone->args[i].arg_type
+ == SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP))
{
tree orig_arg = node->simdclone->args[i].orig_arg;
tree def = ssa_default_def (cfun, orig_arg);
? PLUS_EXPR : POINTER_PLUS_EXPR;
tree addtype = INTEGRAL_TYPE_P (TREE_TYPE (iter3))
? TREE_TYPE (iter3) : sizetype;
- tree addcst
- = build_int_cst (addtype, node->simdclone->args[i].linear_step);
+ tree addcst = simd_clone_linear_addend (node, i, addtype,
+ entry_bb);
g = gimple_build_assign (iter5, code, iter4, addcst);
gsi = gsi_last_bb (incr_bb);
gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+
+ * c-c++-common/gomp/clauses-2.c (foo): Adjust for diagnostics
+ of variables in both data sharing and mapping clauses and for
+ structure element based array sections being mapped rather than
+ privatized.
+ * c-c++-common/gomp/declare-target-2.c: Add various new tests. Adjust
+ expected diagnostics wording in one case.
+ * c-c++-common/gomp/distribute-1.c: New test.
+ * c-c++-common/gomp/element-1.c: New test.
+ * c-c++-common/gomp/pr61486-2.c: Add #pragma omp declare target
+ and #pragma omp end declare target pair around the function.
+ Change s from a parameter to a file scope variable.
+ * c-c++-common/gomp/pr67521.c: Add dg-error directives.
+ * c-c++-common/gomp/reduction-1.c (foo): Don't expect diagnostics
+ on non-zero low-bound in reduction array sections. Add further
+ tests.
+ * c-c++-common/gomp/schedule-modifiers-1.c: New test.
+ * c-c++-common/gomp/target-teams-1.c: New test.
+ * gcc.dg/gomp/declare-simd-1.c: Add scan-assembler-times directives
+ for expected mangling on x86_64/i?86.
+ * gcc.dg/gomp/declare-simd-3.c: New test.
+ * gcc.dg/gomp/declare-simd-4.c: New test.
+ * gcc.dg/gomp/for-20.c: New test.
+ * gcc.dg/gomp/for-21.c: New test.
+ * gcc.dg/gomp/for-22.c: New test.
+ * gcc.dg/gomp/for-23.c: New test.
+ * gcc.dg/gomp/for-24.c: New test.
+ * gcc.dg/gomp/linear-1.c: New test.
+ * gcc.dg/gomp/loop-1.c: New test.
+ * g++.dg/gomp/atomic-17.C: New test.
+ * g++.dg/gomp/clause-1.C (T::test): Don't expect error on
+ non-static data member in shared clause. Add single construct.
+ * g++.dg/gomp/declare-simd-1.C: Add dg-options. Add
+ scan-assembler-times directives for expected mangling on x86_64/i?86.
+ * g++.dg/gomp/declare-simd-3.C: Likewise.
+ * g++.dg/gomp/declare-simd-4.C: New test.
+ * g++.dg/gomp/declare-simd-5.C: New test.
+ * g++.dg/gomp/declare-target-1.C: New test.
+ * g++.dg/gomp/linear-2.C: New test.
+ * g++.dg/gomp/loop-1.C: New test.
+ * g++.dg/gomp/loop-2.C: New test.
+ * g++.dg/gomp/loop-3.C: New test.
+ * g++.dg/gomp/member-2.C (B::m2, B::m4): Don't expect error on
+ non-static data member in shared clause.
+ * g++.dg/gomp/member-3.C: New test.
+ * g++.dg/gomp/member-4.C: New test.
+ * g++.dg/gomp/pr38639.C (foo): Adjust dg-error.
+ (bar): Remove dg-message.
+ * g++.dg/gomp/target-teams-1.C: New test.
+
2015-11-05 Richard Biener <rguenther@suse.de>
* gcc.dg/tree-ssa/loadpre2.c: Avoid undefined behavior due to
void
foo (int *p, int q, struct S t, int i, int j, int k, int l)
{
- #pragma omp target map (q), firstprivate (q)
+ #pragma omp target map (q), firstprivate (q) /* { dg-error "appears both in data and map clauses" } */
bar (&q);
#pragma omp target map (p[0]) firstprivate (p) /* { dg-error "appears more than once in data clauses" } */
bar (p);
#pragma omp target firstprivate (p), map (p[0]) /* { dg-error "appears more than once in data clauses" } */
bar (p);
- #pragma omp target map (p[0]) map (p)
+ #pragma omp target map (p[0]) map (p) /* { dg-error "appears both in data and map clauses" } */
bar (p);
- #pragma omp target map (p) , map (p[0])
+ #pragma omp target map (p) , map (p[0]) /* { dg-error "appears both in data and map clauses" } */
bar (p);
#pragma omp target map (q) map (q) /* { dg-error "appears more than once in map clauses" } */
bar (&q);
bar (&t.r);
#pragma omp target map (t.r) map (t.r) /* { dg-error "appears more than once in map clauses" } */
bar (&t.r);
- #pragma omp target firstprivate (t), map (t.r)
+ #pragma omp target firstprivate (t), map (t.r) /* { dg-error "appears both in data and map clauses" } */
bar (&t.r);
- #pragma omp target map (t.r) firstprivate (t)
+ #pragma omp target map (t.r) firstprivate (t) /* { dg-error "appears both in data and map clauses" } */
bar (&t.r);
- #pragma omp target map (t.s[0]) map (t)
+ #pragma omp target map (t.s[0]) map (t) /* { dg-error "appears more than once in map clauses" } */
bar (t.s);
- #pragma omp target map (t) map(t.s[0])
+ #pragma omp target map (t) map(t.s[0]) /* { dg-error "appears more than once in map clauses" } */
bar (t.s);
- #pragma omp target firstprivate (t) map (t.s[0]) /* { dg-error "appears more than once in data clauses" } */
+ #pragma omp target firstprivate (t) map (t.s[0]) /* { dg-error "appears both in data and map clauses" } */
bar (t.s);
- #pragma omp target map (t.s[0]) firstprivate (t) /* { dg-error "appears more than once in data clauses" } */
+ #pragma omp target map (t.s[0]) firstprivate (t) /* { dg-error "appears both in data and map clauses" } */
bar (t.s);
#pragma omp target map (t.s[0]) map (t.s[2]) /* { dg-error "appears more than once in map clauses" } */
bar (t.s);
bar (t.s);
#pragma omp target map (t.r) ,map (t.s[0])
bar (t.s);
- #pragma omp target map (t.r) map (t) map (t.s[0]) firstprivate (t) /* { dg-error "appears more than once in map clauses" } */
- bar (t.s); /* { dg-error "appears more than once in data clauses" "" { target *-*-* } 49 } */
- #pragma omp target map (t) map (t.r) firstprivate (t) map (t.s[0]) /* { dg-error "appears more than once in map clauses" } */
- bar (t.s); /* { dg-error "appears more than once in data clauses" "" { target *-*-* } 51 } */
+ #pragma omp target map (t.r) map (t) map (t.s[0]) firstprivate (t) /* { dg-error "appears both in data and map clauses" } */
+ bar (t.s);
+ #pragma omp target map (t) map (t.r) firstprivate (t) map (t.s[0]) /* { dg-error "appears both in data and map clauses" } */
+ bar (t.s); /* { dg-error "appears more than once in map clauses" "" { target *-*-* } 51 } */
}
#pragma omp declare target to (a) /* { dg-error "with clauses in between" } */
#pragma omp end declare target
int b;
-#pragma omp declare target to (b) link (b) /* { dg-error "specified both in declare target" } */
+#pragma omp declare target to (b) link (b) /* { dg-error "appears more than once on the same .declare target. directive" } */
int c;
#pragma omp declare target (c)
#pragma omp declare target link (c) /* { dg-error "specified both in declare target" } */
#pragma omp declare target link (h) /* { dg-error "is threadprivate variable in" } */
int j[10];
#pragma omp declare target to (j[0:4]) /* { dg-error "expected" } */
+int k, l;
+#pragma omp declare target
+int m;
+#pragma omp end declare target
+#pragma omp declare target to (k)
+#pragma omp declare target (k)
+#pragma omp declare target to (k, m) link (l)
+#pragma omp declare target link (l)
+int n, o, s, t;
+#pragma omp declare target to (n) to (n) /* { dg-error "appears more than once on the same .declare target. directive" } */
+#pragma omp declare target link (o, o) /* { dg-error "appears more than once on the same .declare target. directive" } */
+#pragma omp declare target (s, t, s) /* { dg-error "appears more than once on the same .declare target. directive" } */
+int p, q, r;
+#pragma omp declare target (p) to (q) /* { dg-error "expected end of line before .to." } */
+#pragma omp declare target to (p) (q) link (r) /* { dg-error "expected .#pragma omp. clause before" } */
+#pragma omp declare target link (r) (p) /* { dg-error "expected .#pragma omp. clause before" } */
+#pragma omp declare target
+#pragma omp end declare target to (p) /* { dg-error "expected end of line before .to." } */
--- /dev/null
+int s1, s2, s3, s4, s5, s6, s7, s8;
+#pragma omp declare target (s1, s2, s3, s4, s5, s6, s7, s8)
+
+void
+f1 (void)
+{
+ int i;
+ #pragma omp distribute
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute private (i)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute
+ for (int j = 0; j < 64; j++)
+ ;
+ #pragma omp distribute lastprivate (s1)
+ for (s1 = 0; s1 < 64; s1 += 2)
+ ;
+ #pragma omp distribute lastprivate (s2)
+ for (i = 0; i < 64; i++)
+ s2 = 2 * i;
+ #pragma omp distribute simd
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute simd lastprivate (s3, s4) collapse(2)
+ for (s3 = 0; s3 < 64; s3++)
+ for (s4 = 0; s4 < 3; s4++)
+ ;
+ #pragma omp distribute parallel for
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute parallel for private (i)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute parallel for lastprivate (s5)
+ for (s5 = 0; s5 < 64; s5++)
+ ;
+ #pragma omp distribute firstprivate (s7) private (s8)
+ for (i = 0; i < 64; i++)
+ s8 = s7++;
+}
+
+void
+f2 (void)
+{
+ int i;
+ #pragma omp distribute lastprivate (i) /* { dg-error "lastprivate variable .i. is private in outer context" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp distribute firstprivate (s6) lastprivate (s6) /* { dg-error "same variable used in .firstprivate. and .lastprivate. clauses on .distribute. construct" } */
+ for (i = 0; i < 64; i++)
+ s6 += i;
+}
+
+#pragma omp declare target to(f1, f2)
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp" } */
+
+struct S { int a; };
+
+void
+foo (struct S *x)
+{
+ struct S b;
+ #pragma omp parallel private (b.a) /* { dg-error "expected .\\). before .\\.. token" } */
+ ;
+ #pragma omp parallel private (x->a) /* { dg-error "expected .\\). before .->. token" } */
+ ;
+}
int q, i, j;
+#pragma omp declare target
+int s;
+
void
-test2 (int n, int o, int p, int r, int s, int *pp)
+test2 (int n, int o, int p, int r, int *pp)
{
int a[o];
#pragma omp distribute collapse (2) dist_schedule (static, 4) firstprivate (q)
s = i * 10;
}
}
+#pragma omp end declare target
{
int i = 0;
#pragma omp parallel for simd
- for (i = (i & x); i < 10; i = i + 2)
+ for (i = (i & x); i < 10; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
;
i = 0;
#pragma omp parallel for simd
- for (i = 0; i < (i & x) + 10; i = i + 2)
+ for (i = 0; i < (i & x) + 10; i = i + 2) /* { dg-error "condition expression refers to iteration variable" } */
;
i = 0;
#pragma omp parallel for simd
- for (i = 0; i < 10; i = i + ((i & x) + 2))
+ for (i = 0; i < 10; i = i + ((i & x) + 2)) /* { dg-error "increment expression refers to iteration variable" } */
;
}
bar (a);
#pragma omp parallel reduction(+: a[0:4])
bar (a);
- #pragma omp parallel reduction(+: a[2:4]) /* { dg-error "array section has to be zero-based" } */
+ #pragma omp parallel reduction(+: a[2:4])
bar (a);
- #pragma omp parallel reduction(+: e[2:4]) /* { dg-error "array section has to be zero-based" } */
+ #pragma omp parallel reduction(+: e[2:4])
+ bar (a);
+ #pragma omp parallel reduction(+: a[x:4])
+ bar (a);
+ #pragma omp parallel reduction(+: e[x:4])
+ bar (a);
+ #pragma omp parallel reduction(+: a[x:x])
+ bar (a);
+ #pragma omp parallel reduction(+: e[x:x])
bar (a);
#pragma omp parallel reduction(+: a[0.5:2]) /* { dg-error "low bound \[^\n\r]* of array section does not have integral type" } */
bar (a);
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp" } */
+
+void
+foo (void)
+{
+ int i;
+ #pragma omp for simd schedule (simd, simd: static, 5)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for simd schedule (monotonic, simd: static)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for simd schedule (simd , monotonic : static, 6)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic, monotonic : static, 7)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic, nonmonotonic : dynamic)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for simd schedule (nonmonotonic , simd : dynamic, 3)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for simd schedule (nonmonotonic,simd:guided,4)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic: static, 2)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : static)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : dynamic)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : dynamic, 3)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : guided)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : guided, 7)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : runtime)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic : auto)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : dynamic)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : dynamic, 3)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : guided)
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : guided, 7)
+ for (i = 0; i < 64; i++)
+ ;
+}
+
+void
+bar (void)
+{
+ int i;
+ #pragma omp for schedule (nonmonotonic: static, 2) /* { dg-error ".nonmonotonic. modifier specified for .static. schedule kind" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : static) /* { dg-error ".nonmonotonic. modifier specified for .static. schedule kind" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : runtime) /* { dg-error ".nonmonotonic. modifier specified for .runtime. schedule kind" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic : auto) /* { dg-error ".nonmonotonic. modifier specified for .auto. schedule kind" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (nonmonotonic, dynamic) ordered /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
+ for (i = 0; i < 64; i++)
+ #pragma omp ordered
+ ;
+ #pragma omp for ordered schedule(nonmonotonic, dynamic, 5) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
+ for (i = 0; i < 64; i++)
+ #pragma omp ordered
+ ;
+ #pragma omp for schedule (nonmonotonic, guided) ordered(1) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
+ for (i = 0; i < 64; i++)
+ {
+ #pragma omp ordered depend(sink: i - 1)
+ #pragma omp ordered depend(source)
+ }
+ #pragma omp for ordered(1) schedule(nonmonotonic, guided, 2) /* { dg-error ".nonmonotonic. schedule modifier specified together with .ordered. clause" } */
+ for (i = 0; i < 64; i++)
+ {
+ #pragma omp ordered depend(source)
+ #pragma omp ordered depend(sink: i - 1)
+ }
+ #pragma omp for schedule (nonmonotonic , monotonic : dynamic) /* { dg-error "both .monotonic. and .nonmonotonic. modifiers specified" } */
+ for (i = 0; i < 64; i++)
+ ;
+ #pragma omp for schedule (monotonic,nonmonotonic:dynamic) /* { dg-error "both .monotonic. and .nonmonotonic. modifiers specified" } */
+ for (i = 0; i < 64; i++)
+ ;
+}
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp -fdump-tree-gimple" } */
+
+int v = 6;
+void bar (int);
+void bar2 (int, long *, long *);
+int baz (void);
+#pragma omp declare target to (bar, baz, v)
+
+void
+foo (int a, int b, long c, long d)
+{
+ /* The OpenMP 4.5 spec says that these expressions are evaluated before
+ the target region on combined target teams, so those cases are always
+ fine. */
+ #pragma omp target
+ bar (0);
+ #pragma omp target
+ #pragma omp teams
+ bar (1);
+ #pragma omp target teams
+ bar (2);
+ #pragma omp target
+ #pragma omp teams num_teams (4)
+ bar (3);
+ #pragma omp target teams num_teams (4)
+ bar (4);
+ #pragma omp target
+ #pragma omp teams thread_limit (7)
+ bar (5);
+ #pragma omp target teams thread_limit (7)
+ bar (6);
+ #pragma omp target
+ #pragma omp teams num_teams (4) thread_limit (8)
+ {
+ {
+ bar (7);
+ }
+ }
+ #pragma omp target teams num_teams (4) thread_limit (8)
+ bar (8);
+ #pragma omp target
+ #pragma omp teams num_teams (a) thread_limit (b)
+ bar (9);
+ #pragma omp target teams num_teams (a) thread_limit (b)
+ bar (10);
+ #pragma omp target
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (11);
+ #pragma omp target teams num_teams (c + 1) thread_limit (d - 1)
+ bar (12);
+ #pragma omp target map (always, to: c, d)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (13);
+ #pragma omp target data map (to: c, d)
+ {
+ #pragma omp target defaultmap (tofrom: scalar)
+ bar2 (14, &c, &d);
+ /* This is one of the cases which can't be generally optimized:
+ c and d are (or could be) already mapped and whether
+ their device and original values match is unclear. */
+ #pragma omp target map (to: c, d)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (15);
+ }
+ /* This can't be optimized, as there are function calls inside
+ the target region. */
+ #pragma omp target
+ #pragma omp teams num_teams (baz () + 1) thread_limit (baz () - 1)
+ bar (16);
+ #pragma omp target teams num_teams (baz () + 1) thread_limit (baz () - 1)
+ bar (17);
+ /* This one can't be optimized, as v might have a different value
+ between host and target. */
+ #pragma omp target
+ #pragma omp teams num_teams (v + 1) thread_limit (v - 1)
+ bar (18);
+}
+
+/* { dg-final { scan-tree-dump-times "num_teams\\(-1\\)" 3 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(-1\\)" 3 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "num_teams\\(0\\)" 4 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(0\\)" 6 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "num_teams\\(1\\)" 2 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(1\\)" 0 "gimple" } } */
--- /dev/null
+template <typename T>
+struct A { int foo (); int c; };
+
+template <typename T>
+int
+A<T>::foo ()
+{
+ int j;
+ #pragma omp atomic read
+ j = A::c;
+ return j;
+}
#pragma omp parallel private(n)
n = 1;
- #pragma omp parallel shared(n) // { dg-error "T::n" }
+ #pragma omp parallel shared(n)
+ #pragma omp single
n = 1;
#pragma omp parallel firstprivate(n)
// Test parsing of #pragma omp declare simd
// { dg-do compile }
+// { dg-options "-fopenmp -ffat-lto-objects" }
#pragma omp declare simd uniform (a) aligned (b : 8 * sizeof (int)) \
linear (c : 4) simdlen (8) notinbranch
return a + *b + c;
}
+// { dg-final { scan-assembler-times "_ZGVbM8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8uva32l4__Z2f2iPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
#pragma omp declare simd uniform (c) aligned (b : 4 * sizeof (int)) linear (a : 4) simdlen (4)
template <typename T>
T f3 (int a, int *b, T c);
}
}
+// { dg-final { scan-assembler-times "_ZGVbM2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN2va16__ZN2N12N23f10EPx:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
struct A
{
#pragma omp declare simd uniform (a) aligned (b : 8 * sizeof (int)) linear (c : 4) simdlen (8)
return a + *b + c;
}
+// { dg-final { scan-assembler-times "_ZGVbM8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8vuva32u__ZN1BIiE3f25ILi7EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
#pragma omp declare simd simdlen (4) aligned (b : 8 * sizeof (int)) linear (a, c : 2)
template <>
template <>
return a + *b + c;
}
+// { dg-final { scan-assembler-times "_ZGVbM4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN4vl2va32__ZN1BIiE3f26ILin1EEEiiPii:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
int
f27 (int x)
{
return x;
}
+// { dg-final { scan-assembler-times "_ZGVbM16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN16v__Z3f30i:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
template <int N>
struct C
{
// { dg-do compile }
+// { dg-options "-fopenmp -ffat-lto-objects" }
#pragma omp declare simd uniform(b) linear(c, d) linear(uval(e)) linear(ref(f))
int f1 (int a, int b, int c, int &d, int &e, int &f)
return a + b + c + d + e + f;
}
+// { dg-final { scan-assembler-times "_ZGVbM4vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8vulLUR4__Z2f1iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
#pragma omp declare simd uniform(b) linear(c, d) linear(uval(e)) linear(ref(f))
int f2 (int a, int b, int c, int &d, int &e, int &f)
{
return a + b + c + d + e + f;
}
+// { dg-final { scan-assembler-times "_ZGVbM4vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8vulLUR4__Z2f2iiiRiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
#pragma omp declare simd uniform(b) linear(c, d) linear(uval(e)) linear(ref(f))
int f3 (const int a, const int b, const int c, const int &d, const int &e, const int &f)
{
return a + b + c + d + e + f;
}
+// { dg-final { scan-assembler-times "_ZGVbM4vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8vulLUR4__Z2f3iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
#pragma omp declare simd uniform(b) linear(c, d) linear(uval(e)) linear(ref(f))
int f4 (const int a, const int b, const int c, const int &d, const int &e, const int &f)
{
asm volatile ("" : : "r" (&f));
return a + b + c + d + e + f;
}
+
+// { dg-final { scan-assembler-times "_ZGVbM4vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8vulLUR4__Z2f4iiiRKiS0_S0_:" 1 { target { i?86-*-* x86_64-*-* } } } }
--- /dev/null
+#pragma omp declare simd linear(p:1) linear(q:-1) linear(s:-3)
+int
+f1 (int *p, int *q, short *s)
+{
+ return *p + *q + *s;
+}
+
+// { dg-final { scan-assembler-times "_ZGVbM4l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVbN4l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcM4l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN4l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdM8l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8l4ln4ln6__Z2f1PiS_Ps:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
+#pragma omp declare simd linear(p:s) linear(q:t) uniform (s) linear(r:s) notinbranch simdlen(8) uniform(t)
+int
+f2 (int *p, short *q, int s, int r, int &t)
+{
+ return *p + *q + r;
+}
+
+// { dg-final { scan-assembler-times "_ZGVbN8ls2ls4uls2u__Z2f2PiPsiiRi:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN8ls2ls4uls2u__Z2f2PiPsiiRi:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8ls2ls4uls2u__Z2f2PiPsiiRi:" 1 { target { i?86-*-* x86_64-*-* } } } }
+
+#pragma omp declare simd linear(ref(p):s) linear(val(q):t) uniform (s) linear(uval(r):s) notinbranch simdlen(8) uniform(t)
+int
+f3 (int &p, short &q, int s, int &r, int &t)
+{
+ return p + q + r;
+}
+
+// { dg-final { scan-assembler-times "_ZGVbN8Rs2Ls4uUs2u__Z2f3RiRsiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVcN8Rs2Ls4uUs2u__Z2f3RiRsiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
+// { dg-final { scan-assembler-times "_ZGVdN8Rs2Ls4uUs2u__Z2f3RiRsiS_S_:" 1 { target { i?86-*-* x86_64-*-* } } } }
--- /dev/null
+// { dg-do compile }
+// { dg-options "-fopenmp-simd" }
+
+#pragma omp declare simd linear(a:1 + b) uniform(b) // { dg-error "use of parameter outside function body before .\\). token" }
+int f1 (int a, int b);
+#pragma omp declare simd linear(a:b + 1) uniform(b) // { dg-error "use of parameter outside function body before .\\+. token" }
+int f2 (int a, int b);
+#pragma omp declare simd linear(a:2 * b) uniform(b) // { dg-error "use of parameter outside function body before .\\). token" }
+int f3 (int a, int b);
+#pragma omp declare simd linear(a:b) // { dg-error ".linear. clause step is a parameter .b. not specified in .uniform. clause" }
+int f4 (int a, int b);
+#pragma omp declare simd linear(a:b) linear(b:1) // { dg-error ".linear. clause step is a parameter .b. not specified in .uniform. clause" }
+int f5 (int a, int b);
+#pragma omp declare simd linear(a:5 + 2 * 3)
+int f6 (int a, int b);
+const int c = 5;
+#pragma omp declare simd linear(a:c)
+int f7 (int a, int b);
+#pragma omp declare simd linear(a:2 * c + 1)
+int f8 (int a, int b);
+#pragma omp declare simd linear(a:0.5) // { dg-error "linear step expression must be integral" }
+int f9 (int a, int b);
--- /dev/null
+// { dg-do compile }
+// { dg-options "-fopenmp" }
+
+#pragma omp declare target
+void f1 (int);
+void f1 (double);
+template <typename T>
+void f2 (T);
+template<> void f2<int> (int);
+#pragma omp end declare target
+void f3 (int);
+void f4 (int);
+void f4 (short);
+template <typename T>
+void f5 (T);
+#pragma omp declare target (f3)
+#pragma omp declare target to (f4) // { dg-error "overloaded function name .f4. in clause .to." }
+#pragma omp declare target to (f5<int>) // { dg-error "template .f5<int>. in clause .to." }
+template <int N>
+void f6 (int)
+{
+ static int s;
+ #pragma omp declare target (s)
+}
+namespace N
+{
+ namespace M
+ {
+ void f7 (int);
+ }
+ void f8 (long);
+}
+void f9 (short);
+int v;
+#pragma omp declare target (N::M::f7)
+#pragma omp declare target to (::N::f8)
+#pragma omp declare target to (::f9) to (::v)
--- /dev/null
+// { dg-do compile }
+// { dg-options "-fopenmp" }
+
+#pragma omp declare target
+
+int i, j;
+
+void
+f1 ()
+{
+ #pragma omp for linear (i:1) // { dg-error "iteration variable .i. should not be linear" }
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+void
+f2 ()
+{
+ #pragma omp distribute parallel for linear (i:1) // { dg-error ".linear. is not valid for .#pragma omp distribute parallel for." }
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+void
+f3 ()
+{
+ #pragma omp parallel for linear (i:1) collapse(1)
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .i. should not be linear" }
+ ;
+}
+
+void
+f4 ()
+{
+ #pragma omp for linear (i:1) linear (j:2) collapse(2) // { dg-error "iteration variable .i. should not be linear" }
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .j. should not be linear" "" { target *-*-* } 35 }
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+void
+f5 ()
+{
+ #pragma omp target teams distribute parallel for linear (i:1) linear (j:2) collapse(2) // { dg-error ".linear. is not valid for .#pragma omp target teams distribute parallel for." }
+ for (i = 0; i < 32; i++)
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+void
+f6 ()
+{
+ #pragma omp parallel for linear (i:1) collapse(2) linear (j:2) // { dg-error "iteration variable .i. should not be linear" "" { target *-*-* } 54 }
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .j. should not be linear" }
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+template <int N>
+void
+f7 ()
+{
+ #pragma omp for linear (i:1) // { dg-error "iteration variable .i. should not be linear" }
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+template <int N>
+void
+f8 ()
+{
+ #pragma omp distribute parallel for linear (i:1) // { dg-error ".linear. is not valid for .#pragma omp distribute parallel for." }
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+template <int N>
+void
+f9 ()
+{
+ #pragma omp parallel for linear (i:1) collapse(1)
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .i. should not be linear" }
+ ;
+}
+
+template <int N>
+void
+f10 ()
+{
+ #pragma omp for linear (i:1) linear (j:2) collapse(2) // { dg-error "iteration variable .i. should not be linear" }
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .j. should not be linear" "" { target *-*-* } 90 }
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+template <int N>
+void
+f11 ()
+{
+ #pragma omp target teams distribute parallel for linear (i:1) linear (j:2) collapse(2) // { dg-error ".linear. is not valid for .#pragma omp target teams distribute parallel for." }
+ for (i = 0; i < 32; i++)
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+template <int N>
+void
+f12 ()
+{
+ #pragma omp parallel for linear (i:1) collapse(2) linear (j:2) // { dg-error "iteration variable .i. should not be linear" "" { target *-*-* } 111 }
+ for (i = 0; i < 32; i++) // { dg-error "iteration variable .j. should not be linear" }
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+#pragma omp end declare target
+
+void
+f13 ()
+{
+ f7 <0> ();
+ #pragma omp target teams
+ f8 <1> ();
+ f9 <2> ();
+ f10 <3> ();
+ f11 <4> ();
+ f12 <5> ();
+}
--- /dev/null
+int bar (int);
+int baz (int *);
+
+void
+f1 (int x)
+{
+ int i = 0, j = 0;
+ #pragma omp for
+ for (i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for
+ for (i = i; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = j; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = baz (&i); j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 16; j > (i & x); j--)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j + 4; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < bar (j); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i += j) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = j + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = bar (j) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f2 (int x)
+{
+ #pragma omp for
+ for (int i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for
+ for (int i = i; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (int j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (int j = baz (&i); j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 16; j > (i & x); j--)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f3 (void)
+{
+ int j = 0;
+ #pragma omp for collapse(2)
+ for (int i = j; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j + 4; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < bar (j); i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i += j)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = j + i)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = bar (j) + i)
+ for (int j = 0; j < 16; j++)
+ ;
+}
--- /dev/null
+int bar (int);
+int baz (int *);
+
+template <int N>
+void
+f1 (int x)
+{
+ int i = 0, j = 0;
+ #pragma omp for
+ for (i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for
+ for (i = i; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = j; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = baz (&i); j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 16; j > (i & x); j--)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j + 4; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < bar (j); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i += j) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = j + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = bar (j) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+template <int N>
+void
+f2 (int x)
+{
+ #pragma omp for
+ for (int i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for
+ for (int i = i; i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (int j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = baz (&i); j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 16; j > (i & x); j--)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (int j = 0; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+template <int N>
+void
+f3 ()
+{
+ int j = 0;
+ #pragma omp for collapse(2)
+ for (int i = j; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j + 4; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < bar (j); i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i += j)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = j + i)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = bar (j) + i)
+ for (int j = 0; j < 16; j++)
+ ;
+}
+
+void
+foo ()
+{
+ f1 <0> (0);
+ f2 <0> (0);
+ f3 <0> ();
+}
--- /dev/null
+typedef __PTRDIFF_TYPE__ ptrdiff_t;
+
+template <typename T>
+class I
+{
+public:
+ typedef ptrdiff_t difference_type;
+ I ();
+ ~I ();
+ I (T *);
+ I (const I &);
+ T &operator * ();
+ T *operator -> ();
+ T &operator [] (const difference_type &) const;
+ I &operator = (const I &);
+ I &operator ++ ();
+ I operator ++ (int);
+ I &operator -- ();
+ I operator -- (int);
+ I &operator += (const difference_type &);
+ I &operator -= (const difference_type &);
+ I operator + (const difference_type &) const;
+ I operator - (const difference_type &) const;
+ template <typename S> friend bool operator == (I<S> &, I<S> &);
+ template <typename S> friend bool operator == (const I<S> &, const I<S> &);
+ template <typename S> friend bool operator < (I<S> &, I<S> &);
+ template <typename S> friend bool operator < (const I<S> &, const I<S> &);
+ template <typename S> friend bool operator <= (I<S> &, I<S> &);
+ template <typename S> friend bool operator <= (const I<S> &, const I<S> &);
+ template <typename S> friend bool operator > (I<S> &, I<S> &);
+ template <typename S> friend bool operator > (const I<S> &, const I<S> &);
+ template <typename S> friend bool operator >= (I<S> &, I<S> &);
+ template <typename S> friend bool operator >= (const I<S> &, const I<S> &);
+ template <typename S> friend typename I<S>::difference_type operator - (I<S> &, I<S> &);
+ template <typename S> friend typename I<S>::difference_type operator - (const I<S> &, const I<S> &);
+ template <typename S> friend I<S> operator + (typename I<S>::difference_type , const I<S> &);
+private:
+ T *p;
+};
+
+template <typename T> bool operator == (I<T> &, I<T> &);
+template <typename T> bool operator == (const I<T> &, const I<T> &);
+template <typename T> bool operator != (I<T> &, I<T> &);
+template <typename T> bool operator != (const I<T> &, const I<T> &);
+template <typename T> bool operator < (I<T> &, I<T> &);
+template <typename T> bool operator < (const I<T> &, const I<T> &);
+template <typename T> bool operator <= (I<T> &, I<T> &);
+template <typename T> bool operator <= (const I<T> &, const I<T> &);
+template <typename T> bool operator > (I<T> &, I<T> &);
+template <typename T> bool operator > (const I<T> &, const I<T> &);
+template <typename T> bool operator >= (I<T> &, I<T> &);
+template <typename T> bool operator >= (const I<T> &, const I<T> &);
+template <typename T> typename I<T>::difference_type operator - (I<T> &, I<T> &);
+template <typename T> typename I<T>::difference_type operator - (const I<T> &, const I<T> &);
+template <typename T> I<T> operator + (typename I<T>::difference_type, const I<T> &);
+
+ptrdiff_t foo (I<int> &);
+I<int> &bar (I<int> &);
+I<int> &baz (I<int> *);
+
+void
+f1 (I<int> &x, I<int> &y, I<int> &u, I<int> &v)
+{
+ I<int> i, j;
+ #pragma omp for
+ for (i = x; i < y; i++)
+ ;
+ #pragma omp for
+ for (i = x; y > i; i++)
+ ;
+ #pragma omp for
+ for (i = x; i < y; i = i + 2)
+ ;
+ #pragma omp for
+ for (i = x; i < y; i = 2 + i)
+ ;
+ #pragma omp for
+ for (i = i; i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = i + 3; i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = bar (i); i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = baz (&i); i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i <= i + 5; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; baz (&i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i < y; i += foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i < y; i = i + foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = x; i < y; i = foo (i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i = i + 2)
+ for (j = u; j < y; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = j; i < y; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = x; j < y; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (j = i; j < v; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i = i + 2)
+ for (j = i + 3; j < v; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++)
+ for (j = baz (&i); j < v; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = v; j > i; j--)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = x; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = u; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < j + 4; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < j; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < bar (j); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = u; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i += foo (j)) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++)
+ for (j = u; j < v; j += foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i = foo (j) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++)
+ for (j = u; j < y; j = j + (i - v)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i = foo (j) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = x; i < y; i++)
+ for (j = u; j < v; j = j + foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f2 (I<int> &x, I<int> &y, I<int> &u, I<int> &v)
+{
+ #pragma omp for
+ for (I<int> i = x; i < y; i++)
+ ;
+ #pragma omp for
+ for (I<int> i = x; y > i; i++)
+ ;
+ #pragma omp for
+ for (I<int> i = x; i < y; i = i + 2)
+ ;
+ #pragma omp for
+ for (I<int> i = x; i < y; i = 2 + i)
+ ;
+ #pragma omp for
+ for (I<int> i = i; i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = i + 3; i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = bar (i); i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = baz (&i); i < y; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i <= i + 5; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; baz (&i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i < y; i += foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i < y; i = i + foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (I<int> i = x; i < y; i = foo (i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i = i + 2)
+ for (I<int> j = u; j < y; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i = i + 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (I<int> j = i; j < v; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i = i + 2)
+ for (I<int> j = i + 3; j < v; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++)
+ for (I<int> j = baz (&i); j < v; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (I<int> j = v; j > i; j--)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (I<int> j = x; j < i; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (I<int> j = u; j < i + 4; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (I<int> j = u; j < baz (&i); j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++)
+ for (I<int> j = u; j < v; j += foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++)
+ for (I<int> j = u; j < y; j = j + (i - v)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i++)
+ for (I<int> j = u; j < v; j = j + foo (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f3 (I<int> &x, I<int> &y, I<int> &u, I<int> &v)
+{
+ I<int> j;
+ #pragma omp for collapse(2)
+ for (I<int> i = j; i < y; i = i + 2)
+ for (I<int> j = x; j < y; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < j + 4; i++)
+ for (I<int> j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < j; i++)
+ for (I<int> j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < bar (j); i++)
+ for (I<int> j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i += foo (j))
+ for (I<int> j = u; j < v; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (I<int> i = x; i < y; i = foo (j) + i)
+ for (I<int> j = u; j < v; j++)
+ ;
+}
#pragma omp parallel for reduction (+:g) // { dg-error "has const type for .reduction." }
for (int i = 0; i < 10; i++)
;
- #pragma omp parallel shared (a) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (a)
;
- #pragma omp parallel shared (b) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (b)
;
- #pragma omp parallel shared (c) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (c)
;
- #pragma omp parallel shared (e) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (e)
;
- #pragma omp parallel shared (f) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (f)
;
- #pragma omp parallel shared (g) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (g)
;
- #pragma omp parallel shared (h) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (h) // { dg-error "is predetermined .shared. for .shared." }
;
return 0;
}
#pragma omp parallel for reduction (+:g) // { dg-error "has const type for .reduction." }
for (int i = 0; i < 10; i++)
;
- #pragma omp parallel shared (a) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (a) // { dg-error "is predetermined .shared. for .shared." }
;
- #pragma omp parallel shared (b) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (b)
;
- #pragma omp parallel shared (c) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (c)
;
- #pragma omp parallel shared (e) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (e)
;
- #pragma omp parallel shared (f) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (f)
;
- #pragma omp parallel shared (g) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (g)
;
- #pragma omp parallel shared (h) // { dg-error "is not a variable in clause" }
+ #pragma omp parallel shared (h) // { dg-error "is predetermined .shared. for .shared." }
;
return 0;
}
--- /dev/null
+// { dg-do compile }
+// { dg-options "-fopenmp" }
+
+struct S { int a; void foo (S *); static S &bar (); };
+
+void
+S::foo (S *x)
+{
+ S &b = bar ();
+ S c;
+ #pragma omp parallel private (b.a) // { dg-error "expected .\\). before .\\.. token" }
+ ;
+ #pragma omp parallel private (c.a) // { dg-error "expected .\\). before .\\.. token" }
+ ;
+ #pragma omp parallel private (x->a) // { dg-error "expected .\\). before .->. token" }
+ ;
+}
--- /dev/null
+// { dg-do compile }
+
+class C { int a; char b; void foo (); };
+
+void
+C::foo ()
+{
+ #pragma omp parallel shared (a, a) // { dg-error "appears more than once in data clauses" }
+ ;
+ #pragma omp parallel shared (a) private (b) shared(C::a) // { dg-error "appears more than once in data clauses" }
+ ;
+ #pragma omp task private (a) private (b)
+ ;
+ #pragma omp task firstprivate (a) shared (C::a) // { dg-error "appears more than once in data clauses" }
+ ;
+ #pragma omp parallel for lastprivate (b) firstprivate (a) lastprivate (b) // { dg-error "appears more than once in data clauses" }
+ for (int i = 0; i < 64; i++)
+ ;
+ #pragma omp parallel for lastprivate (b) firstprivate (b)
+ for (int i = 0; i < 64; i++)
+ ;
+}
foo ()
{
#pragma omp parallel for
- for (auto i = i = 0; i<4; ++i) // { dg-error "incomplete|unable|invalid|auto" }
+ for (auto i = i = 0; i<4; ++i) // { dg-error "initializer expression refers to iteration variable" }
;
}
void
bar ()
{
- foo<0> (); // { dg-message "required from here" }
+ foo<0> ();
}
--- /dev/null
+// { dg-do compile }
+// { dg-options "-fopenmp -fdump-tree-gimple" }
+
+int v = 6;
+void bar (int);
+void bar2 (int, long *, long *);
+int baz (void);
+#pragma omp declare target to (bar, baz, v)
+
+template <int N>
+void
+foo (int a, int b, long c, long d)
+{
+ /* The OpenMP 4.5 spec says that these expressions are evaluated before
+ the target region on combined target teams constructs, so those cases
+ are always fine. */
+ #pragma omp target
+ bar (0);
+ #pragma omp target
+ #pragma omp teams
+ bar (1);
+ #pragma omp target teams
+ bar (2);
+ #pragma omp target
+ #pragma omp teams num_teams (4)
+ bar (3);
+ #pragma omp target teams num_teams (4)
+ bar (4);
+ #pragma omp target
+ #pragma omp teams thread_limit (7)
+ bar (5);
+ #pragma omp target teams thread_limit (7)
+ bar (6);
+ #pragma omp target
+ #pragma omp teams num_teams (4) thread_limit (8)
+ {
+ {
+ bar (7);
+ }
+ }
+ #pragma omp target teams num_teams (4) thread_limit (8)
+ bar (8);
+ #pragma omp target
+ #pragma omp teams num_teams (a) thread_limit (b)
+ bar (9);
+ #pragma omp target teams num_teams (a) thread_limit (b)
+ bar (10);
+ #pragma omp target
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (11);
+ #pragma omp target teams num_teams (c + 1) thread_limit (d - 1)
+ bar (12);
+ #pragma omp target map (always, to: c, d)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (13);
+ #pragma omp target data map (to: c, d)
+ {
+ #pragma omp target defaultmap (tofrom: scalar)
+ bar2 (14, &c, &d);
+ /* This is one of the cases which can't be generally optimized;
+ c and d are (or could be) already mapped, and whether
+ their device and original values match is unclear. */
+ #pragma omp target map (to: c, d)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ bar (15);
+ }
+ /* This can't be optimized, as there are function calls involved
+ inside the target region. */
+ #pragma omp target
+ #pragma omp teams num_teams (baz () + 1) thread_limit (baz () - 1)
+ bar (16);
+ #pragma omp target teams num_teams (baz () + 1) thread_limit (baz () - 1)
+ bar (17);
+ /* This one can't be optimized, as v might have different values on the
+ host and on the target. */
+ #pragma omp target
+ #pragma omp teams num_teams (v + 1) thread_limit (v - 1)
+ bar (18);
+}
+
+void
+foo (int a, int b, long c, long d)
+{
+ foo<0> (a, b, c, d);
+}
+
+/* { dg-final { scan-tree-dump-times "num_teams\\(-1\\)" 3 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(-1\\)" 3 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "num_teams\\(0\\)" 4 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(0\\)" 6 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "num_teams\\(1\\)" 2 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "thread_limit\\(1\\)" 0 "gimple" } } */
return a + *b + c;
}
+/* { dg-final { scan-assembler-times "_ZGVbM8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8uva32l4_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
#pragma omp declare simd uniform (a) aligned (b : 8 * sizeof (long long)) linear (c : 4) simdlen (8)
__extension__
long long f3 (long long a, long long *b, long long c);
return x;
}
+/* { dg-final { scan-assembler-times "_ZGVbM16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN16v_f7:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
int
f9 (int x)
{
return a + *b + c;
}
+/* { dg-final { scan-assembler-times "_ZGVbM8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8uva32l4_f13:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
#pragma omp declare simd uniform (a) aligned (b : 8 * sizeof (int)) linear (c : 4) simdlen (8)
int
f14 (a, b, c)
return a + *b + c;
}
+/* { dg-final { scan-assembler-times "_ZGVbM8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8uva32l4_f14:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
#pragma omp declare simd uniform (a) aligned (b : 8 * sizeof (int)) linear (c : 4) simdlen (8)
int
f15 (int a, int *b, int c)
return a + *b + c;
}
+/* { dg-final { scan-assembler-times "_ZGVbM8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8uva32l4_f15:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
#pragma omp declare simd uniform (d) aligned (e : 8 * sizeof (int)) linear (f : 4) simdlen (8)
int f15 (int d, int *e, int f);
return g + h[0];
}
+/* { dg-final { scan-assembler-times "_ZGVbM4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN4l20va8_f17:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbM4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN4l12va4_f17:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+
#pragma omp declare simd aligned (i : sizeof (*i)) linear (j : 2 * sizeof (i[0]) + sizeof (j)) simdlen (4)
int
f18 (j, i)
{
return j + i[0];
}
+
+/* { dg-final { scan-assembler-times "_ZGVbM4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN4l20va8_f18:" 1 { target { { i?86-*-* x86_64-*-* } && lp64 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbM4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN4l12va4_f18:" 1 { target { { i?86-*-* x86_64-*-* } && ilp32 } } } } */
--- /dev/null
+#pragma omp declare simd linear(p:1) linear(val(q):-1) linear(s:-3)
+int
+f1 (int *p, int *q, short *s)
+{
+ return *p + *q + *s;
+}
+
+/* { dg-final { scan-assembler-times "_ZGVbM4l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVbN4l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcM4l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN4l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdM8l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8l4ln4ln6_f1:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+
+#pragma omp declare simd linear(p:s) linear(q:t) uniform (s) linear(r:s) notinbranch simdlen(8) uniform(t)
+int
+f2 (int *p, short *q, int s, int r, int t)
+{
+ return *p + *q + r;
+}
+
+/* { dg-final { scan-assembler-times "_ZGVbN8ls2ls4uls2u_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVcN8ls2ls4uls2u_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
+/* { dg-final { scan-assembler-times "_ZGVdN8ls2ls4uls2u_f2:" 1 { target { i?86-*-* x86_64-*-* } } } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp-simd" } */
+
+#pragma omp declare simd linear(a:1 + b) uniform(b) /* { dg-error ".linear. clause step .b \\+ 1. is neither constant nor a parameter" } */
+int f1 (int a, int b);
+#pragma omp declare simd linear(a:b + 1) uniform(b) /* { dg-error ".linear. clause step .b \\+ 1. is neither constant nor a parameter" } */
+int f2 (int a, int b);
+#pragma omp declare simd linear(a:2 * b) uniform(b) /* { dg-error ".linear. clause step .b \\* 2. is neither constant nor a parameter" } */
+int f3 (int a, int b);
+#pragma omp declare simd linear(a:b) /* { dg-error ".linear. clause step is a parameter .b. not specified in .uniform. clause" } */
+int f4 (int a, int b);
+#pragma omp declare simd linear(a:b) linear(b:1) /* { dg-error ".linear. clause step is a parameter .b. not specified in .uniform. clause" } */
+int f5 (int a, int b);
+#pragma omp declare simd linear(a:5 + 2 * 3)
+int f6 (int a, int b);
+const int c = 5;
+#pragma omp declare simd linear(a:c) /* { dg-error ".linear. clause step .c. is neither constant nor a parameter" } */
+int f7 (int a, int b);
+#pragma omp declare simd linear(a:2 * c + 1) /* { dg-error ".linear. clause step .\[^\n\r]*. is neither constant nor a parameter" } */
+int f8 (int a, int b);
+#pragma omp declare simd linear(a:0.5) /* { dg-error ".linear. clause step expression must be integral" } */
+int f9 (int a, int b);
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
+
+extern void bar(int);
+
+void foo (int n)
+{
+ int i;
+
+ #pragma omp for schedule(nonmonotonic:guided)
+ for (i = 0; i < n; ++i)
+ bar(i);
+}
+
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_guided_start" 1 "ompexp" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_guided_next" 1 "ompexp" } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
+
+extern void bar(int);
+
+void foo (int n)
+{
+ int i;
+
+ #pragma omp for schedule(nonmonotonic:dynamic, 2)
+ for (i = 0; i < n; ++i)
+ bar(i);
+}
+
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_dynamic_start" 1 "ompexp" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_dynamic_next" 1 "ompexp" } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
+
+extern void bar(unsigned long long);
+
+void foo (unsigned long long n)
+{
+ unsigned long long i;
+
+ #pragma omp for schedule(nonmonotonic:guided, 7)
+ for (i = 0; i < n; ++i)
+ bar(i);
+}
+
+/* { dg-final { scan-tree-dump-times "GOMP_loop_ull_nonmonotonic_guided_start" 1 "ompexp" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_ull_nonmonotonic_guided_next" 1 "ompexp" } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp -fdump-tree-ompexp" } */
+
+extern void bar(unsigned long long);
+
+void foo (unsigned long long n)
+{
+ unsigned long long i;
+
+ #pragma omp for schedule (nonmonotonic : dynamic)
+ for (i = 0; i < n; ++i)
+ bar(i);
+}
+
+/* { dg-final { scan-tree-dump-times "GOMP_loop_ull_nonmonotonic_dynamic_start" 1 "ompexp" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_ull_nonmonotonic_dynamic_next" 1 "ompexp" } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-O2 -fopenmp -fdump-tree-ssa" } */
+
+extern void bar(int);
+
+void foo (void)
+{
+ int i;
+
+ #pragma omp parallel for schedule (nonmonotonic : dynamic, 4)
+ for (i = 0; i < 37; ++i)
+ bar(i);
+}
+
+/* { dg-final { scan-tree-dump-times "GOMP_parallel_loop_nonmonotonic_dynamic" 1 "ssa" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_dynamic_start" 0 "ssa" } } */
+/* { dg-final { scan-tree-dump-times "GOMP_loop_nonmonotonic_dynamic_next" 2 "ssa" } } */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-fopenmp" } */
+
+int i, j;
+
+void
+f1 (void)
+{
+ #pragma omp for linear (i:1) /* { dg-error "iteration variable .i. should not be linear" } */
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+void
+f2 (void)
+{
+ #pragma omp distribute parallel for linear (i:1) /* { dg-error ".linear. is not valid for .#pragma omp distribute parallel for." } */
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+void
+f3 (void)
+{
+ #pragma omp parallel for linear (i:1) collapse(1) /* { dg-error "iteration variable .i. should not be linear" } */
+ for (i = 0; i < 32; i++)
+ ;
+}
+
+void
+f4 (void)
+{
+ #pragma omp for linear (i:1) linear (j:2) collapse(2) /* { dg-error "iteration variable .i. should not be linear" } */
+ for (i = 0; i < 32; i++) /* { dg-error "iteration variable .j. should not be linear" "" { target *-*-* } 33 } */
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+void
+f5 (void)
+{
+ #pragma omp target teams distribute parallel for linear (i:1) linear (j:2) collapse(2) /* { dg-error ".linear. is not valid for .#pragma omp target teams distribute parallel for." } */
+ for (i = 0; i < 32; i++)
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+void
+f6 (void)
+{
+ #pragma omp parallel for linear (i:1) collapse(2) linear (j:2) /* { dg-error "iteration variable .i. should not be linear" } */
+ for (i = 0; i < 32; i++) /* { dg-error "iteration variable .j. should not be linear" "" { target *-*-* } 51 } */
+ for (j = 0; j < 32; j+=2)
+ ;
+}
+
+#pragma omp declare target to (i, j, f2)
--- /dev/null
+int bar (int);
+int baz (int *);
+
+void
+f1 (int x)
+{
+ int i = 0, j = 0;
+ #pragma omp for
+ for (i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for /* { dg-error "initializer expression refers to iteration variable" } */
+ for (i = i; i < 16; i++)
+ ;
+ #pragma omp for
+ for (i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 5; i < 16; i += i)
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 5; i < 16; i = i + i)
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (i = j; i < 16; i = i + 2)
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (i = 0; i < 16; i = i + 2)
+ for (j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = i + 2)
+ for (j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = baz (&i); j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 16; j > (i & x); j--) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < i; j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < i + 4; j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j + 4; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < j; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < bar (j); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < baz (&i); j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 0; i < 16; i += j)
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 0; i < 16; i = j + i)
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i = bar (j) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ for (j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (i = 0; i < 16; i++)
+ for (j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f2 (int x)
+{
+ #pragma omp for
+ for (int i = 0; i < 16; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; 16 > i; i++)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = i + 2)
+ ;
+ #pragma omp for
+ for (int i = 0; i < 16; i = 2 + i)
+ ;
+ #pragma omp for /* { dg-error "initializer expression refers to iteration variable" } */
+ for (int i = i; i < 16; i++)
+ ;
+ #pragma omp for
+ for (int i = 2 * (i & x); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = bar (i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = baz (&i); i < 16; i++) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 2 * i + 17; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; 2 * i + 17 > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; bar (i) > i; i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= baz (&i); i++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i <= i; i++) /* { dg-error "invalid controlling predicate|condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for /* { dg-error "increment expression refers to iteration variable" } */
+ for (int i = 5; i < 16; i += i)
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + 2 * i) /* { dg-error "invalid increment expression|increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for /* { dg-error "increment expression refers to iteration variable" } */
+ for (int i = 5; i < 16; i = i + i)
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = i + bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i = baz (&i) + i) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += bar (i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for
+ for (int i = 5; i < 16; i += baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "initializer expression refers to iteration variable" } */
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = i; j < 16; j += 2)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = i + 2)
+ for (int j = i + 3; j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = baz (&i); j < 16; j += 2) /* { dg-error "initializer expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 16; j > (i & x); j--) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < i; j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < i + 4; j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < baz (&i); j++) /* { dg-error "condition expression refers to iteration variable" } */
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < 16; j += i)
+ ;
+ #pragma omp for collapse(2) /* { dg-error "increment expression refers to iteration variable" } */
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < 16; j = j + i)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i++)
+ for (int j = 0; j < 16; j = j + baz (&i)) /* { dg-error "increment expression refers to iteration variable" } */
+ ;
+}
+
+void
+f3 (void)
+{
+ int j = 0;
+ #pragma omp for collapse(2)
+ for (int i = j; i < 16; i = i + 2)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j + 4; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < j; i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < bar (j); i++)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i += j)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = j + i)
+ for (int j = 0; j < 16; j++)
+ ;
+ #pragma omp for collapse(2)
+ for (int i = 0; i < 16; i = bar (j) + i)
+ for (int j = 0; j < 16; j++)
+ ;
+}
OMP_CLAUSE_SCHEDULE_AUTO,
OMP_CLAUSE_SCHEDULE_RUNTIME,
OMP_CLAUSE_SCHEDULE_CILKFOR,
- OMP_CLAUSE_SCHEDULE_LAST
+ OMP_CLAUSE_SCHEDULE_MASK = (1 << 3) - 1,
+ OMP_CLAUSE_SCHEDULE_MONOTONIC = (1 << 3),
+ OMP_CLAUSE_SCHEDULE_NONMONOTONIC = (1 << 4),
+ OMP_CLAUSE_SCHEDULE_LAST = 2 * OMP_CLAUSE_SCHEDULE_NONMONOTONIC - 1
};
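A minimal standalone sketch (editorial illustration, not part of the patch) of
how the monotonic/nonmonotonic modifier bits above combine with the base
schedule kind; the SCHED_* names are local placeholders mirroring the enum
values shown, not the real tree-core.h identifiers:

/* sched-kind-sketch.c -- compiles with any C compiler.  */
#include <stdio.h>

enum sched_kind_sketch
{
  SCHED_STATIC,
  SCHED_DYNAMIC,
  SCHED_GUIDED,
  SCHED_AUTO,
  SCHED_RUNTIME,
  /* Low bits hold the base kind, higher bits hold the modifiers.  */
  SCHED_MASK = (1 << 3) - 1,
  SCHED_MONOTONIC = (1 << 3),
  SCHED_NONMONOTONIC = (1 << 4)
};

int
main (void)
{
  /* schedule (nonmonotonic: dynamic) would be encoded as:  */
  int kind = SCHED_NONMONOTONIC | SCHED_DYNAMIC;
  printf ("base kind = %d, nonmonotonic = %d\n",
          kind & SCHED_MASK, (kind & SCHED_NONMONOTONIC) != 0);
  return 0;
}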
enum omp_clause_default_kind {
case OMP_CLAUSE_SCHEDULE:
pp_string (pp, "schedule(");
+ if (OMP_CLAUSE_SCHEDULE_KIND (clause)
+ & (OMP_CLAUSE_SCHEDULE_MONOTONIC
+ | OMP_CLAUSE_SCHEDULE_NONMONOTONIC))
+ {
+ if (OMP_CLAUSE_SCHEDULE_KIND (clause)
+ & OMP_CLAUSE_SCHEDULE_MONOTONIC)
+ pp_string (pp, "monotonic");
+ else
+ pp_string (pp, "nonmonotonic");
+ if (OMP_CLAUSE_SCHEDULE_SIMD (clause))
+ pp_comma (pp);
+ else
+ pp_colon (pp);
+ }
if (OMP_CLAUSE_SCHEDULE_SIMD (clause))
pp_string (pp, "simd:");
- switch (OMP_CLAUSE_SCHEDULE_KIND (clause))
+
+ switch (OMP_CLAUSE_SCHEDULE_KIND (clause) & OMP_CLAUSE_SCHEDULE_MASK)
{
case OMP_CLAUSE_SCHEDULE_STATIC:
pp_string (pp, "static");
case GOMP_MAP_FIRSTPRIVATE_POINTER:
pp_string (pp, "firstprivate");
break;
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ pp_string (pp, "firstprivate ref");
+ break;
case GOMP_MAP_STRUCT:
pp_string (pp, "struct");
break;
+ case GOMP_MAP_ALWAYS_POINTER:
+ pp_string (pp, "always_pointer");
+ break;
default:
gcc_unreachable ();
}
print_clause_size:
if (OMP_CLAUSE_SIZE (clause))
{
- if (OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_MAP
- && (OMP_CLAUSE_MAP_KIND (clause) == GOMP_MAP_POINTER
- || OMP_CLAUSE_MAP_KIND (clause)
- == GOMP_MAP_FIRSTPRIVATE_POINTER))
- pp_string (pp, " [pointer assign, bias: ");
- else if (OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_MAP
- && OMP_CLAUSE_MAP_KIND (clause) == GOMP_MAP_TO_PSET)
- pp_string (pp, " [pointer set, len: ");
- else
- pp_string (pp, " [len: ");
+ switch (OMP_CLAUSE_CODE (clause) == OMP_CLAUSE_MAP
+ ? OMP_CLAUSE_MAP_KIND (clause) : GOMP_MAP_TO)
+ {
+ case GOMP_MAP_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_POINTER:
+ case GOMP_MAP_FIRSTPRIVATE_REFERENCE:
+ case GOMP_MAP_ALWAYS_POINTER:
+ pp_string (pp, " [pointer assign, bias: ");
+ break;
+ case GOMP_MAP_TO_PSET:
+ pp_string (pp, " [pointer set, len: ");
+ break;
+ default:
+ pp_string (pp, " [len: ");
+ break;
+ }
dump_generic_node (pp, OMP_CLAUSE_SIZE (clause),
spc, flags, false);
pp_right_bracket (pp);
case SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP:
case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_CONSTANT_STEP:
case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_CONSTANT_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP:
/* FORNOW */
i = -1;
break;
}
break;
case SIMD_CLONE_ARG_TYPE_LINEAR_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_REF_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_VAL_VARIABLE_STEP:
+ case SIMD_CLONE_ARG_TYPE_LINEAR_UVAL_VARIABLE_STEP:
default:
gcc_unreachable ();
}
DEFTREECODE (OMP_FOR, "omp_for", tcc_statement, 7)
/* OpenMP - #pragma omp simd [clause1 ... clauseN]
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (OMP_SIMD, "omp_simd", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (OMP_SIMD, "omp_simd", tcc_statement, 7)
/* Cilk Plus - #pragma simd [clause1 ... clauseN]
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (CILK_SIMD, "cilk_simd", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (CILK_SIMD, "cilk_simd", tcc_statement, 7)
/* Cilk Plus - _Cilk_for (..)
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (CILK_FOR, "cilk_for", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (CILK_FOR, "cilk_for", tcc_statement, 7)
/* OpenMP - #pragma omp distribute [clause1 ... clauseN]
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (OMP_DISTRIBUTE, "omp_distribute", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (OMP_DISTRIBUTE, "omp_distribute", tcc_statement, 7)
/* OpenMP - #pragma omp taskloop [clause1 ... clauseN]
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (OMP_TASKLOOP, "omp_taskloop", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (OMP_TASKLOOP, "omp_taskloop", tcc_statement, 7)
/* OpenMP - #pragma acc loop [clause1 ... clauseN]
- Operands like operands 1-6 of OMP_FOR. */
-DEFTREECODE (OACC_LOOP, "oacc_loop", tcc_statement, 6)
+ Operands like for OMP_FOR. */
+DEFTREECODE (OACC_LOOP, "oacc_loop", tcc_statement, 7)
/* OpenMP - #pragma omp teams [clause1 ... clauseN]
Operand 0: OMP_TEAMS_BODY: Teams body.
#define OMP_FOR_COND(NODE) TREE_OPERAND (OMP_LOOP_CHECK (NODE), 3)
#define OMP_FOR_INCR(NODE) TREE_OPERAND (OMP_LOOP_CHECK (NODE), 4)
#define OMP_FOR_PRE_BODY(NODE) TREE_OPERAND (OMP_LOOP_CHECK (NODE), 5)
-/* Note that this is only available for OMP_FOR, hence OMP_FOR_CHECK. */
-#define OMP_FOR_ORIG_DECLS(NODE) TREE_OPERAND (OMP_FOR_CHECK (NODE), 6)
+#define OMP_FOR_ORIG_DECLS(NODE) TREE_OPERAND (OMP_LOOP_CHECK (NODE), 6)
#define OMP_SECTIONS_BODY(NODE) TREE_OPERAND (OMP_SECTIONS_CHECK (NODE), 0)
#define OMP_SECTIONS_CLAUSES(NODE) TREE_OPERAND (OMP_SECTIONS_CHECK (NODE), 1)
OMP_CLAUSE_MAP with GOMP_MAP_POINTER are marked with this flag. */
#define OMP_CLAUSE_MAP_ZERO_BIAS_ARRAY_SECTION(NODE) \
(OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP)->base.public_flag)
-/* Nonzero if the same decl appears both in OMP_CLAUSE_MAP and either
- OMP_CLAUSE_PRIVATE or OMP_CLAUSE_FIRSTPRIVATE. */
-#define OMP_CLAUSE_MAP_PRIVATE(NODE) \
- TREE_PRIVATE (OMP_CLAUSE_SUBCODE_CHECK (NODE, OMP_CLAUSE_MAP))
/* Nonzero if this is a mapped array section, that might need special
treatment if OMP_CLAUSE_SIZE is zero. */
#define OMP_CLAUSE_MAP_MAYBE_ZERO_LENGTH_ARRAY_SECTION(NODE) \
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+ Ilya Verbin <ilya.verbin@intel.com>
+
+ * gomp-constants.h (GOMP_MAP_FLAG_SPECIAL_2): Define.
+ (GOMP_MAP_FLAG_ALWAYS): Remove.
+ (enum gomp_map_kind): Use GOMP_MAP_FLAG_SPECIAL_2 instead of
+ GOMP_MAP_FLAG_ALWAYS for GOMP_MAP_ALWAYS_TO, GOMP_MAP_ALWAYS_FROM,
+ GOMP_MAP_ALWAYS_TOFROM, GOMP_MAP_STRUCT, GOMP_MAP_RELEASE.
+ Add GOMP_MAP_ALWAYS_POINTER and GOMP_MAP_FIRSTPRIVATE_REFERENCE.
+ (GOMP_MAP_ALWAYS_P): Define.
+ (GOMP_TARGET_FLAG_NOWAIT): Adjust comment.
+
2015-10-27 Daniel Jacobowitz <dan@codesourcery.com>
Joseph Myers <joseph@codesourcery.com>
Mark Shinwell <shinwell@codesourcery.com>
/* Special map kinds, enumerated starting here. */
#define GOMP_MAP_FLAG_SPECIAL_0 (1 << 2)
#define GOMP_MAP_FLAG_SPECIAL_1 (1 << 3)
+#define GOMP_MAP_FLAG_SPECIAL_2 (1 << 4)
#define GOMP_MAP_FLAG_SPECIAL (GOMP_MAP_FLAG_SPECIAL_1 \
| GOMP_MAP_FLAG_SPECIAL_0)
-/* OpenMP always flag. */
-#define GOMP_MAP_FLAG_ALWAYS (1 << 6)
/* Flag to force a specific behavior (or else, trigger a run-time error). */
#define GOMP_MAP_FLAG_FORCE (1 << 7)
GOMP_MAP_FORCE_TOFROM = (GOMP_MAP_FLAG_FORCE | GOMP_MAP_TOFROM),
/* If not already present, allocate. And unconditionally copy to
device. */
- GOMP_MAP_ALWAYS_TO = (GOMP_MAP_FLAG_ALWAYS | GOMP_MAP_TO),
+ GOMP_MAP_ALWAYS_TO = (GOMP_MAP_FLAG_SPECIAL_2 | GOMP_MAP_TO),
/* If not already present, allocate. And unconditionally copy from
device. */
- GOMP_MAP_ALWAYS_FROM = (GOMP_MAP_FLAG_ALWAYS | GOMP_MAP_FROM),
+ GOMP_MAP_ALWAYS_FROM = (GOMP_MAP_FLAG_SPECIAL_2
+ | GOMP_MAP_FROM),
/* If not already present, allocate. And unconditionally copy to and from
device. */
- GOMP_MAP_ALWAYS_TOFROM = (GOMP_MAP_FLAG_ALWAYS | GOMP_MAP_TOFROM),
+ GOMP_MAP_ALWAYS_TOFROM = (GOMP_MAP_FLAG_SPECIAL_2
+ | GOMP_MAP_TOFROM),
/* Map a sparse struct; the address is the base of the structure, alignment
   its required alignment, and size is the number of adjacent entries
that belong to the struct. The adjacent entries should be sorted by
increasing address, so it is easy to determine lowest needed address
(address of the first adjacent entry) and highest needed address
(address of the last adjacent entry plus its size). */
- GOMP_MAP_STRUCT = (GOMP_MAP_FLAG_ALWAYS
+ GOMP_MAP_STRUCT = (GOMP_MAP_FLAG_SPECIAL_2
| GOMP_MAP_FLAG_SPECIAL | 0),
+ /* On a location of a pointer/reference that is assumed to be already mapped
+     earlier, store the translated address of the preceding mapping.
+ No refcount is bumped by this, and the store is done unconditionally. */
+ GOMP_MAP_ALWAYS_POINTER = (GOMP_MAP_FLAG_SPECIAL_2
+ | GOMP_MAP_FLAG_SPECIAL | 1),
/* Forced deallocation of zero length array section. */
GOMP_MAP_DELETE_ZERO_LEN_ARRAY_SECTION
- = (GOMP_MAP_FLAG_ALWAYS
+ = (GOMP_MAP_FLAG_SPECIAL_2
| GOMP_MAP_FLAG_SPECIAL | 3),
- /* OpenMP 4.1 alias for forced deallocation. */
+ /* OpenMP 4.5 alias for forced deallocation. */
GOMP_MAP_DELETE = GOMP_MAP_FORCE_DEALLOC,
/* Decrement usage count and deallocate if zero. */
- GOMP_MAP_RELEASE = (GOMP_MAP_FLAG_ALWAYS
+ GOMP_MAP_RELEASE = (GOMP_MAP_FLAG_SPECIAL_2
| GOMP_MAP_FORCE_DEALLOC),
/* Internal to GCC, not used in libgomp. */
/* Do not map, but pointer assign a pointer instead. */
- GOMP_MAP_FIRSTPRIVATE_POINTER = (GOMP_MAP_LAST | 1)
+ GOMP_MAP_FIRSTPRIVATE_POINTER = (GOMP_MAP_LAST | 1),
+ /* Do not map, but pointer assign a reference instead. */
+ GOMP_MAP_FIRSTPRIVATE_REFERENCE = (GOMP_MAP_LAST | 2)
};
#define GOMP_MAP_COPY_TO_P(X) \
#define GOMP_MAP_ALWAYS_FROM_P(X) \
(((X) == GOMP_MAP_ALWAYS_FROM) || ((X) == GOMP_MAP_ALWAYS_TOFROM))
+#define GOMP_MAP_ALWAYS_P(X) \
+ (GOMP_MAP_ALWAYS_TO_P (X) || ((X) == GOMP_MAP_ALWAYS_FROM))
+
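Because GOMP_MAP_FLAG_SPECIAL_2 is now shared by kinds that are not "always"
kinds (GOMP_MAP_STRUCT, GOMP_MAP_ALWAYS_POINTER, GOMP_MAP_RELEASE), the
ALWAYS property can no longer be tested with a single flag bit the way the
removed GOMP_MAP_FLAG_ALWAYS allowed; the new GOMP_MAP_ALWAYS_P macro
therefore compares map kinds by value.  A minimal sketch of what it accepts
and rejects (an editorial illustration, not part of the patch; it assumes
GOMP_MAP_ALWAYS_TO_P mirrors the GOMP_MAP_ALWAYS_FROM_P definition shown
above):

#include <assert.h>
#include "gomp-constants.h"   /* include/gomp-constants.h in the source tree */

int
main (void)
{
  /* True for the three OpenMP "always" map kinds...  */
  assert (GOMP_MAP_ALWAYS_P (GOMP_MAP_ALWAYS_TO));
  assert (GOMP_MAP_ALWAYS_P (GOMP_MAP_ALWAYS_FROM));
  assert (GOMP_MAP_ALWAYS_P (GOMP_MAP_ALWAYS_TOFROM));
  /* ...but not for other kinds that merely reuse GOMP_MAP_FLAG_SPECIAL_2.  */
  assert (!GOMP_MAP_ALWAYS_P (GOMP_MAP_STRUCT));
  assert (!GOMP_MAP_ALWAYS_P (GOMP_MAP_RELEASE));
  return 0;
}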
/* Asynchronous behavior. Keep in sync with
libgomp/{openacc.h,openacc.f90,openacc_lib.h}:acc_async_t. */
#define GOMP_TASK_FLAG_IF (1 << 10)
#define GOMP_TASK_FLAG_NOGROUP (1 << 11)
-/* GOMP_target{_41,update_41,enter_exit_data} flags argument. */
+/* GOMP_target{_ext,update_ext,enter_exit_data} flags argument. */
#define GOMP_TARGET_FLAG_NOWAIT (1 << 0)
#define GOMP_TARGET_FLAG_EXIT_DATA (1 << 1)
/* Internal to libgomp. */
+2015-11-05 Jakub Jelinek <jakub@redhat.com>
+ Ilya Verbin <ilya.verbin@intel.com>
+
+ * libgomp_g.h (GOMP_loop_nonmonotonic_dynamic_next,
+ GOMP_loop_nonmonotonic_dynamic_start,
+ GOMP_loop_nonmonotonic_guided_next,
+ GOMP_loop_nonmonotonic_guided_start,
+ GOMP_loop_ull_nonmonotonic_dynamic_next,
+ GOMP_loop_ull_nonmonotonic_dynamic_start,
+ GOMP_loop_ull_nonmonotonic_guided_next,
+ GOMP_loop_ull_nonmonotonic_guided_start,
+ GOMP_parallel_loop_nonmonotonic_dynamic,
+ GOMP_parallel_loop_nonmonotonic_guided): New prototypes.
+ (GOMP_target_41): Renamed to ...
+ (GOMP_target_ext): ... this. Add num_teams and thread_limit
+ arguments.
+ (GOMP_target_data_41): Renamed to ...
+ (GOMP_target_data_ext): ... this.
+ (GOMP_target_update_41): Renamed to ...
+ (GOMP_target_update_ext): ... this.
+ * libgomp.map (GOMP_4.5): Export GOMP_target_ext,
+ GOMP_target_data_ext and GOMP_target_update_ext instead of
+ GOMP_target_41, GOMP_target_data_41 and GOMP_target_update_41.
+ Export GOMP_loop_nonmonotonic_dynamic_next,
+ GOMP_loop_nonmonotonic_dynamic_start,
+ GOMP_loop_nonmonotonic_guided_next,
+ GOMP_loop_nonmonotonic_guided_start,
+ GOMP_loop_ull_nonmonotonic_dynamic_next,
+ GOMP_loop_ull_nonmonotonic_dynamic_start,
+ GOMP_loop_ull_nonmonotonic_guided_next,
+ GOMP_loop_ull_nonmonotonic_guided_start,
+ GOMP_parallel_loop_nonmonotonic_dynamic and
+ GOMP_parallel_loop_nonmonotonic_guided.
+ * loop.c (GOMP_parallel_loop_nonmonotonic_dynamic,
+ GOMP_parallel_loop_nonmonotonic_guided,
+ GOMP_loop_nonmonotonic_dynamic_start,
+ GOMP_loop_nonmonotonic_guided_start,
+ GOMP_loop_nonmonotonic_dynamic_next,
+ GOMP_loop_nonmonotonic_guided_next): New aliases or functions.
+ * loop_ull.c (GOMP_loop_ull_nonmonotonic_dynamic_start,
+ GOMP_loop_ull_nonmonotonic_guided_start,
+ GOMP_loop_ull_nonmonotonic_dynamic_next,
+ GOMP_loop_ull_nonmonotonic_guided_next): Likewise.
+ * target.c (gomp_map_0len_lookup, gomp_map_val): New inline
+ functions.
+ (gomp_map_vars): Handle GOMP_MAP_ALWAYS_POINTER. For
+ GOMP_MAP_ZERO_LEN_ARRAY_SECTION use gomp_map_0len_lookup.
+ Use gomp_map_val function.
+ (gomp_target_fallback_firstprivate): New static function.
+ (GOMP_target_41): Renamed to ...
+ (GOMP_target_ext): ... this. Add num_teams and thread_limit
+ arguments. Move firstprivate fallback handling into a new
+ function.
+ (GOMP_target_data_41): Renamed to ...
+ (GOMP_target_data_ext): ... this.
+ (GOMP_target_update_41): Renamed to ...
+ (GOMP_target_update_ext): ... this.
+ (gomp_exit_data): For GOMP_MAP_*ZERO_LEN* use
+ gomp_map_0len_lookup instead of gomp_map_lookup.
+ (omp_target_is_present): Use gomp_map_0len_lookup instead of
+ gomp_map_lookup.
+ * testsuite/libgomp.c/target-28.c: Likewise.
+ * testsuite/libgomp.c/monotonic-1.c: New test.
+ * testsuite/libgomp.c/monotonic-2.c: New test.
+ * testsuite/libgomp.c/nonmonotonic-1.c: New test.
+ * testsuite/libgomp.c/nonmonotonic-2.c: New test.
+ * testsuite/libgomp.c/pr66199-5.c: New test.
+ * testsuite/libgomp.c/pr66199-6.c: New test.
+ * testsuite/libgomp.c/pr66199-7.c: New test.
+ * testsuite/libgomp.c/pr66199-8.c: New test.
+ * testsuite/libgomp.c/pr66199-9.c: New test.
+ * testsuite/libgomp.c/reduction-11.c: New test.
+ * testsuite/libgomp.c/reduction-12.c: New test.
+ * testsuite/libgomp.c/reduction-13.c: New test.
+ * testsuite/libgomp.c/reduction-14.c: New test.
+ * testsuite/libgomp.c/reduction-15.c: New test.
+ * testsuite/libgomp.c/target-12.c (main): Adjust for
+ omp_target_is_present change for one-past-last element.
+ * testsuite/libgomp.c/target-17.c (foo): Drop tests where
+ the same var is both mapped and privatized.
+ * testsuite/libgomp.c/target-19.c (foo): Adjust for different
+ handling of zero-length array sections.
+ * testsuite/libgomp.c/target-28.c: New test.
+ * testsuite/libgomp.c/target-29.c: New test.
+ * testsuite/libgomp.c/target-30.c: New test.
+ * testsuite/libgomp.c/target-teams-1.c: New test.
+ * testsuite/libgomp.c++/member-6.C: New test.
+ * testsuite/libgomp.c++/member-7.C: New test.
+ * testsuite/libgomp.c++/monotonic-1.C: New test.
+ * testsuite/libgomp.c++/monotonic-2.C: New test.
+ * testsuite/libgomp.c++/nonmonotonic-1.C: New test.
+ * testsuite/libgomp.c++/nonmonotonic-2.C: New test.
+ * testsuite/libgomp.c++/pr66199-3.C: New test.
+ * testsuite/libgomp.c++/pr66199-4.C: New test.
+ * testsuite/libgomp.c++/pr66199-5.C: New test.
+ * testsuite/libgomp.c++/pr66199-6.C: New test.
+ * testsuite/libgomp.c++/pr66199-7.C: New test.
+ * testsuite/libgomp.c++/pr66199-8.C: New test.
+ * testsuite/libgomp.c++/pr66199-9.C: New test.
+ * testsuite/libgomp.c++/reduction-11.C: New test.
+ * testsuite/libgomp.c++/reduction-12.C: New test.
+ * testsuite/libgomp.c++/target-13.C: New test.
+ * testsuite/libgomp.c++/target-14.C: New test.
+ * testsuite/libgomp.c++/target-15.C: New test.
+ * testsuite/libgomp.c++/target-16.C: New test.
+ * testsuite/libgomp.c++/target-17.C: New test.
+ * testsuite/libgomp.c++/target-18.C: New test.
+ * testsuite/libgomp.c++/target-19.C: New test.
+
2015-11-04 Nathan Sidwell <nathan@codesourcery.com>
* testsuite/libgomp.oacc-fortran/reduction-1.f90: Fix dimensions
GOMP_4.5 {
global:
- GOMP_target_41;
- GOMP_target_data_41;
- GOMP_target_update_41;
+ GOMP_target_ext;
+ GOMP_target_data_ext;
+ GOMP_target_update_ext;
GOMP_target_enter_exit_data;
GOMP_taskloop;
GOMP_taskloop_ull;
GOMP_loop_ull_doacross_static_start;
GOMP_doacross_ull_post;
GOMP_doacross_ull_wait;
+ GOMP_loop_nonmonotonic_dynamic_next;
+ GOMP_loop_nonmonotonic_dynamic_start;
+ GOMP_loop_nonmonotonic_guided_next;
+ GOMP_loop_nonmonotonic_guided_start;
+ GOMP_loop_ull_nonmonotonic_dynamic_next;
+ GOMP_loop_ull_nonmonotonic_dynamic_start;
+ GOMP_loop_ull_nonmonotonic_guided_next;
+ GOMP_loop_ull_nonmonotonic_guided_start;
+ GOMP_parallel_loop_nonmonotonic_dynamic;
+ GOMP_parallel_loop_nonmonotonic_guided;
} GOMP_4.0.1;
OACC_2.0 {
extern bool GOMP_loop_dynamic_start (long, long, long, long, long *, long *);
extern bool GOMP_loop_guided_start (long, long, long, long, long *, long *);
extern bool GOMP_loop_runtime_start (long, long, long, long *, long *);
+extern bool GOMP_loop_nonmonotonic_dynamic_start (long, long, long, long,
+ long *, long *);
+extern bool GOMP_loop_nonmonotonic_guided_start (long, long, long, long,
+ long *, long *);
extern bool GOMP_loop_ordered_static_start (long, long, long, long,
long *, long *);
extern bool GOMP_loop_dynamic_next (long *, long *);
extern bool GOMP_loop_guided_next (long *, long *);
extern bool GOMP_loop_runtime_next (long *, long *);
+extern bool GOMP_loop_nonmonotonic_dynamic_next (long *, long *);
+extern bool GOMP_loop_nonmonotonic_guided_next (long *, long *);
extern bool GOMP_loop_ordered_static_next (long *, long *);
extern bool GOMP_loop_ordered_dynamic_next (long *, long *);
extern void GOMP_parallel_loop_runtime (void (*)(void *), void *,
unsigned, long, long, long,
unsigned);
+extern void GOMP_parallel_loop_nonmonotonic_dynamic (void (*)(void *), void *,
+ unsigned, long, long,
+ long, long, unsigned);
+extern void GOMP_parallel_loop_nonmonotonic_guided (void (*)(void *), void *,
+ unsigned, long, long,
+ long, long, unsigned);
extern void GOMP_loop_end (void);
extern void GOMP_loop_end_nowait (void);
unsigned long long,
unsigned long long *,
unsigned long long *);
+extern bool GOMP_loop_ull_nonmonotonic_dynamic_start (bool, unsigned long long,
+ unsigned long long,
+ unsigned long long,
+ unsigned long long,
+ unsigned long long *,
+ unsigned long long *);
+extern bool GOMP_loop_ull_nonmonotonic_guided_start (bool, unsigned long long,
+ unsigned long long,
+ unsigned long long,
+ unsigned long long,
+ unsigned long long *,
+ unsigned long long *);
extern bool GOMP_loop_ull_ordered_static_start (bool, unsigned long long,
unsigned long long,
unsigned long long *);
extern bool GOMP_loop_ull_runtime_next (unsigned long long *,
unsigned long long *);
+extern bool GOMP_loop_ull_nonmonotonic_dynamic_next (unsigned long long *,
+ unsigned long long *);
+extern bool GOMP_loop_ull_nonmonotonic_guided_next (unsigned long long *,
+ unsigned long long *);
extern bool GOMP_loop_ull_ordered_static_next (unsigned long long *,
unsigned long long *);
extern void GOMP_target (int, void (*) (void *), const void *,
size_t, void **, size_t *, unsigned char *);
-extern void GOMP_target_41 (int, void (*) (void *), size_t, void **, size_t *,
- unsigned short *, unsigned int, void **);
+extern void GOMP_target_ext (int, void (*) (void *), size_t, void **, size_t *,
+ unsigned short *, unsigned int, void **,
+ int, int);
extern void GOMP_target_data (int, const void *,
size_t, void **, size_t *, unsigned char *);
-extern void GOMP_target_data_41 (int, size_t, void **, size_t *,
- unsigned short *);
+extern void GOMP_target_data_ext (int, size_t, void **, size_t *,
+ unsigned short *);
extern void GOMP_target_end_data (void);
extern void GOMP_target_update (int, const void *,
size_t, void **, size_t *, unsigned char *);
-extern void GOMP_target_update_41 (int, size_t, void **, size_t *,
- unsigned short *, unsigned int, void **);
+extern void GOMP_target_update_ext (int, size_t, void **, size_t *,
+ unsigned short *, unsigned int, void **);
extern void GOMP_target_enter_exit_data (int, size_t, void **, size_t *,
unsigned short *, unsigned int,
void **);
return !gomp_iter_static_next (istart, iend);
}
+/* The current dynamic implementation is always monotonic.  The
+   entrypoints without nonmonotonic in their name are required to remain
+   monotonic, but the nonmonotonic ones could later be changed to use
+   work-stealing for improved scalability.  */
+
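The comment above explains why the GOMP_loop_nonmonotonic_* entrypoints added
later in this file can simply forward to (or alias) the existing dynamic and
guided code for now: only the nonmonotonic variants are allowed to diverge in
the future.  At the source level they correspond to the OpenMP 4.5 schedule
modifiers.  A minimal sketch of a loop that a compiler may lower to the
GOMP_loop_nonmonotonic_dynamic_* entrypoints (an editorial illustration, not
part of the patch):

#include <stdio.h>

int
main (void)
{
  long sum = 0;
  /* The nonmonotonic modifier allows the runtime to hand a thread chunks
     out of increasing iteration order, which is what a work-stealing
     schedule would need; plain dynamic (or the monotonic modifier) keeps
     the per-thread ordering guarantee.  */
  #pragma omp parallel for schedule(nonmonotonic: dynamic, 4) reduction(+:sum)
  for (int i = 0; i < 1024; i++)
    sum += i;
  printf ("%ld\n", sum);   /* 523776 */
  return 0;
}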
static bool
gomp_loop_dynamic_start (long start, long end, long incr, long chunk_size,
long *istart, long *iend)
return ret;
}
+/* As with dynamic above, though the open question is how the chunk sizes
+   can be decreased without central locking or atomics.  */
+
static bool
gomp_loop_guided_start (long start, long end, long incr, long chunk_size,
long *istart, long *iend)
GOMP_parallel_end ();
}
+#ifdef HAVE_ATTRIBUTE_ALIAS
+extern __typeof(GOMP_parallel_loop_dynamic) GOMP_parallel_loop_nonmonotonic_dynamic
+ __attribute__((alias ("GOMP_parallel_loop_dynamic")));
+extern __typeof(GOMP_parallel_loop_guided) GOMP_parallel_loop_nonmonotonic_guided
+ __attribute__((alias ("GOMP_parallel_loop_guided")));
+#else
+void
+GOMP_parallel_loop_nonmonotonic_dynamic (void (*fn) (void *), void *data,
+ unsigned num_threads, long start,
+ long end, long incr, long chunk_size,
+ unsigned flags)
+{
+ gomp_parallel_loop_start (fn, data, num_threads, start, end, incr,
+ GFS_DYNAMIC, chunk_size, flags);
+ fn (data);
+ GOMP_parallel_end ();
+}
+
+void
+GOMP_parallel_loop_nonmonotonic_guided (void (*fn) (void *), void *data,
+ unsigned num_threads, long start,
+ long end, long incr, long chunk_size,
+ unsigned flags)
+{
+ gomp_parallel_loop_start (fn, data, num_threads, start, end, incr,
+ GFS_GUIDED, chunk_size, flags);
+ fn (data);
+ GOMP_parallel_end ();
+}
+#endif
+
void
GOMP_parallel_loop_runtime (void (*fn) (void *), void *data,
unsigned num_threads, long start, long end,
__attribute__((alias ("gomp_loop_dynamic_start")));
extern __typeof(gomp_loop_guided_start) GOMP_loop_guided_start
__attribute__((alias ("gomp_loop_guided_start")));
+extern __typeof(gomp_loop_dynamic_start) GOMP_loop_nonmonotonic_dynamic_start
+ __attribute__((alias ("gomp_loop_dynamic_start")));
+extern __typeof(gomp_loop_guided_start) GOMP_loop_nonmonotonic_guided_start
+ __attribute__((alias ("gomp_loop_guided_start")));
extern __typeof(gomp_loop_ordered_static_start) GOMP_loop_ordered_static_start
__attribute__((alias ("gomp_loop_ordered_static_start")));
__attribute__((alias ("gomp_loop_dynamic_next")));
extern __typeof(gomp_loop_guided_next) GOMP_loop_guided_next
__attribute__((alias ("gomp_loop_guided_next")));
+extern __typeof(gomp_loop_dynamic_next) GOMP_loop_nonmonotonic_dynamic_next
+ __attribute__((alias ("gomp_loop_dynamic_next")));
+extern __typeof(gomp_loop_guided_next) GOMP_loop_nonmonotonic_guided_next
+ __attribute__((alias ("gomp_loop_guided_next")));
extern __typeof(gomp_loop_ordered_static_next) GOMP_loop_ordered_static_next
__attribute__((alias ("gomp_loop_ordered_static_next")));
return gomp_loop_guided_start (start, end, incr, chunk_size, istart, iend);
}
+bool
+GOMP_loop_nonmonotonic_dynamic_start (long start, long end, long incr,
+ long chunk_size, long *istart,
+ long *iend)
+{
+ return gomp_loop_dynamic_start (start, end, incr, chunk_size, istart, iend);
+}
+
+bool
+GOMP_loop_nonmonotonic_guided_start (long start, long end, long incr,
+ long chunk_size, long *istart, long *iend)
+{
+ return gomp_loop_guided_start (start, end, incr, chunk_size, istart, iend);
+}
+
bool
GOMP_loop_ordered_static_start (long start, long end, long incr,
long chunk_size, long *istart, long *iend)
return gomp_loop_guided_next (istart, iend);
}
+bool
+GOMP_loop_nonmonotonic_dynamic_next (long *istart, long *iend)
+{
+ return gomp_loop_dynamic_next (istart, iend);
+}
+
+bool
+GOMP_loop_nonmonotonic_guided_next (long *istart, long *iend)
+{
+ return gomp_loop_guided_next (istart, iend);
+}
+
bool
GOMP_loop_ordered_static_next (long *istart, long *iend)
{
__attribute__((alias ("gomp_loop_ull_dynamic_start")));
extern __typeof(gomp_loop_ull_guided_start) GOMP_loop_ull_guided_start
__attribute__((alias ("gomp_loop_ull_guided_start")));
+extern __typeof(gomp_loop_ull_dynamic_start) GOMP_loop_ull_nonmonotonic_dynamic_start
+ __attribute__((alias ("gomp_loop_ull_dynamic_start")));
+extern __typeof(gomp_loop_ull_guided_start) GOMP_loop_ull_nonmonotonic_guided_start
+ __attribute__((alias ("gomp_loop_ull_guided_start")));
extern __typeof(gomp_loop_ull_ordered_static_start) GOMP_loop_ull_ordered_static_start
__attribute__((alias ("gomp_loop_ull_ordered_static_start")));
__attribute__((alias ("gomp_loop_ull_dynamic_next")));
extern __typeof(gomp_loop_ull_guided_next) GOMP_loop_ull_guided_next
__attribute__((alias ("gomp_loop_ull_guided_next")));
+extern __typeof(gomp_loop_ull_dynamic_next) GOMP_loop_ull_nonmonotonic_dynamic_next
+ __attribute__((alias ("gomp_loop_ull_dynamic_next")));
+extern __typeof(gomp_loop_ull_guided_next) GOMP_loop_ull_nonmonotonic_guided_next
+ __attribute__((alias ("gomp_loop_ull_guided_next")));
extern __typeof(gomp_loop_ull_ordered_static_next) GOMP_loop_ull_ordered_static_next
__attribute__((alias ("gomp_loop_ull_ordered_static_next")));
iend);
}
+bool
+GOMP_loop_ull_nonmonotonic_dynamic_start (bool up, gomp_ull start,
+ gomp_ull end, gomp_ull incr,
+ gomp_ull chunk_size,
+ gomp_ull *istart, gomp_ull *iend)
+{
+ return gomp_loop_ull_dynamic_start (up, start, end, incr, chunk_size, istart,
+ iend);
+}
+
+bool
+GOMP_loop_ull_nonmonotonic_guided_start (bool up, gomp_ull start, gomp_ull end,
+ gomp_ull incr, gomp_ull chunk_size,
+ gomp_ull *istart, gomp_ull *iend)
+{
+ return gomp_loop_ull_guided_start (up, start, end, incr, chunk_size, istart,
+ iend);
+}
+
bool
GOMP_loop_ull_ordered_static_start (bool up, gomp_ull start, gomp_ull end,
gomp_ull incr, gomp_ull chunk_size,
return gomp_loop_ull_guided_next (istart, iend);
}
+bool
+GOMP_loop_ull_nonmonotonic_dynamic_next (gomp_ull *istart, gomp_ull *iend)
+{
+ return gomp_loop_ull_dynamic_next (istart, iend);
+}
+
+bool
+GOMP_loop_ull_nonmonotonic_guided_next (gomp_ull *istart, gomp_ull *iend)
+{
+ return gomp_loop_ull_guided_next (istart, iend);
+}
+
bool
GOMP_loop_ull_ordered_static_next (gomp_ull *istart, gomp_ull *iend)
{
return splay_tree_lookup (mem_map, key);
}
-/* Handle the case where gomp_map_lookup found oldn for newn.
+/* Like splay_tree_lookup, but if KEY describes a zero-length host range,
+   extend it by one byte for the lookup so that an enclosing mapping is
+   still found.  */
+static inline splay_tree_key
+gomp_map_0len_lookup (splay_tree mem_map, splay_tree_key key)
+{
+ if (key->host_start != key->host_end)
+ return splay_tree_lookup (mem_map, key);
+
+ key->host_end++;
+ splay_tree_key n = splay_tree_lookup (mem_map, key);
+ key->host_end--;
+ return n;
+}
+
+/* Handle the case where gomp_map_lookup, splay_tree_lookup or
+ gomp_map_0len_lookup found oldn for newn.
Helper function of gomp_map_vars. */
static inline void
(void *) cur_node.host_end);
}
+/* Compute the device address that should be passed to the offload region
+   for the I-th mapping entry, decoding the special OFFSET encodings used
+   for entries that do not carry their own mapping.  */
+static inline uintptr_t
+gomp_map_val (struct target_mem_desc *tgt, void **hostaddrs, size_t i)
+{
+ if (tgt->list[i].key != NULL)
+ return tgt->list[i].key->tgt->tgt_start
+ + tgt->list[i].key->tgt_offset
+ + tgt->list[i].offset;
+ if (tgt->list[i].offset == ~(uintptr_t) 0)
+ return (uintptr_t) hostaddrs[i];
+ if (tgt->list[i].offset == ~(uintptr_t) 1)
+ return 0;
+ if (tgt->list[i].offset == ~(uintptr_t) 2)
+ return tgt->list[i + 1].key->tgt->tgt_start
+ + tgt->list[i + 1].key->tgt_offset
+ + tgt->list[i + 1].offset
+ + (uintptr_t) hostaddrs[i]
+ - (uintptr_t) hostaddrs[i + 1];
+ return tgt->tgt_start + tgt->list[i].offset;
+}
+
attribute_hidden struct target_mem_desc *
gomp_map_vars (struct gomp_device_descr *devicep, size_t mapnum,
void **hostaddrs, void **devaddrs, size_t *sizes, void *kinds,
i--;
continue;
}
+ else if ((kind & typemask) == GOMP_MAP_ALWAYS_POINTER)
+ {
+ tgt->list[i].key = NULL;
+ tgt->list[i].offset = ~(uintptr_t) 1;
+ has_firstprivate = true;
+ continue;
+ }
cur_node.host_start = (uintptr_t) hostaddrs[i];
if (!GOMP_MAP_POINTER_P (kind & typemask))
cur_node.host_end = cur_node.host_start + sizes[i];
splay_tree_key n;
if ((kind & typemask) == GOMP_MAP_ZERO_LEN_ARRAY_SECTION)
{
- n = gomp_map_lookup (mem_map, &cur_node);
+ n = gomp_map_0len_lookup (mem_map, &cur_node);
if (!n)
{
tgt->list[i].key = NULL;
sizes, kinds);
i--;
continue;
+ case GOMP_MAP_ALWAYS_POINTER:
+ cur_node.host_start = (uintptr_t) hostaddrs[i];
+ cur_node.host_end = cur_node.host_start + sizeof (void *);
+ n = splay_tree_lookup (mem_map, &cur_node);
+ if (n == NULL
+ || n->host_start > cur_node.host_start
+ || n->host_end < cur_node.host_end)
+ {
+ gomp_mutex_unlock (&devicep->lock);
+ gomp_fatal ("always pointer not mapped");
+ }
+ if ((get_kind (short_mapkind, kinds, i - 1) & typemask)
+ != GOMP_MAP_ALWAYS_POINTER)
+ cur_node.tgt_offset = gomp_map_val (tgt, hostaddrs, i - 1);
+ if (cur_node.tgt_offset)
+ cur_node.tgt_offset -= sizes[i];
+ devicep->host2dev_func (devicep->target_id,
+ (void *) (n->tgt->tgt_start
+ + n->tgt_offset
+ + cur_node.host_start
+ - n->host_start),
+ (void *) &cur_node.tgt_offset,
+ sizeof (void *));
+ cur_node.tgt_offset = n->tgt->tgt_start + n->tgt_offset
+ + cur_node.host_start - n->host_start;
+ continue;
default:
break;
}
{
for (i = 0; i < mapnum; i++)
{
- if (tgt->list[i].key == NULL)
- {
- if (tgt->list[i].offset == ~(uintptr_t) 0)
- cur_node.tgt_offset = (uintptr_t) hostaddrs[i];
- else if (tgt->list[i].offset == ~(uintptr_t) 1)
- cur_node.tgt_offset = 0;
- else if (tgt->list[i].offset == ~(uintptr_t) 2)
- cur_node.tgt_offset = tgt->list[i + 1].key->tgt->tgt_start
- + tgt->list[i + 1].key->tgt_offset
- + tgt->list[i + 1].offset
- + (uintptr_t) hostaddrs[i]
- - (uintptr_t) hostaddrs[i + 1];
- else
- cur_node.tgt_offset = tgt->tgt_start
- + tgt->list[i].offset;
- }
- else
- cur_node.tgt_offset = tgt->list[i].key->tgt->tgt_start
- + tgt->list[i].key->tgt_offset
- + tgt->list[i].offset;
+ cur_node.tgt_offset = gomp_map_val (tgt, hostaddrs, i);
/* FIXME: see above FIXME comment. */
devicep->host2dev_func (devicep->target_id,
(void *) (tgt->tgt_start
devicep->is_initialized = false;
}
-/* Host fallback for GOMP_target{,_41} routines. */
+/* Host fallback for GOMP_target{,_ext} routines. */
static void
gomp_target_fallback (void (*fn) (void *), void **hostaddrs)
*thr = old_thr;
}
-/* Helper function of GOMP_target{,_41} routines. */
+/* Host fallback with firstprivate map-type handling. */
+
+static void
+gomp_target_fallback_firstprivate (void (*fn) (void *), size_t mapnum,
+ void **hostaddrs, size_t *sizes,
+ unsigned short *kinds)
+{
+ size_t i, tgt_align = 0, tgt_size = 0;
+ char *tgt = NULL;
+ for (i = 0; i < mapnum; i++)
+ if ((kinds[i] & 0xff) == GOMP_MAP_FIRSTPRIVATE)
+ {
+ size_t align = (size_t) 1 << (kinds[i] >> 8);
+ if (tgt_align < align)
+ tgt_align = align;
+ tgt_size = (tgt_size + align - 1) & ~(align - 1);
+ tgt_size += sizes[i];
+ }
+ if (tgt_align)
+ {
+ tgt = gomp_alloca (tgt_size + tgt_align - 1);
+ uintptr_t al = (uintptr_t) tgt & (tgt_align - 1);
+ if (al)
+ tgt += tgt_align - al;
+ tgt_size = 0;
+ for (i = 0; i < mapnum; i++)
+ if ((kinds[i] & 0xff) == GOMP_MAP_FIRSTPRIVATE)
+ {
+ size_t align = (size_t) 1 << (kinds[i] >> 8);
+ tgt_size = (tgt_size + align - 1) & ~(align - 1);
+ memcpy (tgt + tgt_size, hostaddrs[i], sizes[i]);
+ hostaddrs[i] = tgt + tgt_size;
+ tgt_size = tgt_size + sizes[i];
+ }
+ }
+ gomp_target_fallback (fn, hostaddrs);
+}
+
+/* Helper function of GOMP_target{,_ext} routines. */
static void *
gomp_get_target_fn_addr (struct gomp_device_descr *devicep,
gomp_unmap_vars (tgt_vars, true);
}
+/* Like GOMP_target, but KINDS is 16-bit, UNUSED is no longer present,
+ and several arguments have been added:
+ FLAGS is a bitmask, see GOMP_TARGET_FLAG_* in gomp-constants.h.
+   DEPEND is an array of dependencies, see GOMP_task for details.
+   NUM_TEAMS is positive if GOMP_teams will be called in the body with
+   that value, or 1 if the teams construct is not present, or 0 if the
+   teams construct has no num_teams clause and the choice is therefore
+   implementation defined, or -1 if it can't be determined on the host
+   what value GOMP_teams will have on the device.
+   THREAD_LIMIT similarly is positive if GOMP_teams will be called in the
+   body with that value, or 0 if the teams construct has no thread_limit
+   clause or the teams construct is not present, or -1 if it can't be
+   determined on the host what value GOMP_teams will have on the device.  */
+
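As an illustration of the NUM_TEAMS / THREAD_LIMIT encoding described in the
comment above (an editorial sketch, not part of the patch): for the first
region below the host compiler can determine both clause values, so it may
pass num_teams == 4 and thread_limit == 2 to GOMP_target_ext, while the
second region has no teams construct and would get num_teams == 1 and
thread_limit == 0.

void
foo (int *a, int n)
{
  /* Both clauses are compile-time constants, so the values GOMP_teams will
     be called with are known on the host.  */
  #pragma omp target teams distribute parallel for \
      num_teams(4) thread_limit(2) map(tofrom: a[0:n])
  for (int i = 0; i < n; i++)
    a[i] += i;

  /* No teams construct at all.  */
  #pragma omp target map(tofrom: a[0:n])
  for (int i = 0; i < n; i++)
    a[i] *= 2;
}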
void
-GOMP_target_41 (int device, void (*fn) (void *), size_t mapnum,
- void **hostaddrs, size_t *sizes, unsigned short *kinds,
- unsigned int flags, void **depend)
+GOMP_target_ext (int device, void (*fn) (void *), size_t mapnum,
+ void **hostaddrs, size_t *sizes, unsigned short *kinds,
+ unsigned int flags, void **depend, int num_teams,
+ int thread_limit)
{
struct gomp_device_descr *devicep = resolve_device (device);
+ (void) num_teams;
+ (void) thread_limit;
+
/* If there are depend clauses, but nowait is not present,
block the parent task until the dependencies are resolved
and then just continue with the rest of the function as if it
if (devicep == NULL
|| !(devicep->capabilities & GOMP_OFFLOAD_CAP_OPENMP_400))
{
- size_t i, tgt_align = 0, tgt_size = 0;
- char *tgt = NULL;
- for (i = 0; i < mapnum; i++)
- if ((kinds[i] & 0xff) == GOMP_MAP_FIRSTPRIVATE)
- {
- size_t align = (size_t) 1 << (kinds[i] >> 8);
- if (tgt_align < align)
- tgt_align = align;
- tgt_size = (tgt_size + align - 1) & ~(align - 1);
- tgt_size += sizes[i];
- }
- if (tgt_align)
- {
- tgt = gomp_alloca (tgt_size + tgt_align - 1);
- uintptr_t al = (uintptr_t) tgt & (tgt_align - 1);
- if (al)
- tgt += tgt_align - al;
- tgt_size = 0;
- for (i = 0; i < mapnum; i++)
- if ((kinds[i] & 0xff) == GOMP_MAP_FIRSTPRIVATE)
- {
- size_t align = (size_t) 1 << (kinds[i] >> 8);
- tgt_size = (tgt_size + align - 1) & ~(align - 1);
- memcpy (tgt + tgt_size, hostaddrs[i], sizes[i]);
- hostaddrs[i] = tgt + tgt_size;
- tgt_size = tgt_size + sizes[i];
- }
- }
- gomp_target_fallback (fn, hostaddrs);
+ gomp_target_fallback_firstprivate (fn, mapnum, hostaddrs, sizes, kinds);
return;
}
gomp_unmap_vars (tgt_vars, true);
}
-/* Host fallback for GOMP_target_data{,_41} routines. */
+/* Host fallback for GOMP_target_data{,_ext} routines. */
static void
gomp_target_data_fallback (void)
}
void
-GOMP_target_data_41 (int device, size_t mapnum, void **hostaddrs, size_t *sizes,
- unsigned short *kinds)
+GOMP_target_data_ext (int device, size_t mapnum, void **hostaddrs,
+ size_t *sizes, unsigned short *kinds)
{
struct gomp_device_descr *devicep = resolve_device (device);
}
void
-GOMP_target_update_41 (int device, size_t mapnum, void **hostaddrs,
- size_t *sizes, unsigned short *kinds,
- unsigned int flags, void **depend)
+GOMP_target_update_ext (int device, size_t mapnum, void **hostaddrs,
+ size_t *sizes, unsigned short *kinds,
+ unsigned int flags, void **depend)
{
struct gomp_device_descr *devicep = resolve_device (device);
cur_node.host_end = cur_node.host_start + sizes[i];
splay_tree_key k = (kind == GOMP_MAP_DELETE_ZERO_LEN_ARRAY_SECTION
|| kind == GOMP_MAP_ZERO_LEN_ARRAY_SECTION)
- ? gomp_map_lookup (&devicep->mem_map, &cur_node)
+ ? gomp_map_0len_lookup (&devicep->mem_map, &cur_node)
: splay_tree_lookup (&devicep->mem_map, &cur_node);
if (!k)
continue;
struct gomp_target_task *ttask = (struct gomp_target_task *) data;
if (ttask->fn != NULL)
{
- /* GOMP_target_41 */
+ /* GOMP_target_ext */
}
else if (ttask->devicep == NULL
|| !(ttask->devicep->capabilities & GOMP_OFFLOAD_CAP_OPENMP_400))
cur_node.host_start = (uintptr_t) ptr;
cur_node.host_end = cur_node.host_start;
- splay_tree_key n = gomp_map_lookup (mem_map, &cur_node);
+ splay_tree_key n = gomp_map_0len_lookup (mem_map, &cur_node);
int ret = n != NULL;
gomp_mutex_unlock (&devicep->lock);
return ret;
--- /dev/null
+// { dg-do run }
+
+#include <omp.h>
+
+struct R { R () {}; ~R () {}; int r; };
+struct T { T () {}; virtual ~T () {}; int t; };
+int c;
+struct A : public R, virtual public T { A () : b(c) {} int a; int &b; void m1 (); };
+
+void
+take (int &a, int &b, int &c, int &d)
+{
+ asm volatile ("" : : "g" (&a), "g" (&b), "g" (&c), "g" (&d) : "memory");
+}
+
+void
+A::m1 ()
+{
+ #pragma omp parallel private (a, T::t) shared (r, A::b) default(none)
+ {
+ int q = omp_get_thread_num (), q2;
+ a = q;
+ t = 3 * q;
+ #pragma omp single copyprivate (q2)
+ {
+ r = 2 * q;
+ b = 4 * q;
+ q2 = q;
+ }
+ take (a, r, t, b);
+ #pragma omp barrier
+ if (A::a != q || R::r != 2 * q2 || T::t != 3 * q || A::b != 4 * q2)
+ __builtin_abort ();
+ }
+ a = 7;
+ r = 8;
+ t = 9;
+ b = 10;
+ #pragma omp parallel shared (A::a) default (none) firstprivate (R::r, b) shared (t)
+ {
+ int q = omp_get_thread_num (), q2;
+ take (A::a, R::r, T::t, A::b);
+ if (a != 7 || r != 8 || t != 9 || b != 10)
+ __builtin_abort ();
+ R::r = 6 * q;
+ #pragma omp barrier
+ #pragma omp single copyprivate (q2)
+ {
+ A::a = 5 * q;
+ T::t = 7 * q;
+ q2 = q;
+ }
+ A::b = 8 * q;
+ take (a, r, t, b);
+ #pragma omp barrier
+ if (a != 5 * q2 || r != 6 * q || t != 7 * q2 || b != 8 * q)
+ __builtin_abort ();
+ }
+ a = 1;
+ b = 2;
+ R::r = 3;
+ t = 4;
+ bool f = false;
+ #pragma omp parallel private (f)
+ {
+ f = false;
+ #pragma omp single
+ #pragma omp taskloop default(none) firstprivate (r, A::a, f) shared (T::t, b)
+ for (int i = 0; i < 30; i++)
+ {
+ int q = omp_get_thread_num ();
+ int tv, bv;
+ #pragma omp atomic read
+ tv = t;
+ #pragma omp atomic read
+ bv = A::b;
+ if (i == 16)
+ {
+ if (bv != 2 || tv != 4)
+ __builtin_abort ();
+ }
+ else
+ {
+ if ((bv != 2 && bv != 8) || (tv != 4 && tv != 9))
+ __builtin_abort ();
+ }
+ if (!f)
+ {
+ if (A::a != 1 || R::r != 3)
+ __builtin_abort ();
+ }
+ else if (a != 7 * q || r != 9 * q)
+ __builtin_abort ();
+ take (a, r, t, b);
+ A::a = 7 * q;
+ R::r = 9 * q;
+ if (i == 16)
+ {
+ #pragma omp atomic write
+ A::b = 8;
+ #pragma omp atomic write
+ T::t = 9;
+ }
+ f = true;
+ }
+ }
+}
+
+int
+main ()
+{
+ A a;
+ a.m1 ();
+}
--- /dev/null
+// { dg-do run }
+
+#include <omp.h>
+
+int c, d, e;
+struct R { R () {}; ~R () {}; int r; };
+template <typename Q>
+struct T { T () : t(d) {}; virtual ~T () {}; Q t; };
+template <typename Q>
+struct A : public R, virtual public T<Q> { A () : b(c), a(e) {} Q a; int &b; void m1 (); };
+
+void
+take (int &a, int &b, int &c, int &d)
+{
+ asm volatile ("" : : "g" (&a), "g" (&b), "g" (&c), "g" (&d) : "memory");
+}
+
+template <typename Q>
+void
+A<Q>::m1 ()
+{
+ #pragma omp parallel private (a, T<Q>::t) shared (r, A::b) default(none)
+ {
+ int q = omp_get_thread_num (), q2;
+ a = q;
+ T<Q>::t = 3 * q;
+ #pragma omp single copyprivate (q2)
+ {
+ r = 2 * q;
+ b = 4 * q;
+ q2 = q;
+ }
+ take (a, r, T<Q>::t, b);
+ #pragma omp barrier
+ if (A::a != q || R::r != 2 * q2 || T<Q>::t != 3 * q || A::b != 4 * q2)
+ __builtin_abort ();
+ }
+ a = 7;
+ r = 8;
+ T<Q>::t = 9;
+ b = 10;
+ #pragma omp parallel shared (A::a) default (none) firstprivate (R::r, b) shared (T<Q>::t)
+ {
+ int q = omp_get_thread_num (), q2;
+ take (A::a, R::r, T<Q>::t, A::b);
+ if (a != 7 || r != 8 || T<Q>::t != 9 || b != 10)
+ __builtin_abort ();
+ R::r = 6 * q;
+ #pragma omp barrier
+ #pragma omp single copyprivate (q2)
+ {
+ A::a = 5 * q;
+ T<Q>::t = 7 * q;
+ q2 = q;
+ }
+ A::b = 8 * q;
+ take (a, r, T<Q>::t, b);
+ #pragma omp barrier
+ if (a != 5 * q2 || r != 6 * q || T<Q>::t != 7 * q2 || b != 8 * q)
+ __builtin_abort ();
+ }
+ a = 1;
+ b = 2;
+ R::r = 3;
+ T<Q>::t = 4;
+ bool f = false;
+ #pragma omp parallel private (f)
+ {
+ f = false;
+ #pragma omp single
+ #pragma omp taskloop default(none) firstprivate (r, A::a, f) shared (T<Q>::t, b)
+ for (int i = 0; i < 30; i++)
+ {
+ int q = omp_get_thread_num ();
+ int tv, bv;
+ #pragma omp atomic read
+ tv = T<Q>::t;
+ #pragma omp atomic read
+ bv = A::b;
+ if (i == 16)
+ {
+ if (bv != 2 || tv != 4)
+ __builtin_abort ();
+ }
+ else
+ {
+ if ((bv != 2 && bv != 8) || (tv != 4 && tv != 9))
+ __builtin_abort ();
+ }
+ if (!f)
+ {
+ if (A::a != 1 || R::r != 3)
+ __builtin_abort ();
+ }
+ else if (a != 7 * q || r != 9 * q)
+ __builtin_abort ();
+ take (a, r, T<Q>::t, b);
+ A::a = 7 * q;
+ R::r = 9 * q;
+ if (i == 16)
+ {
+ #pragma omp atomic write
+ A::b = 8;
+ #pragma omp atomic write
+ T<Q>::t = 9;
+ }
+ f = true;
+ }
+ }
+}
+
+int
+main ()
+{
+ A<int> a;
+ a.m1 ();
+ A<int &> b;
+ b.m1 ();
+}
--- /dev/null
+// { dg-do run }
+
+#include "../libgomp.c/monotonic-1.c"
--- /dev/null
+// { dg-do run }
+
+#include "../libgomp.c/monotonic-2.c"
--- /dev/null
+// { dg-do run }
+
+#include "../libgomp.c/nonmonotonic-1.c"
--- /dev/null
+// { dg-do run }
+
+#include "../libgomp.c/nonmonotonic-2.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-3.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-4.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-5.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-6.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-7.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-8.c"
--- /dev/null
+// PR middle-end/66199
+// { dg-do run }
+
+#include "../libgomp.c/pr66199-9.c"
--- /dev/null
+// { dg-do run { xfail *-*-* } }
+
+char z[10] = { 0 };
+
+__attribute__((noinline, noclone)) void
+foo (int (*&x)[3][2], int *y, long (&w)[1][2], int s, int t)
+{
+ unsigned long long a[9] = {};
+ short b[5] = {};
+ #pragma omp parallel for reduction(+:x[-1:2][:][0:2], z[t + 2:4]) \
+ reduction(*:y[-s:3]) reduction(|:a[s + 3:4]) \
+ reduction(&:w[s + 1:][t:2]) reduction(max:b[2:])
+ for (int i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1] += i;
+ if ((i & 15) == 1)
+ y[1] *= 3;
+ if ((i & 31) == 2)
+ y[2] *= 7;
+ if ((i & 63) == 3)
+ y[3] *= 17;
+ z[i / 32 + 2] += (i & 3);
+ if (i < 4)
+ z[i + 2] += i;
+ a[i / 32 + 2] |= 1ULL << (i & 30);
+ w[0][i & 1] &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (int i = 0; i < 9; i++)
+ if (a[i] != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (b[0] != 0 || b[1] != 0 || b[2] != 22 || b[3] != 84 || b[4] != 127)
+ __builtin_abort ();
+}
+
+int a3[4][3][2];
+int (*p3)[3][2] = &a3[2];
+int y3[5] = { 0, 1, 1, 1, 0 };
+long w3[1][2] = { ~0L, ~0L };
+short bb[5];
+
+struct S
+{
+ int (*&x)[3][2];
+ int *y;
+ long (&w)[1][2];
+ char z[10];
+ short (&b)[5];
+ unsigned long long a[9];
+ S() : x(p3), y(y3), w(w3), z(), a(), b(bb) {}
+ __attribute__((noinline, noclone)) void foo (int s, int t);
+};
+
+void
+S::foo (int s, int t)
+{
+ #pragma omp parallel for reduction(+:x[-1:2][:][0:2], z[t + 2:4]) \
+ reduction(*:y[-s:3]) reduction(|:a[s + 3:4]) \
+ reduction(&:w[s + 1:][t:2]) reduction(max:b[2:])
+ for (int i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1] += i;
+ if ((i & 15) == 1)
+ y[1] *= 3;
+ if ((i & 31) == 2)
+ y[2] *= 7;
+ if ((i & 63) == 3)
+ y[3] *= 17;
+ z[i / 32 + 2] += (i & 3);
+ if (i < 4)
+ z[i + 2] += i;
+ a[i / 32 + 2] |= 1ULL << (i & 30);
+ w[0][i & 1] &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+}
+
+int
+main ()
+{
+ int a[4][3][2] = {};
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ int (*p)[3][2] = &a[2];
+ int y[5] = { 0, 1, 1, 1, 0 };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ long w[1][2] = { ~0L, ~0L };
+ foo (p, y, w, -1, 0);
+ if (__builtin_memcmp (a, a2, sizeof (a))
+ || __builtin_memcmp (y, y2, sizeof (y))
+ || __builtin_memcmp (z, z2, sizeof (z))
+ || w[0][0] != ~0x249249L
+ || w[0][1] != ~0x249249L)
+ __builtin_abort ();
+ S s;
+ s.foo (-1, 0);
+ for (int i = 0; i < 9; i++)
+ if (s.a[i] != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (__builtin_memcmp (a3, a2, sizeof (a3))
+ || __builtin_memcmp (y3, y2, sizeof (y3))
+ || __builtin_memcmp (s.z, z2, sizeof (s.z))
+ || w3[0][0] != ~0x249249L
+ || w3[0][1] != ~0x249249L)
+ __builtin_abort ();
+ if (bb[0] != 0 || bb[1] != 0 || bb[2] != 22 || bb[3] != 84 || bb[4] != 127)
+ __builtin_abort ();
+}
--- /dev/null
+// { dg-do run { xfail *-*-* } }
+
+template <typename T>
+struct A
+{
+ A () { t = 0; }
+ A (T x) { t = x; }
+ A (const A &x) { t = x.t; }
+ ~A () {}
+ T t;
+};
+template <typename T>
+struct M
+{
+ M () { t = 1; }
+ M (T x) { t = x; }
+ M (const M &x) { t = x.t; }
+ ~M () {}
+ T t;
+};
+template <typename T>
+struct B
+{
+ B () { t = ~(T) 0; }
+ B (T x) { t = x; }
+ B (const B &x) { t = x.t; }
+ ~B () {}
+ T t;
+};
+template <typename T>
+void
+add (T &x, T &y)
+{
+ x.t += y.t;
+}
+template <typename T>
+void
+zero (T &x)
+{
+ x.t = 0;
+}
+template <typename T>
+void
+orit (T *x, T *y)
+{
+ y->t |= x->t;
+}
+B<long> bb;
+#pragma omp declare reduction(+:A<int>:omp_out.t += omp_in.t)
+#pragma omp declare reduction(+:A<char>:add (omp_out, omp_in)) initializer(zero (omp_priv))
+#pragma omp declare reduction(*:M<int>:omp_out.t *= omp_in.t) initializer(omp_priv = 1)
+#pragma omp declare reduction(|:A<unsigned long long>:orit (&omp_in, &omp_out))
+#pragma omp declare reduction(&:B<long>:omp_out.t = omp_out.t & omp_in.t) initializer(orit (&omp_priv, &omp_orig))
+#pragma omp declare reduction(maxb:short:omp_out = omp_in > omp_out ? omp_in : omp_out) initializer(omp_priv = -6)
+
+A<char> z[10];
+
+template <int N>
+__attribute__((noinline, noclone)) void
+foo (A<int> (*&x)[3][N], M<int> *y, B<long> (&w)[1][N], int p1, long p2, long p3, int p4,
+ int p5, long p6, short p7, int s, int t)
+{
+ A<unsigned long long> a[p7 + 4];
+ short bb[p7];
+ short (&b)[p7] = bb;
+ for (int i = 0; i < p7; i++)
+ bb[i] = -6;
+ #pragma omp parallel for reduction(+:x[-1:p1 + 1][:p2 + N - 2], z[t + N:p3]) \
+ reduction(*:y[-s:p4]) reduction(|:a[s + 3:p5 - N + 2]) \
+ reduction(&:w[s + 1:p6 - 3 + N][t:p6]) reduction(maxb:b[N:])
+ for (int i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1].t += i;
+ if ((i & 15) == 1)
+ y[1].t *= 3;
+ if ((i & 31) == N)
+ y[2].t *= 7;
+ if ((i & 63) == 3)
+ y[N + 1].t *= 17;
+ z[i / 32 + 2].t += (i & 3);
+ if (i < 4)
+ z[i + N].t += i;
+ a[i / 32 + 2].t |= 1ULL << (i & 30);
+ w[0][i & 1].t &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[N])
+ b[N] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (int i = 0; i < 9; i++)
+ if (a[i].t != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (bb[0] != -6 || bb[1] != -6 || bb[N] != 22 || bb[3] != 84 || bb[4] != 127)
+ __builtin_abort ();
+}
+
+A<int> a3[4][3][2];
+A<int> (*p3)[3][2] = &a3[2];
+M<int> y3[5] = { 0, 1, 1, 1, 0 };
+B<long> w3[1][2];
+
+template <int N>
+struct S
+{
+ A<int> (*&x)[3][N];
+ M<int> *y;
+ B<long> (&w)[1][N];
+ A<char> z[10];
+ short b[5];
+ A<unsigned long long> a[9];
+ S() : x(p3), y(y3), w(w3), z(), a(), b() {}
+ __attribute__((noinline, noclone)) void foo (int, long, long, int, int, long, short, int, int);
+};
+
+template <int N>
+void
+S<N>::foo (int p1, long p2, long p3, int p4, int p5, long p6, short p7, int s, int t)
+{
+ #pragma omp parallel for reduction(+:x[-1:p1 + 1][:p2][0:N], z[t + N:p3 + N - 2]) \
+ reduction(*:y[-s:p4]) reduction(|:a[s + 3:p5]) \
+ reduction(&:w[s + 1:p6 - 3 + N][t:p6]) reduction(maxb:b[N:])
+ for (int i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1].t += i;
+ if ((i & 15) == 1)
+ y[1].t *= 3;
+ if ((i & 31) == N)
+ y[2].t *= 7;
+ if ((i & 63) == 3)
+ y[N + 1].t *= 17;
+ z[i / 32 + 2].t += (i & 3);
+ if (i < 4)
+ z[i + N].t += i;
+ a[i / 32 + 2].t |= 1ULL << (i & 30);
+ w[0][i & 1].t &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[N])
+ b[N] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+}
+
+int
+main ()
+{
+ A<int> a[4][3][2];
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ A<int> (*p)[3][2] = &a[2];
+ M<int> y[5] = { 0, 1, 1, 1, 0 };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ B<long> w[1][2];
+ foo<2> (p, y, w, 1, 3L, 4L, 3, 4, 2L, 5, -1, 0);
+ for (int i = 0; i < 4; i++)
+ for (int j = 0; j < 3; j++)
+ for (int k = 0; k < 2; k++)
+ if (a[i][j][k].t != a2[i][j][k])
+ __builtin_abort ();
+ for (int i = 0; i < 5; i++)
+ if (y[i].t != y2[i])
+ __builtin_abort ();
+ for (int i = 0; i < 10; i++)
+ if (z[i].t != z2[i])
+ __builtin_abort ();
+ if (w[0][0].t != ~0x249249L || w[0][1].t != ~0x249249L)
+ __builtin_abort ();
+ S<2> s;
+ s.foo (1, 3L, 4L, 3, 4, 2L, 5, -1, 0);
+ for (int i = 0; i < 9; i++)
+ if (s.a[i].t != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ for (int i = 0; i < 4; i++)
+ for (int j = 0; j < 3; j++)
+ for (int k = 0; k < 2; k++)
+ if (a3[i][j][k].t != a2[i][j][k])
+ __builtin_abort ();
+ for (int i = 0; i < 5; i++)
+ if (y3[i].t != y2[i])
+ __builtin_abort ();
+ for (int i = 0; i < 10; i++)
+ if (s.z[i].t != z2[i])
+ __builtin_abort ();
+ if (w3[0][0].t != ~0x249249L || w3[0][1].t != ~0x249249L)
+ __builtin_abort ();
+ if (s.b[0] != 0 || s.b[1] != 0 || s.b[2] != 22
+ || s.b[3] != 84 || s.b[4] != 127)
+ __builtin_abort ();
+}
--- /dev/null
+extern "C" void abort (void);
+
+int g;
+#pragma omp declare target (g)
+
+#pragma omp declare target
+int
+foo (void)
+{
+ static int s;
+ return ++s + g;
+}
+#pragma omp end declare target
+
+int
+bar (void)
+{
+ static int s;
+ #pragma omp declare target to (s)
+ return ++s;
+}
+#pragma omp declare target (bar)
+
+int
+main ()
+{
+ int r;
+ #pragma omp target map(from:r)
+ {
+ r = (foo () == 1) + (bar () == 1);
+ r += (foo () == 2) + (bar () == 2);
+ }
+ if (r != 4)
+ abort ();
+ return 0;
+}
--- /dev/null
+extern "C" void abort ();
+int x;
+
+__attribute__((noinline, noclone)) void
+foo (int &a, int (&b)[10], short &c, long (&d)[5], int n)
+{
+ int err;
+ int &t = x;
+ int y[n + 1];
+ int (&z)[n + 1] = y;
+ for (int i = 0; i < n + 1; i++)
+ z[i] = i + 27;
+ #pragma omp target enter data map (to: z, c) map (alloc: b, t)
+ #pragma omp target update to (b, t)
+ #pragma omp target map (tofrom: a, d) map (from: b, c) map (alloc: t, z) map (from: err)
+ {
+ err = a++ != 7;
+ for (int i = 0; i < 10; i++)
+ {
+ err |= b[i] != 10 - i;
+ b[i] = i - 16;
+ if (i >= 6) continue;
+ err |= z[i] != i + 27;
+ z[i] = 2 * i + 9;
+ if (i == 5) continue;
+ err |= d[i] != 12L + i;
+ d[i] = i + 7;
+ }
+ err |= c != 25;
+ c = 142;
+ err |= t != 8;
+ t = 19;
+ }
+ if (err) abort ();
+ #pragma omp target update from (z, c)
+ #pragma omp target exit data map (from: b, t) map (release: z, c)
+ if (a != 8 || c != 142 || t != 19)
+ abort ();
+ a = 29;
+ c = 149;
+ t = 15;
+ for (int i = 0; i < 10; i++)
+ {
+ if (b[i] != i - 16) abort ();
+ b[i] = i ^ 1;
+ if (i >= 6) continue;
+ if (z[i] != 2 * i + 9) abort ();
+ z[i]++;
+ if (i == 5) continue;
+ if (d[i] != i + 7) abort ();
+ d[i] = 7 - i;
+ }
+ #pragma omp target defaultmap(tofrom: scalar)
+ {
+ err = a++ != 29;
+ for (int i = 0; i < 10; i++)
+ {
+ err |= b[i] != i ^ 1;
+ b[i] = i + 5;
+ if (i >= 6) continue;
+ err |= z[i] != 2 * i + 10;
+ z[i] = 9 - 3 * i;
+ if (i == 5) continue;
+ err |= d[i] != 7L - i;
+ d[i] = i;
+ }
+ err |= c != 149;
+ c = -2;
+ err |= t != 15;
+ t = 155;
+ }
+ if (err || a != 30 || c != -2 || t != 155)
+ abort ();
+ for (int i = 0; i < 10; i++)
+ {
+ if (b[i] != i + 5) abort ();
+ if (i >= 6) continue;
+ if (z[i] != 9 - 3 * i) abort ();
+ z[i]++;
+ if (i == 5) continue;
+ if (d[i] != i) abort ();
+ }
+ #pragma omp target data map (alloc: z)
+ {
+ #pragma omp target update to (z)
+ #pragma omp target map(from: err)
+ {
+ err = 0;
+ for (int i = 0; i < 6; i++)
+ if (z[i] != 10 - 3 * i) err = 1;
+ else z[i] = i;
+ }
+ if (err) abort ();
+ #pragma omp target update from (z)
+ }
+ for (int i = 0; i < 6; i++)
+ if (z[i] != i)
+ abort ();
+}
+
+int
+main ()
+{
+ int a = 7;
+ int b[10] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
+ short c = 25;
+ long d[5] = { 12, 13, 14, 15, 16 };
+ x = 8;
+ foo (a, b, c, d, 5);
+}
--- /dev/null
+#include <omp.h>
+#include <stdlib.h>
+
+struct S { char p[64]; int a; int b[2]; long c[4]; int *d; unsigned char &e; char (&f)[2]; short (&g)[4]; int *&h; char q[64]; };
+
+__attribute__((noinline, noclone)) void
+foo (S s)
+{
+ int d = omp_get_default_device ();
+ int id = omp_get_initial_device ();
+ int sep = 1;
+
+ if (d < 0 || d >= omp_get_num_devices ())
+ d = id;
+
+ int err;
+ #pragma omp target map(tofrom: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(to: sep) map(from: err)
+ {
+ err = s.a != 11 || s.b[0] != 12 || s.b[1] != 13;
+ err |= s.c[1] != 15 || s.c[2] != 16 || s.d[-2] != 18 || s.d[-1] != 19 || s.d[0] != 20;
+ err |= s.e != 21 || s.f[0] != 22 || s.f[1] != 23 || s.g[1] != 25 || s.g[2] != 26;
+ err |= s.h[2] != 31 || s.h[3] != 32 || s.h[4] != 33;
+ s.a = 35; s.b[0] = 36; s.b[1] = 37;
+ s.c[1] = 38; s.c[2] = 39; s.d[-2] = 40; s.d[-1] = 41; s.d[0] = 42;
+ s.e = 43; s.f[0] = 44; s.f[1] = 45; s.g[1] = 46; s.g[2] = 47;
+ s.h[2] = 48; s.h[3] = 49; s.h[4] = 50;
+ sep = 0;
+ }
+ if (err) abort ();
+ err = s.a != 35 || s.b[0] != 36 || s.b[1] != 37;
+ err |= s.c[1] != 38 || s.c[2] != 39 || s.d[-2] != 40 || s.d[-1] != 41 || s.d[0] != 42;
+ err |= s.e != 43 || s.f[0] != 44 || s.f[1] != 45 || s.g[1] != 46 || s.g[2] != 47;
+ err |= s.h[2] != 48 || s.h[3] != 49 || s.h[4] != 50;
+ if (err) abort ();
+ s.a = 50; s.b[0] = 49; s.b[1] = 48;
+ s.c[1] = 47; s.c[2] = 46; s.d[-2] = 45; s.d[-1] = 44; s.d[0] = 43;
+ s.e = 42; s.f[0] = 41; s.f[1] = 40; s.g[1] = 39; s.g[2] = 38;
+ s.h[2] = 37; s.h[3] = 36; s.h[4] = 35;
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ #pragma omp target data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ {
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target update to(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 50 || s.b[0] != 49 || s.b[1] != 48;
+ err |= s.c[1] != 47 || s.c[2] != 46 || s.d[-2] != 45 || s.d[-1] != 44 || s.d[0] != 43;
+ err |= s.e != 42 || s.f[0] != 41 || s.f[1] != 40 || s.g[1] != 39 || s.g[2] != 38;
+ err |= s.h[2] != 37 || s.h[3] != 36 || s.h[4] != 35;
+ s.a = 17; s.b[0] = 18; s.b[1] = 19;
+ s.c[1] = 20; s.c[2] = 21; s.d[-2] = 22; s.d[-1] = 23; s.d[0] = 24;
+ s.e = 25; s.f[0] = 26; s.f[1] = 27; s.g[1] = 28; s.g[2] = 29;
+ s.h[2] = 30; s.h[3] = 31; s.h[4] = 32;
+ }
+ #pragma omp target update from(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ }
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 17 || s.b[0] != 18 || s.b[1] != 19;
+ err |= s.c[1] != 20 || s.c[2] != 21 || s.d[-2] != 22 || s.d[-1] != 23 || s.d[0] != 24;
+ err |= s.e != 25 || s.f[0] != 26 || s.f[1] != 27 || s.g[1] != 28 || s.g[2] != 29;
+ err |= s.h[2] != 30 || s.h[3] != 31 || s.h[4] != 32;
+ if (err) abort ();
+ s.a = 33; s.b[0] = 34; s.b[1] = 35;
+ s.c[1] = 36; s.c[2] = 37; s.d[-2] = 38; s.d[-1] = 39; s.d[0] = 40;
+ s.e = 41; s.f[0] = 42; s.f[1] = 43; s.g[1] = 44; s.g[2] = 45;
+ s.h[2] = 46; s.h[3] = 47; s.h[4] = 48;
+ #pragma omp target enter data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target enter data map(always, to: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 33 || s.b[0] != 34 || s.b[1] != 35;
+ err |= s.c[1] != 36 || s.c[2] != 37 || s.d[-2] != 38 || s.d[-1] != 39 || s.d[0] != 40;
+ err |= s.e != 41 || s.f[0] != 42 || s.f[1] != 43 || s.g[1] != 44 || s.g[2] != 45;
+ err |= s.h[2] != 46 || s.h[3] != 47 || s.h[4] != 48;
+ s.a = 49; s.b[0] = 48; s.b[1] = 47;
+ s.c[1] = 46; s.c[2] = 45; s.d[-2] = 44; s.d[-1] = 43; s.d[0] = 42;
+ s.e = 31; s.f[0] = 40; s.f[1] = 39; s.g[1] = 38; s.g[2] = 37;
+ s.h[2] = 36; s.h[3] = 35; s.h[4] = 34;
+ }
+ #pragma omp target exit data map(always, from: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target exit data map(release: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 49 || s.b[0] != 48 || s.b[1] != 47;
+ err |= s.c[1] != 46 || s.c[2] != 45 || s.d[-2] != 44 || s.d[-1] != 43 || s.d[0] != 42;
+ err |= s.e != 31 || s.f[0] != 40 || s.f[1] != 39 || s.g[1] != 38 || s.g[2] != 37;
+ err |= s.h[2] != 36 || s.h[3] != 35 || s.h[4] != 34;
+ if (err) abort ();
+}
+
+int
+main ()
+{
+ int d[3] = { 18, 19, 20 };
+ unsigned char e = 21;
+ char f[2] = { 22, 23 };
+ short g[4] = { 24, 25, 26, 27 };
+ int hb[7] = { 28, 29, 30, 31, 32, 33, 34 };
+ int *h = hb + 1;
+ S s = { {}, 11, { 12, 13 }, { 14, 15, 16, 17 }, d + 2, e, f, g, h, {} };
+ foo (s);
+}
--- /dev/null
+#include <omp.h>
+#include <stdlib.h>
+
+template <typename C, typename I, typename L, typename UC, typename SH>
+struct S { C p[64]; I a; I b[2]; L c[4]; I *d; UC &e; C (&f)[2]; SH (&g)[4]; I *&h; C q[64]; };
+
+template <typename C, typename I, typename L, typename UC, typename SH>
+__attribute__((noinline, noclone)) void
+foo (S<C, I, L, UC, SH> s)
+{
+ int d = omp_get_default_device ();
+ int id = omp_get_initial_device ();
+ int sep = 1;
+
+ if (d < 0 || d >= omp_get_num_devices ())
+ d = id;
+
+ int err;
+ #pragma omp target map(tofrom: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(to: sep) map(from: err)
+ {
+ err = s.a != 11 || s.b[0] != 12 || s.b[1] != 13;
+ err |= s.c[1] != 15 || s.c[2] != 16 || s.d[-2] != 18 || s.d[-1] != 19 || s.d[0] != 20;
+ err |= s.e != 21 || s.f[0] != 22 || s.f[1] != 23 || s.g[1] != 25 || s.g[2] != 26;
+ err |= s.h[2] != 31 || s.h[3] != 32 || s.h[4] != 33;
+ s.a = 35; s.b[0] = 36; s.b[1] = 37;
+ s.c[1] = 38; s.c[2] = 39; s.d[-2] = 40; s.d[-1] = 41; s.d[0] = 42;
+ s.e = 43; s.f[0] = 44; s.f[1] = 45; s.g[1] = 46; s.g[2] = 47;
+ s.h[2] = 48; s.h[3] = 49; s.h[4] = 50;
+ sep = 0;
+ }
+ if (err) abort ();
+ err = s.a != 35 || s.b[0] != 36 || s.b[1] != 37;
+ err |= s.c[1] != 38 || s.c[2] != 39 || s.d[-2] != 40 || s.d[-1] != 41 || s.d[0] != 42;
+ err |= s.e != 43 || s.f[0] != 44 || s.f[1] != 45 || s.g[1] != 46 || s.g[2] != 47;
+ err |= s.h[2] != 48 || s.h[3] != 49 || s.h[4] != 50;
+ if (err) abort ();
+ s.a = 50; s.b[0] = 49; s.b[1] = 48;
+ s.c[1] = 47; s.c[2] = 46; s.d[-2] = 45; s.d[-1] = 44; s.d[0] = 43;
+ s.e = 42; s.f[0] = 41; s.f[1] = 40; s.g[1] = 39; s.g[2] = 38;
+ s.h[2] = 37; s.h[3] = 36; s.h[4] = 35;
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ #pragma omp target data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ {
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target update to(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 50 || s.b[0] != 49 || s.b[1] != 48;
+ err |= s.c[1] != 47 || s.c[2] != 46 || s.d[-2] != 45 || s.d[-1] != 44 || s.d[0] != 43;
+ err |= s.e != 42 || s.f[0] != 41 || s.f[1] != 40 || s.g[1] != 39 || s.g[2] != 38;
+ err |= s.h[2] != 37 || s.h[3] != 36 || s.h[4] != 35;
+ s.a = 17; s.b[0] = 18; s.b[1] = 19;
+ s.c[1] = 20; s.c[2] = 21; s.d[-2] = 22; s.d[-1] = 23; s.d[0] = 24;
+ s.e = 25; s.f[0] = 26; s.f[1] = 27; s.g[1] = 28; s.g[2] = 29;
+ s.h[2] = 30; s.h[3] = 31; s.h[4] = 32;
+ }
+ #pragma omp target update from(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ }
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 17 || s.b[0] != 18 || s.b[1] != 19;
+ err |= s.c[1] != 20 || s.c[2] != 21 || s.d[-2] != 22 || s.d[-1] != 23 || s.d[0] != 24;
+ err |= s.e != 25 || s.f[0] != 26 || s.f[1] != 27 || s.g[1] != 28 || s.g[2] != 29;
+ err |= s.h[2] != 30 || s.h[3] != 31 || s.h[4] != 32;
+ if (err) abort ();
+ s.a = 33; s.b[0] = 34; s.b[1] = 35;
+ s.c[1] = 36; s.c[2] = 37; s.d[-2] = 38; s.d[-1] = 39; s.d[0] = 40;
+ s.e = 41; s.f[0] = 42; s.f[1] = 43; s.g[1] = 44; s.g[2] = 45;
+ s.h[2] = 46; s.h[3] = 47; s.h[4] = 48;
+ #pragma omp target enter data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target enter data map(always, to: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 33 || s.b[0] != 34 || s.b[1] != 35;
+ err |= s.c[1] != 36 || s.c[2] != 37 || s.d[-2] != 38 || s.d[-1] != 39 || s.d[0] != 40;
+ err |= s.e != 41 || s.f[0] != 42 || s.f[1] != 43 || s.g[1] != 44 || s.g[2] != 45;
+ err |= s.h[2] != 46 || s.h[3] != 47 || s.h[4] != 48;
+ s.a = 49; s.b[0] = 48; s.b[1] = 47;
+ s.c[1] = 46; s.c[2] = 45; s.d[-2] = 44; s.d[-1] = 43; s.d[0] = 42;
+ s.e = 31; s.f[0] = 40; s.f[1] = 39; s.g[1] = 38; s.g[2] = 37;
+ s.h[2] = 36; s.h[3] = 35; s.h[4] = 34;
+ }
+ #pragma omp target exit data map(always, from: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target exit data map(release: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 49 || s.b[0] != 48 || s.b[1] != 47;
+ err |= s.c[1] != 46 || s.c[2] != 45 || s.d[-2] != 44 || s.d[-1] != 43 || s.d[0] != 42;
+ err |= s.e != 31 || s.f[0] != 40 || s.f[1] != 39 || s.g[1] != 38 || s.g[2] != 37;
+ err |= s.h[2] != 36 || s.h[3] != 35 || s.h[4] != 34;
+ if (err) abort ();
+}
+
+int
+main ()
+{
+ int d[3] = { 18, 19, 20 };
+ unsigned char e = 21;
+ char f[2] = { 22, 23 };
+ short g[4] = { 24, 25, 26, 27 };
+ int hb[7] = { 28, 29, 30, 31, 32, 33, 34 };
+ int *h = hb + 1;
+ S<char, int, long, unsigned char, short> s = { {}, 11, { 12, 13 }, { 14, 15, 16, 17 }, d + 2, e, f, g, h, {} };
+ foo (s);
+}
--- /dev/null
+#include <omp.h>
+#include <stdlib.h>
+
+template <typename C, typename I, typename L, typename UCR, typename CAR, typename SH, typename IPR>
+struct S { C p[64]; I a; I b[2]; L c[4]; I *d; UCR e; CAR f; SH g; IPR h; C q[64]; };
+
+template <typename C, typename I, typename L, typename UCR, typename CAR, typename SH, typename IPR>
+__attribute__((noinline, noclone)) void
+foo (S<C, I, L, UCR, CAR, SH, IPR> s)
+{
+ int d = omp_get_default_device ();
+ int id = omp_get_initial_device ();
+ int sep = 1;
+
+ if (d < 0 || d >= omp_get_num_devices ())
+ d = id;
+
+ int err;
+ #pragma omp target map(tofrom: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(to: sep) map(from: err)
+ {
+ err = s.a != 11 || s.b[0] != 12 || s.b[1] != 13;
+ err |= s.c[1] != 15 || s.c[2] != 16 || s.d[-2] != 18 || s.d[-1] != 19 || s.d[0] != 20;
+ err |= s.e != 21 || s.f[0] != 22 || s.f[1] != 23 || s.g[1] != 25 || s.g[2] != 26;
+ err |= s.h[2] != 31 || s.h[3] != 32 || s.h[4] != 33;
+ s.a = 35; s.b[0] = 36; s.b[1] = 37;
+ s.c[1] = 38; s.c[2] = 39; s.d[-2] = 40; s.d[-1] = 41; s.d[0] = 42;
+ s.e = 43; s.f[0] = 44; s.f[1] = 45; s.g[1] = 46; s.g[2] = 47;
+ s.h[2] = 48; s.h[3] = 49; s.h[4] = 50;
+ sep = 0;
+ }
+ if (err) abort ();
+ err = s.a != 35 || s.b[0] != 36 || s.b[1] != 37;
+ err |= s.c[1] != 38 || s.c[2] != 39 || s.d[-2] != 40 || s.d[-1] != 41 || s.d[0] != 42;
+ err |= s.e != 43 || s.f[0] != 44 || s.f[1] != 45 || s.g[1] != 46 || s.g[2] != 47;
+ err |= s.h[2] != 48 || s.h[3] != 49 || s.h[4] != 50;
+ if (err) abort ();
+ s.a = 50; s.b[0] = 49; s.b[1] = 48;
+ s.c[1] = 47; s.c[2] = 46; s.d[-2] = 45; s.d[-1] = 44; s.d[0] = 43;
+ s.e = 42; s.f[0] = 41; s.f[1] = 40; s.g[1] = 39; s.g[2] = 38;
+ s.h[2] = 37; s.h[3] = 36; s.h[4] = 35;
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ #pragma omp target data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ {
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target update to(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 50 || s.b[0] != 49 || s.b[1] != 48;
+ err |= s.c[1] != 47 || s.c[2] != 46 || s.d[-2] != 45 || s.d[-1] != 44 || s.d[0] != 43;
+ err |= s.e != 42 || s.f[0] != 41 || s.f[1] != 40 || s.g[1] != 39 || s.g[2] != 38;
+ err |= s.h[2] != 37 || s.h[3] != 36 || s.h[4] != 35;
+ s.a = 17; s.b[0] = 18; s.b[1] = 19;
+ s.c[1] = 20; s.c[2] = 21; s.d[-2] = 22; s.d[-1] = 23; s.d[0] = 24;
+ s.e = 25; s.f[0] = 26; s.f[1] = 27; s.g[1] = 28; s.g[2] = 29;
+ s.h[2] = 30; s.h[3] = 31; s.h[4] = 32;
+ }
+ #pragma omp target update from(s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ }
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 17 || s.b[0] != 18 || s.b[1] != 19;
+ err |= s.c[1] != 20 || s.c[2] != 21 || s.d[-2] != 22 || s.d[-1] != 23 || s.d[0] != 24;
+ err |= s.e != 25 || s.f[0] != 26 || s.f[1] != 27 || s.g[1] != 28 || s.g[2] != 29;
+ err |= s.h[2] != 30 || s.h[3] != 31 || s.h[4] != 32;
+ if (err) abort ();
+ s.a = 33; s.b[0] = 34; s.b[1] = 35;
+ s.c[1] = 36; s.c[2] = 37; s.d[-2] = 38; s.d[-1] = 39; s.d[0] = 40;
+ s.e = 41; s.f[0] = 42; s.f[1] = 43; s.g[1] = 44; s.g[2] = 45;
+ s.h[2] = 46; s.h[3] = 47; s.h[4] = 48;
+ #pragma omp target enter data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target enter data map(always, to: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3]) map(from: err)
+ {
+ err = s.a != 33 || s.b[0] != 34 || s.b[1] != 35;
+ err |= s.c[1] != 36 || s.c[2] != 37 || s.d[-2] != 38 || s.d[-1] != 39 || s.d[0] != 40;
+ err |= s.e != 41 || s.f[0] != 42 || s.f[1] != 43 || s.g[1] != 44 || s.g[2] != 45;
+ err |= s.h[2] != 46 || s.h[3] != 47 || s.h[4] != 48;
+ s.a = 49; s.b[0] = 48; s.b[1] = 47;
+ s.c[1] = 46; s.c[2] = 45; s.d[-2] = 44; s.d[-1] = 43; s.d[0] = 42;
+ s.e = 31; s.f[0] = 40; s.f[1] = 39; s.g[1] = 38; s.g[2] = 37;
+ s.h[2] = 36; s.h[3] = 35; s.h[4] = 34;
+ }
+ #pragma omp target exit data map(always, from: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d)
+ || !omp_target_is_present (&s.e, d)
+ || !omp_target_is_present (s.f, d)
+ || !omp_target_is_present (&s.g[1], d)
+ || !omp_target_is_present (&s.h, d)
+ || !omp_target_is_present (&s.h[2], d))
+ abort ();
+ #pragma omp target exit data map(release: s.a, s.b, s.c[1:2], s.d[-2:3], s.e, s.f, s.g[1:2], s.h[2:3])
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)
+ || omp_target_is_present (&s.e, d)
+ || omp_target_is_present (s.f, d)
+ || omp_target_is_present (&s.g[1], d)
+ || omp_target_is_present (&s.h, d)
+ || omp_target_is_present (&s.h[2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 49 || s.b[0] != 48 || s.b[1] != 47;
+ err |= s.c[1] != 46 || s.c[2] != 45 || s.d[-2] != 44 || s.d[-1] != 43 || s.d[0] != 42;
+ err |= s.e != 31 || s.f[0] != 40 || s.f[1] != 39 || s.g[1] != 38 || s.g[2] != 37;
+ err |= s.h[2] != 36 || s.h[3] != 35 || s.h[4] != 34;
+ if (err) abort ();
+}
+
+int
+main ()
+{
+ int d[3] = { 18, 19, 20 };
+ unsigned char e = 21;
+ char f[2] = { 22, 23 };
+ short g[4] = { 24, 25, 26, 27 };
+ int hb[7] = { 28, 29, 30, 31, 32, 33, 34 };
+ int *h = hb + 1;
+ typedef char (&CAR)[2];
+ typedef short (&SH)[4];
+ S<char, int, long, unsigned char &, CAR, SH, int *&> s
+ = { {}, 11, { 12, 13 }, { 14, 15, 16, 17 }, d + 2, e, f, g, h, {} };
+ foo (s);
+}
--- /dev/null
+extern "C" void abort ();
+
+__attribute__((noinline, noclone)) void
+foo (int *&p, int *&q, int *&r, int n, int m)
+{
+ int i, err, *s = r;
+ int sep = 1;
+ #pragma omp target map(to:sep)
+ sep = 0;
+ #pragma omp target data map(to:p[0:8])
+ {
+ /* For zero length array sections, p points to the start of an
+ already mapped range, q to the end of it (with nothing mapped
+ after it), and r does not point to a mapped range. */
+ #pragma omp target map(alloc:p[:0]) map(to:q[:0]) map(from:r[:0]) private(i) map(from:err) firstprivate (s)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* Implicit mapping of pointers behaves the same way. */
+ #pragma omp target private(i) map(from:err) firstprivate (s)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* And zero-length array sections whose length is not known at
+ compile time behave the same. */
+ #pragma omp target map(p[:n]) map(tofrom:q[:n]) map(alloc:r[:n]) private(i) map(from:err) firstprivate (s)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* Non-zero length array sections whose length is not known at
+ compile time behave differently. */
+ #pragma omp target map(p[:m]) map(tofrom:q[:m]) map(to:r[:m]) private(i) map(from:err)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (q[0] != 9 || r[0] != 10)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ #pragma omp target data map(to:q[0:1])
+ {
+ /* For zero length array sections, p points to the start of an
+ already mapped range, q points to the start of another one,
+ and r to the end of the second one. */
+ #pragma omp target map(to:p[:0]) map(from:q[:0]) map(tofrom:r[:0]) private(i) map(from:err)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* Implicit mapping of pointers behaves the same way. */
+ #pragma omp target private(i) map(from:err)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* And zero-length array sections whose length is not known at
+ compile time behave the same. */
+ #pragma omp target map(p[:n]) map(alloc:q[:n]) map(from:r[:n]) private(i) map(from:err)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ /* Non-zero length array sections whose length is not known at
+ compile time behave differently. */
+ #pragma omp target map(p[:m]) map(alloc:q[:m]) map(tofrom:r[:m]) private(i) map(from:err)
+ {
+ err = 0;
+ for (i = 0; i < 8; i++)
+ if (p[i] != i + 1)
+ err = 1;
+ if (q[0] != 9 || r[0] != 10)
+ err = 1;
+ }
+ if (err)
+ abort ();
+ }
+ }
+}
+
+int
+main ()
+{
+ int a[32], i;
+ for (i = 0; i < 32; i++)
+ a[i] = i;
+ int *p = a + 1, *q = a + 9, *r = a + 10;
+ foo (p, q, r, 0, 1);
+ return 0;
+}
--- /dev/null
+extern "C" void abort ();
+struct S { char a[64]; int (&r)[2]; char b[64]; };
+
+__attribute__((noinline, noclone)) void
+foo (S s, int (&t)[3], int z)
+{
+ int err, sep = 1;
+ // Test that implicit mapping of a reference to an array does NOT
+ // behave like a zero length array section. s.r can't be used
+ // implicitly, as that would mean mapping the whole of s implicitly,
+ // and dereferencing the references inside it is unspecified.
+ #pragma omp target map(from: err) map(to: sep)
+ {
+ err = t[0] != 1 || t[1] != 2 || t[2] != 3;
+ sep = 0;
+ }
+ if (err) abort ();
+ // But explicit zero length array section mapping does.
+ #pragma omp target map(from: err) map(tofrom: s.r[:0], t[:0])
+ {
+ if (sep)
+ err = s.r != (int *) 0 || t != (int *) 0;
+ else
+ err = t[0] != 1 || t[1] != 2 || t[2] != 3 || s.r[0] != 6 || s.r[1] != 7;
+ }
+ if (err) abort ();
+ // Similarly for a zero length array section whose length is unknown at compile time.
+ #pragma omp target map(from: err) map(tofrom: s.r[:z], t[:z])
+ {
+ if (sep)
+ err = s.r != (int *) 0 || t != (int *) 0;
+ else
+ err = t[0] != 1 || t[1] != 2 || t[2] != 3 || s.r[0] != 6 || s.r[1] != 7;
+ }
+ if (err) abort ();
+ #pragma omp target enter data map (to: s.r, t)
+ // But when already mapped, it binds to existing mappings.
+ #pragma omp target map(from: err) map(tofrom: s.r[:0], t[:0])
+ {
+ err = t[0] != 1 || t[1] != 2 || t[2] != 3 || s.r[0] != 6 || s.r[1] != 7;
+ sep = 0;
+ }
+ if (err) abort ();
+ #pragma omp target map(from: err) map(tofrom: s.r[:z], t[:z])
+ {
+ err = t[0] != 1 || t[1] != 2 || t[2] != 3 || s.r[0] != 6 || s.r[1] != 7;
+ sep = 0;
+ }
+ if (err) abort ();
+}
+
+int
+main ()
+{
+ int t[3] = { 1, 2, 3 };
+ int r[2] = { 6, 7 };
+ S s = { {}, r, {} };
+ foo (s, t, 0);
+}
--- /dev/null
+/* { dg-do run } */
+
+#ifndef MONOTONIC_TYPE
+#include <omp.h>
+#include <stdlib.h>
+#define MONOTONIC_TYPE int
+#define MONOTONIC_UNDEF -1
+#define MONOTONIC_END(n) n
+#endif
+
+int
+main ()
+{
+ MONOTONIC_TYPE i;
+ #pragma omp parallel
+ {
+ int cnt = omp_get_num_threads ();
+ int thr = omp_get_thread_num ();
+ MONOTONIC_TYPE l = MONOTONIC_UNDEF;
+ int c = 0;
+ int n = 0;
+ #pragma omp for nowait schedule(static, 5)
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if (cnt == 1)
+ abort ();
+ if (n != 5)
+ abort ();
+ n = 1;
+ c++;
+ }
+ if (n == 1)
+ {
+ if ((i % 5) != 0)
+ abort ();
+ if ((i / 5) % cnt != thr)
+ abort ();
+ }
+ l = i;
+ }
+ if (cnt == 1)
+ {
+ if (n != 73 || l != 73 - 1 || c != 1)
+ abort ();
+ }
+ else if (thr > 73 / 5)
+ {
+ if (l != MONOTONIC_UNDEF || c != 0 || n != 0)
+ abort ();
+ }
+ else if (thr == 73 / 5)
+ {
+ if (l != 73 - 1 || c != 1 || n != 73 % 5)
+ abort ();
+ }
+ else if (c == 0)
+ abort ();
+ else if (l == 73 - 1)
+ {
+ if (thr != (73 / 5) % cnt || n != 73 % 5)
+ abort ();
+ }
+ else if ((n % 5) != 0)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for schedule( monotonic: static, 7) nowait
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if (cnt == 1)
+ abort ();
+ if (n != 7)
+ abort ();
+ n = 1;
+ c++;
+ }
+ if (n == 1)
+ {
+ if ((i % 7) != 0)
+ abort ();
+ if ((i / 7) % cnt != thr)
+ abort ();
+ }
+ l = i;
+ }
+ if (cnt == 1)
+ {
+ if (n != 73 || l != 73 - 1 || c != 1)
+ abort ();
+ }
+ else if (thr > 73 / 7)
+ {
+ if (l != MONOTONIC_UNDEF || c != 0 || n != 0)
+ abort ();
+ }
+ else if (thr == 73 / 7)
+ {
+ if (l != 73 - 1 || c != 1 || n != 73 % 7)
+ abort ();
+ }
+ else if (c == 0)
+ abort ();
+ else if (l == 73 - 1)
+ {
+ if (thr != (73 / 7) % cnt || n != 73 % 7)
+ abort ();
+ }
+ else if ((n % 7) != 0)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for nowait schedule(static)
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ abort ();
+ l = i;
+ }
+ if (c > 1)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for nowait schedule(monotonic,simd:static)
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ abort ();
+ l = i;
+ }
+ if (c > 1)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for schedule(monotonic : dynamic, 5) nowait
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if ((n % 5) != 0 || n == 0)
+ abort ();
+ n = 1;
+ c++;
+ }
+ l = i;
+ }
+ if (l == 73 - 1)
+ {
+ if (n % 5 != 73 % 5)
+ abort ();
+ }
+ else if (l == MONOTONIC_UNDEF)
+ {
+ if (n != 0 || c != 0)
+ abort ();
+ }
+ else if ((n % 5) != 0 || n == 0)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for nowait schedule(dynamic, 7) ordered(1)
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if ((n % 7) != 0 || n == 0)
+ abort ();
+ n = 1;
+ c++;
+ }
+ #pragma omp ordered depend(source)
+ if (MONOTONIC_UNDEF > 0)
+ {
+ #pragma omp ordered depend(sink: i)
+ }
+ else
+ {
+ #pragma omp ordered depend(sink: i - 1)
+ }
+ l = i;
+ }
+ if (l == 73 - 1)
+ {
+ if (n % 7 != 73 % 7)
+ abort ();
+ }
+ else if (l == MONOTONIC_UNDEF)
+ {
+ if (n != 0 || c != 0)
+ abort ();
+ }
+ else if ((n % 7) != 0 || n == 0)
+ abort ();
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for schedule (monotonic :guided , 7) nowait
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if (n < 7)
+ abort ();
+ n = 1;
+ c++;
+ }
+ l = i;
+ }
+ l = MONOTONIC_UNDEF;
+ c = 0;
+ n = 0;
+ #pragma omp for nowait schedule(guided, 7) ordered
+ for (i = 0; i < MONOTONIC_END (73); i++)
+ {
+ if (l == MONOTONIC_UNDEF)
+ {
+ n = 1;
+ c++;
+ }
+ else if (l == i - 1)
+ n++;
+ else
+ {
+ if (l >= i)
+ abort ();
+ if (n < 7)
+ abort ();
+ n = 1;
+ c++;
+ }
+ #pragma omp ordered
+ l = i;
+ }
+ }
+ return 0;
+}
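
For reference, the doacross form used in the test above (ordered(1) together with
depend(source)/depend(sink)) can be reduced to a minimal stand-alone sketch; this
is illustrative only and not part of the patch:

extern void abort (void);

int a[100];

int
main ()
{
  int i;
  a[0] = 1;
  /* ordered(1) declares one level of cross-iteration dependences.
     Each iteration waits for the previous one (depend(sink: i - 1))
     before reading a[i - 1], and signals its own completion with
     depend(source).  A sink outside the iteration space is a no-op.  */
  #pragma omp parallel for ordered(1) schedule(dynamic)
  for (i = 1; i < 100; i++)
    {
      #pragma omp ordered depend(sink: i - 1)
      a[i] = a[i - 1] + 1;
      #pragma omp ordered depend(source)
    }
  for (i = 0; i < 100; i++)
    if (a[i] != i + 1)
      abort ();
  return 0;
}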
--- /dev/null
+/* { dg-do run } */
+
+#include <omp.h>
+#include <stdlib.h>
+#define MONOTONIC_TYPE unsigned long long
+#define MONOTONIC_UNDEF -1ULL
+#define MONOTONIC_END(n) n + v
+
+volatile int v;
+
+#include "monotonic-1.c"
--- /dev/null
+/* { dg-do run } */
+
+#ifndef NONMONOTONIC_TYPE
+#include <omp.h>
+#include <stdlib.h>
+#define NONMONOTONIC_TYPE int
+#define NONMONOTONIC_END(n) n
+#endif
+
+int a[73];
+
+int
+main ()
+{
+ NONMONOTONIC_TYPE i;
+ #pragma omp parallel for schedule(nonmonotonic: dynamic)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp parallel for schedule(nonmonotonic: dynamic, 5)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp parallel for schedule(nonmonotonic: guided)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp parallel for schedule(nonmonotonic: guided, 7)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp parallel
+ {
+ int cnt = omp_get_num_threads ();
+ int thr = omp_get_thread_num ();
+ if (thr < 73)
+ a[thr]++;
+ #pragma omp barrier
+ #pragma omp for schedule(nonmonotonic: dynamic)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp for schedule(nonmonotonic: dynamic, 7)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp for schedule(nonmonotonic: guided)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp for schedule(nonmonotonic: guided, 5)
+ for (i = 0; i < NONMONOTONIC_END (73); i++)
+ a[i]++;
+ #pragma omp single private (i)
+ for (i = 0; i < 73; i++)
+ if (a[i] != 8 + (i < cnt))
+ abort ();
+ }
+ return 0;
+}
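
For reference (not part of the patch), the practical difference between the modifiers
tested here is that monotonic guarantees each thread executes its chunks in increasing
iteration order, while nonmonotonic lets the runtime hand chunks out in any order, e.g.
via work stealing. A minimal sketch of the nonmonotonic form:

void
scale (double *x, int n)
{
  int i;
  /* Chunks of 16 iterations may be executed by a thread in any
     order, so the runtime is free to steal work between threads.  */
  #pragma omp parallel for schedule(nonmonotonic: dynamic, 16)
  for (i = 0; i < n; i++)
    x[i] *= 2.0;
}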
--- /dev/null
+/* { dg-do run } */
+
+#include <omp.h>
+#include <stdlib.h>
+#define NONMONOTONIC_TYPE unsigned long long
+#define NONMONOTONIC_END(n) n + v
+
+volatile int v;
+
+#include "nonmonotonic-1.c"
--- /dev/null
+/* PR middle-end/66199 */
+/* { dg-do run } */
+
+#pragma omp declare target
+int u[1024], v[1024], w[1024];
+#pragma omp end declare target
+
+__attribute__((noinline, noclone)) long
+f1 (long a, long b)
+{
+ long d;
+ #pragma omp target map(from: d)
+ #pragma omp teams distribute parallel for simd default(none) firstprivate (a, b) shared(u, v, w)
+ for (d = a; d < b; d++)
+ u[d] = v[d] + w[d];
+ return d;
+}
+
+__attribute__((noinline, noclone)) long
+f2 (long a, long b, long c)
+{
+ long d, e;
+ #pragma omp target map(from: d, e)
+ #pragma omp teams distribute parallel for simd default(none) firstprivate (a, b, c) shared(u, v, w) linear(d) lastprivate(e)
+ for (d = a; d < b; d++)
+ {
+ u[d] = v[d] + w[d];
+ e = c + d * 5;
+ }
+ return d + e;
+}
+
+__attribute__((noinline, noclone)) long
+f3 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams distribute parallel for simd default(none) firstprivate (a1, b1, a2, b2) shared(u, v, w) lastprivate(d1, d2) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+__attribute__((noinline, noclone)) long
+f4 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams distribute parallel for simd default(none) firstprivate (a1, b1, a2, b2) shared(u, v, w) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+int
+main ()
+{
+ if (f1 (0, 1024) != 1024
+ || f2 (0, 1024, 17) != 1024 + (17 + 5 * 1023)
+ || f3 (0, 32, 0, 32) != 64
+ || f4 (0, 32, 0, 32) != 64)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* PR middle-end/66199 */
+/* { dg-do run } */
+/* { dg-options "-O2 -fopenmp" } */
+
+#pragma omp declare target
+int u[1024], v[1024], w[1024];
+#pragma omp end declare target
+
+__attribute__((noinline, noclone)) long
+f2 (long a, long b, long c)
+{
+ long d, e;
+ #pragma omp target map(from: d, e)
+ #pragma omp teams distribute parallel for default(none) firstprivate (a, b, c) shared(u, v, w) lastprivate(d, e)
+ for (d = a; d < b; d++)
+ {
+ u[d] = v[d] + w[d];
+ e = c + d * 5;
+ }
+ return d + e;
+}
+
+__attribute__((noinline, noclone)) long
+f3 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams distribute parallel for default(none) firstprivate (a1, b1, a2, b2) shared(u, v, w) lastprivate(d1, d2) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+int
+main ()
+{
+ if (f2 (0, 1024, 17) != 1024 + (17 + 5 * 1023)
+ || f3 (0, 32, 0, 32) != 64)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* PR middle-end/66199 */
+/* { dg-do run } */
+
+#pragma omp declare target
+int u[1024], v[1024], w[1024];
+#pragma omp end declare target
+
+__attribute__((noinline, noclone)) long
+f1 (long a, long b)
+{
+ long d;
+ #pragma omp target map(from: d)
+ #pragma omp teams distribute simd default(none) firstprivate (a, b) shared(u, v, w)
+ for (d = a; d < b; d++)
+ u[d] = v[d] + w[d];
+ return d;
+}
+
+__attribute__((noinline, noclone)) long
+f2 (long a, long b, long c)
+{
+ long d, e;
+ #pragma omp target map(from: d, e)
+ #pragma omp teams distribute simd default(none) firstprivate (a, b, c) shared(u, v, w) linear(d) lastprivate(e)
+ for (d = a; d < b; d++)
+ {
+ u[d] = v[d] + w[d];
+ e = c + d * 5;
+ }
+ return d + e;
+}
+
+__attribute__((noinline, noclone)) long
+f3 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams distribute simd default(none) firstprivate (a1, b1, a2, b2) shared(u, v, w) lastprivate(d1, d2) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+__attribute__((noinline, noclone)) long
+f4 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams distribute simd default(none) firstprivate (a1, b1, a2, b2) shared(u, v, w) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+int
+main ()
+{
+ if (f1 (0, 1024) != 1024
+ || f2 (0, 1024, 17) != 1024 + (17 + 5 * 1023)
+ || f3 (0, 32, 0, 32) != 64
+ || f4 (0, 32, 0, 32) != 64)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* PR middle-end/66199 */
+/* { dg-do run } */
+
+#pragma omp declare target
+int u[1024], v[1024], w[1024];
+#pragma omp end declare target
+
+__attribute__((noinline, noclone)) long
+f1 (long a, long b)
+{
+ long d;
+ #pragma omp target map(from: d)
+ #pragma omp teams default(none) shared(a, b, d, u, v, w)
+ #pragma omp distribute simd firstprivate (a, b)
+ for (d = a; d < b; d++)
+ u[d] = v[d] + w[d];
+ return d;
+}
+
+__attribute__((noinline, noclone)) long
+f2 (long a, long b, long c)
+{
+ long d, e;
+ #pragma omp target map(from: d, e)
+ #pragma omp teams default(none) firstprivate (a, b, c) shared(d, e, u, v, w)
+ #pragma omp distribute simd linear(d) lastprivate(e)
+ for (d = a; d < b; d++)
+ {
+ u[d] = v[d] + w[d];
+ e = c + d * 5;
+ }
+ return d + e;
+}
+
+__attribute__((noinline, noclone)) long
+f3 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams default(none) shared(a1, b1, a2, b2, d1, d2, u, v, w)
+ #pragma omp distribute simd firstprivate (a1, b1, a2, b2) lastprivate(d1, d2) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+__attribute__((noinline, noclone)) long
+f4 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams default(none) firstprivate (a1, b1, a2, b2) shared(d1, d2, u, v, w)
+ #pragma omp distribute simd collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+int
+main ()
+{
+ if (f1 (0, 1024) != 1024
+ || f2 (0, 1024, 17) != 1024 + (17 + 5 * 1023)
+ || f3 (0, 32, 0, 32) != 64
+ || f4 (0, 32, 0, 32) != 64)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* PR middle-end/66199 */
+/* { dg-do run } */
+
+#pragma omp declare target
+int u[1024], v[1024], w[1024];
+#pragma omp end declare target
+
+__attribute__((noinline, noclone)) long
+f2 (long a, long b, long c)
+{
+ long d, e;
+ #pragma omp target map(from: d, e)
+ #pragma omp teams default(none) firstprivate (a, b, c) shared(d, e, u, v, w)
+ #pragma omp distribute lastprivate(d, e)
+ for (d = a; d < b; d++)
+ {
+ u[d] = v[d] + w[d];
+ e = c + d * 5;
+ }
+ return d + e;
+}
+
+__attribute__((noinline, noclone)) long
+f3 (long a1, long b1, long a2, long b2)
+{
+ long d1, d2;
+ #pragma omp target map(from: d1, d2)
+ #pragma omp teams default(none) shared(a1, b1, a2, b2, d1, d2, u, v, w)
+ #pragma omp distribute firstprivate (a1, b1, a2, b2) lastprivate(d1, d2) collapse(2)
+ for (d1 = a1; d1 < b1; d1++)
+ for (d2 = a2; d2 < b2; d2++)
+ u[d1 * 32 + d2] = v[d1 * 32 + d2] + w[d1 * 32 + d2];
+ return d1 + d2;
+}
+
+int
+main ()
+{
+ if (f2 (0, 1024, 17) != 1024 + (17 + 5 * 1023)
+ || f3 (0, 32, 0, 32) != 64)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* { dg-do run { xfail *-*-* } } */
+
+char z[10] = { 0 };
+
+__attribute__((noinline, noclone)) void
+foo (int (*x)[3][2], int *y, long w[1][2], int s, int t)
+{
+ unsigned long long a[9] = {};
+ short b[5] = {};
+ int i;
+ #pragma omp parallel for reduction(+:x[-1:2][:][0:2], z[t + 2:4]) \
+ reduction(*:y[-s:3]) reduction(|:a[s + 3:4]) \
+ reduction(&:w[s + 1:1][t:2]) reduction(max:b[2:])
+ for (i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1] += i;
+ if ((i & 15) == 1)
+ y[1] *= 3;
+ if ((i & 31) == 2)
+ y[2] *= 7;
+ if ((i & 63) == 3)
+ y[3] *= 17;
+ z[i / 32 + 2] += (i & 3);
+ if (i < 4)
+ z[i + 2] += i;
+ a[i / 32 + 2] |= 1ULL << (i & 30);
+ w[0][i & 1] &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (i = 0; i < 9; i++)
+ if (a[i] != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (b[0] != 0 || b[1] != 0 || b[2] != 22 || b[3] != 84 || b[4] != 127)
+ __builtin_abort ();
+}
+
+int
+main ()
+{
+ int a[4][3][2] = {};
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ int y[5] = { 0, 1, 1, 1, 0 };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ long w[1][2] = { ~0L, ~0L };
+ foo (&a[2], y, w, -1, 0);
+ if (__builtin_memcmp (a, a2, sizeof (a))
+ || __builtin_memcmp (y, y2, sizeof (y))
+ || __builtin_memcmp (z, z2, sizeof (z))
+ || w[0][0] != ~0x249249L
+ || w[0][1] != ~0x249249L)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+/* { dg-do run { xfail *-*-* } } */
+
+struct A { int t; };
+struct B { char t; };
+struct C { unsigned long long t; };
+struct D { long t; };
+void
+add (struct B *x, struct B *y)
+{
+ x->t += y->t;
+}
+void
+zero (struct B *x)
+{
+ x->t = 0;
+}
+void
+orit (struct C *x, struct C *y)
+{
+ y->t |= x->t;
+}
+#pragma omp declare reduction(+:struct A:omp_out.t += omp_in.t)
+#pragma omp declare reduction(+:struct B:add (&omp_out, &omp_in)) initializer(zero (&omp_priv))
+#pragma omp declare reduction(*:struct A:omp_out.t *= omp_in.t) initializer(omp_priv = { 1 })
+#pragma omp declare reduction(|:struct C:orit (&omp_in, &omp_out))
+#pragma omp declare reduction(&:struct D:omp_out.t = omp_out.t & omp_in.t) initializer(omp_priv = { ~0L })
+#pragma omp declare reduction(maxb:short:omp_out = omp_in > omp_out ? omp_in : omp_out) initializer(omp_priv = -6)
+
+struct B z[10];
+
+__attribute__((noinline, noclone)) void
+foo (struct A (*x)[3][2], struct A *y, struct D w[1][2], int s, int t)
+{
+ struct C a[9] = {};
+ short b[5] = {};
+ int i;
+ #pragma omp parallel for reduction(+:x[-1:2][:][0:2], z[t + 2:4]) \
+ reduction(*:y[-s:3]) reduction(|:a[s + 3:4]) \
+ reduction(&:w[s + 1:1][t:2]) reduction(maxb:b[2:])
+ for (i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1].t += i;
+ if ((i & 15) == 1)
+ y[1].t *= 3;
+ if ((i & 31) == 2)
+ y[2].t *= 7;
+ if ((i & 63) == 3)
+ y[3].t *= 17;
+ z[i / 32 + 2].t += (i & 3);
+ if (i < 4)
+ z[i + 2].t += i;
+ a[i / 32 + 2].t |= 1ULL << (i & 30);
+ w[0][i & 1].t &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (i = 0; i < 9; i++)
+ if (a[i].t != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (b[0] != 0 || b[1] != 0 || b[2] != 22 || b[3] != 84 || b[4] != 127)
+ __builtin_abort ();
+}
+
+int
+main ()
+{
+ struct A a[4][3][2] = {};
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ struct A y[5] = { { 0 }, { 1 }, { 1 }, { 1 }, { 0 } };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ struct D w[1][2] = { { { ~0L }, { ~0L } } };
+ foo (&a[2], y, w, -1, 0);
+ int i, j, k;
+ for (i = 0; i < 4; i++)
+ for (j = 0; j < 3; j++)
+ for (k = 0; k < 2; k++)
+ if (a[i][j][k].t != a2[i][j][k])
+ __builtin_abort ();
+ for (i = 0; i < 5; i++)
+ if (y[i].t != y2[i])
+ __builtin_abort ();
+ for (i = 0; i < 10; i++)
+ if (z[i].t != z2[i])
+ __builtin_abort ();
+ if (w[0][0].t != ~0x249249L || w[0][1].t != ~0x249249L)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+char z[10] = { 0 };
+
+__attribute__((noinline, noclone)) void
+foo (int (*x)[3][2], int *y, long w[1][2], int p1, long p2, long p3, int p4,
+ int p5, long p6, short p7, int s, int t)
+{
+ unsigned long long a[p7 + 4];
+ short b[p7];
+ int i;
+ for (i = 0; i < p7 + 4; i++)
+ {
+ if (i < p7)
+ b[i] = -6;
+ a[i] = 0;
+ }
+ #pragma omp parallel for reduction(+:x[-1:p1 + 1][:p2], z[t + 2:p3]) \
+ reduction(*:y[-s:p4]) reduction(|:a[s + 3:p5]) \
+ reduction(&:w[s + 1:p6 - 1][t:p6]) reduction(max:b[2:])
+ for (i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1] += i;
+ if ((i & 15) == 1)
+ y[1] *= 3;
+ if ((i & 31) == 2)
+ y[2] *= 7;
+ if ((i & 63) == 3)
+ y[3] *= 17;
+ z[i / 32 + 2] += (i & 3);
+ if (i < 4)
+ z[i + 2] += i;
+ a[i / 32 + 2] |= 1ULL << (i & 30);
+ w[0][i & 1] &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (i = 0; i < 9; i++)
+ if (a[i] != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (b[0] != -6 || b[1] != -6 || b[2] != 22 || b[3] != 84 || b[4] != 127)
+ __builtin_abort ();
+}
+
+int
+main ()
+{
+ int a[4][3][2] = {};
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ int y[5] = { 0, 1, 1, 1, 0 };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ long w[1][2] = { ~0L, ~0L };
+ foo (&a[2], y, w, 1, 3L, 4L, 3, 4, 2L, 5, -1, 0);
+ if (__builtin_memcmp (a, a2, sizeof (a))
+ || __builtin_memcmp (y, y2, sizeof (y))
+ || __builtin_memcmp (z, z2, sizeof (z))
+ || w[0][0] != ~0x249249L
+ || w[0][1] != ~0x249249L)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+struct A { int t; };
+struct B { char t; };
+struct C { unsigned long long t; };
+struct D { long t; };
+void
+add (struct B *x, struct B *y)
+{
+ x->t += y->t;
+}
+void
+zero (struct B *x)
+{
+ x->t = 0;
+}
+void
+orit (struct C *x, struct C *y)
+{
+ y->t |= x->t;
+}
+#pragma omp declare reduction(+:struct A:omp_out.t += omp_in.t)
+#pragma omp declare reduction(+:struct B:add (&omp_out, &omp_in)) initializer(zero (&omp_priv))
+#pragma omp declare reduction(*:struct A:omp_out.t *= omp_in.t) initializer(omp_priv = { 1 })
+#pragma omp declare reduction(|:struct C:orit (&omp_in, &omp_out))
+#pragma omp declare reduction(&:struct D:omp_out.t = omp_out.t & omp_in.t) initializer(omp_priv = { ~0L })
+#pragma omp declare reduction(maxb:short:omp_out = omp_in > omp_out ? omp_in : omp_out) initializer(omp_priv = -6)
+
+struct B z[10];
+
+__attribute__((noinline, noclone)) void
+foo (struct A (*x)[3][2], struct A *y, struct D w[1][2], int p1, long p2, long p3, int p4,
+ int p5, long p6, short p7, int s, int t)
+{
+ struct C a[p7 + 4];
+ short b[p7];
+ int i;
+ for (i = 0; i < p7 + 4; i++)
+ {
+ if (i < p7)
+ b[i] = -6;
+ a[i].t = 0;
+ }
+ #pragma omp parallel for reduction(+:x[-1:p1 + 1][:p2], z[t + 2:p3]) \
+ reduction(*:y[-s:p4]) reduction(|:a[s + 3:p5]) \
+ reduction(&:w[s + 1:p6 - 1][t:p6]) reduction(maxb:b[2:])
+ for (i = 0; i < 128; i++)
+ {
+ x[i / 64 - 1][i % 3][(i / 4) & 1].t += i;
+ if ((i & 15) == 1)
+ y[1].t *= 3;
+ if ((i & 31) == 2)
+ y[2].t *= 7;
+ if ((i & 63) == 3)
+ y[3].t *= 17;
+ z[i / 32 + 2].t += (i & 3);
+ if (i < 4)
+ z[i + 2].t += i;
+ a[i / 32 + 2].t |= 1ULL << (i & 30);
+ w[0][i & 1].t &= ~(1L << (i / 17 * 3));
+ if ((i % 23) > b[2])
+ b[2] = i % 23;
+ if ((i % 85) > b[3])
+ b[3] = i % 85;
+ if ((i % 192) > b[4])
+ b[4] = i % 192;
+ }
+ for (i = 0; i < 9; i++)
+ if (a[i].t != ((i < 6 && i >= 2) ? 0x55555555ULL : 0))
+ __builtin_abort ();
+ if (b[0] != -6 || b[1] != -6 || b[2] != 22 || b[3] != 84 || b[4] != 127)
+ __builtin_abort ();
+}
+
+int
+main ()
+{
+ struct A a[4][3][2] = {};
+ static int a2[4][3][2] = {{{ 0, 0 }, { 0, 0 }, { 0, 0 }},
+ {{ 312, 381 }, { 295, 356 }, { 337, 335 }},
+ {{ 1041, 975 }, { 1016, 1085 }, { 935, 1060 }},
+ {{ 0, 0 }, { 0, 0 }, { 0, 0 }}};
+ struct A y[5] = { { 0 }, { 1 }, { 1 }, { 1 }, { 0 } };
+ int y2[5] = { 0, 6561, 2401, 289, 0 };
+ char z2[10] = { 0, 0, 48, 49, 50, 51, 0, 0, 0, 0 };
+ struct D w[1][2] = { { { ~0L }, { ~0L } } };
+ foo (&a[2], y, w, 1, 3L, 4L, 3, 4, 2L, 5, -1, 0);
+ int i, j, k;
+ for (i = 0; i < 4; i++)
+ for (j = 0; j < 3; j++)
+ for (k = 0; k < 2; k++)
+ if (a[i][j][k].t != a2[i][j][k])
+ __builtin_abort ();
+ for (i = 0; i < 5; i++)
+ if (y[i].t != y2[i])
+ __builtin_abort ();
+ for (i = 0; i < 10; i++)
+ if (z[i].t != z2[i])
+ __builtin_abort ();
+ if (w[0][0].t != ~0x249249L || w[0][1].t != ~0x249249L)
+ __builtin_abort ();
+ return 0;
+}
--- /dev/null
+extern void abort (void);
+int a[16], b[16], c[16], d[5][2];
+
+__attribute__((noinline, noclone)) void
+foo (int x, int y)
+{
+ int i;
+ #pragma omp for schedule (static, 1) reduction (+:a[:3])
+ for (i = 0; i < 64; i++)
+ {
+ a[0] += i;
+ a[1] += 2 * i;
+ a[2] += 3 * i;
+ }
+ #pragma omp for schedule (guided) reduction (+:b[4:3])
+ for (i = 0; i < 64; i++)
+ {
+ b[4] += i;
+ b[5] += 2 * i;
+ b[6] += 3 * i;
+ }
+ #pragma omp for schedule (static) reduction (+:c[x:4])
+ for (i = 0; i < 64; i++)
+ {
+ c[9] += i;
+ c[10] += 2 * i;
+ c[11] += 3 * i;
+ c[12] += 4 * i;
+ }
+ #pragma omp for reduction (+:d[x - 8:2][y:])
+ for (i = 0; i < 64; i++)
+ {
+ d[1][0] += i;
+ d[1][1] += 2 * i;
+ d[2][0] += 3 * i;
+ d[2][1] += 4 * i;
+ }
+}
+
+int
+main ()
+{
+ int i;
+ #pragma omp parallel
+ foo (9, 0);
+ for (i = 0; i < 16; i++)
+ if (a[i] != (i < 3 ? 64 * 63 / 2 * (i + 1) : 0)
+ || b[i] != ((i >= 4 && i < 7) ? 64 * 63 / 2 * (i - 3) : 0)
+ || c[i] != ((i >= 9 && i < 13) ? 64 * 63 / 2 * (i - 8) : 0))
+ abort ();
+ for (i = 0; i < 5; i++)
+ if (d[i][0] != ((i && i <= 2) ? 64 * 63 / 2 * (2 * i - 1) : 0)
+ || d[i][1] != ((i && i <= 2) ? 64 * 63 / 2 * (2 * i) : 0))
+ abort ();
+ return 0;
+}
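
For reference (not part of the patch), the array-section reduction form exercised above
means each thread gets a private copy of just the listed section, initialized to the
reduction identity, and the copies are combined back into the original elements at the
end of the region. A minimal sketch:

extern void abort (void);

int
main ()
{
  int a[8] = { 0 };
  int i;
  /* Only the section a[2..5] takes part in the reduction; each thread
     gets a private, zero-initialized copy of those four elements and
     the copies are summed back into a[] at the end of the loop.  */
  #pragma omp parallel for reduction (+: a[2:4])
  for (i = 0; i < 64; i++)
    a[2 + (i & 3)] += i;
  for (i = 0; i < 8; i++)
    if (a[i] != ((i >= 2 && i < 6) ? 480 + 16 * (i - 2) : 0))
      abort ();
  return 0;
}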
if (omp_target_is_present (q, d) != 1
|| omp_target_is_present (&q[32], d) != 1
- || omp_target_is_present (&q[128], d) != 1)
+ || omp_target_is_present (&q[127], d) != 1)
abort ();
if (omp_target_memcpy (p, q, 128 * sizeof (int), sizeof (int), 0,
}
if (err)
abort ();
- int on = n;
- #pragma omp target firstprivate (n) map(tofrom: n)
- {
- n++;
- }
- if (on != n)
- abort ();
- #pragma omp target map(tofrom: n) private (n)
- {
- n = 25;
- }
- if (on != n)
- abort ();
- for (i = 0; i < n; i++)
- a[i] += i;
- #pragma omp target map(to:a) firstprivate (a) map(from:err) private(i)
- {
- err = 0;
- for (i = 0; i < n; i++)
- if (a[i] != 8 * i)
- err = 1;
- }
- if (err)
- abort ();
- for (i = 0; i < n; i++)
- a[i] += i;
- #pragma omp target firstprivate (a) map(to:a) map(from:err) private(i)
- {
- err = 0;
- for (i = 0; i < n; i++)
- if (a[i] != 9 * i)
- err = 1;
- }
- if (err)
- abort ();
- for (i = 0; i < n; i++)
- a[i] += i;
- #pragma omp target map(tofrom:a) map(from:err) private(a, i)
- {
- err = 0;
- for (i = 0; i < n; i++)
- a[i] = 7;
- #pragma omp parallel for reduction(|:err)
- for (i = 0; i < n; i++)
- if (a[i] != 7)
- err |= 1;
- }
- if (err)
- abort ();
- for (i = 0; i < n; i++)
- if (a[i] != 10 * i)
- abort ();
}
int
extern void abort (void);
-void
+__attribute__((noinline, noclone)) void
foo (int *p, int *q, int *r, int n, int m)
{
int i, err, *s = r;
+ int sep = 1;
+ #pragma omp target map(to:sep)
+ sep = 0;
#pragma omp target data map(to:p[0:8])
{
/* For zero length array sections, p points to the start of
- already mapped range, q to the end of it, and r does not point
- to an mapped range. */
+ already mapped range, q to the end of it (with nothing mapped
+ after it), and r does not point to a mapped range. */
#pragma omp target map(alloc:p[:0]) map(to:q[:0]) map(from:r[:0]) private(i) map(from:err) firstprivate (s)
{
err = 0;
for (i = 0; i < 8; i++)
- if (p[i] != i + 1 || q[i - 8] != i + 1)
+ if (p[i] != i + 1)
err = 1;
- if (p + 8 != q || (r != (int *) 0 && r != s))
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
err = 1;
}
if (err)
{
err = 0;
for (i = 0; i < 8; i++)
- if (p[i] != i + 1 || q[i - 8] != i + 1)
+ if (p[i] != i + 1)
err = 1;
- if (p + 8 != q || (r != (int *) 0 && r != s))
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
err = 1;
}
if (err)
{
err = 0;
for (i = 0; i < 8; i++)
- if (p[i] != i + 1 || q[i - 8] != i + 1)
+ if (p[i] != i + 1)
err = 1;
- if (p + 8 != q || (r != (int *) 0 && r != s))
+ if (sep)
+ {
+ if (q != (int *) 0 || r != (int *) 0)
+ err = 1;
+ }
+ else if (p + 8 != q || r != s)
err = 1;
}
if (err)
for (i = 0; i < 8; i++)
if (p[i] != i + 1)
err = 1;
- if (q[0] != 9 || r != q + 1)
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
err = 1;
}
if (err)
for (i = 0; i < 8; i++)
if (p[i] != i + 1)
err = 1;
- if (q[0] != 9 || r != q + 1)
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
err = 1;
}
if (err)
for (i = 0; i < 8; i++)
if (p[i] != i + 1)
err = 1;
- if (q[0] != 9 || r != q + 1)
+ if (q[0] != 9)
+ err = 1;
+ else if (sep)
+ {
+ if (r != (int *) 0)
+ err = 1;
+ }
+ else if (r != q + 1)
err = 1;
}
if (err)
--- /dev/null
+extern void abort (void);
+
+int g;
+#pragma omp declare target (g)
+
+#pragma omp declare target
+int
+foo (void)
+{
+ static int s;
+ return ++s + g;
+}
+#pragma omp end declare target
+
+int
+bar (void)
+{
+ static int s;
+ #pragma omp declare target to (s)
+ return ++s;
+}
+#pragma omp declare target (bar)
+
+int
+main ()
+{
+ int r;
+ #pragma omp target map(from:r)
+ {
+ r = (foo () == 1) + (bar () == 1);
+ r += (foo () == 2) + (bar () == 2);
+ }
+ if (r != 4)
+ abort ();
+ return 0;
+}
--- /dev/null
+#include <omp.h>
+#include <stdlib.h>
+
+struct S { char p[64]; int a; int b[2]; long c[4]; int *d; char q[64]; };
+
+__attribute__((noinline, noclone)) void
+foo (struct S s)
+{
+ int d = omp_get_default_device ();
+ int id = omp_get_initial_device ();
+ int sep = 1;
+
+ if (d < 0 || d >= omp_get_num_devices ())
+ d = id;
+
+ int err;
+ #pragma omp target map(tofrom: s.a, s.b, s.c[1:2], s.d[-2:3]) map(to: sep) map(from: err)
+ {
+ err = s.a != 11 || s.b[0] != 12 || s.b[1] != 13;
+ err |= s.c[1] != 15 || s.c[2] != 16 || s.d[-2] != 18 || s.d[-1] != 19 || s.d[0] != 20;
+ s.a = 35; s.b[0] = 36; s.b[1] = 37;
+ s.c[1] = 38; s.c[2] = 39; s.d[-2] = 40; s.d[-1] = 41; s.d[0] = 42;
+ sep = 0;
+ }
+ if (err) abort ();
+ err = s.a != 35 || s.b[0] != 36 || s.b[1] != 37;
+ err |= s.c[1] != 38 || s.c[2] != 39 || s.d[-2] != 40 || s.d[-1] != 41 || s.d[0] != 42;
+ if (err) abort ();
+ s.a = 50; s.b[0] = 49; s.b[1] = 48;
+ s.c[1] = 47; s.c[2] = 46; s.d[-2] = 45; s.d[-1] = 44; s.d[0] = 43;
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)))
+ abort ();
+ #pragma omp target data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3])
+ {
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d))
+ abort ();
+ #pragma omp target update to(s.a, s.b, s.c[1:2], s.d[-2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3]) map(from: err)
+ {
+ err = s.a != 50 || s.b[0] != 49 || s.b[1] != 48;
+ err |= s.c[1] != 47 || s.c[2] != 46 || s.d[-2] != 45 || s.d[-1] != 44 || s.d[0] != 43;
+ s.a = 17; s.b[0] = 18; s.b[1] = 19;
+ s.c[1] = 20; s.c[2] = 21; s.d[-2] = 22; s.d[-1] = 23; s.d[0] = 24;
+ }
+ #pragma omp target update from(s.a, s.b, s.c[1:2], s.d[-2:3])
+ }
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 17 || s.b[0] != 18 || s.b[1] != 19;
+ err |= s.c[1] != 20 || s.c[2] != 21 || s.d[-2] != 22 || s.d[-1] != 23 || s.d[0] != 24;
+ if (err) abort ();
+ s.a = 33; s.b[0] = 34; s.b[1] = 35;
+ s.c[1] = 36; s.c[2] = 37; s.d[-2] = 38; s.d[-1] = 39; s.d[0] = 40;
+ #pragma omp target enter data map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d))
+ abort ();
+ #pragma omp target enter data map(always, to: s.a, s.b, s.c[1:2], s.d[-2:3])
+ #pragma omp target map(alloc: s.a, s.b, s.c[1:2], s.d[-2:3]) map(from: err)
+ {
+ err = s.a != 33 || s.b[0] != 34 || s.b[1] != 35;
+ err |= s.c[1] != 36 || s.c[2] != 37 || s.d[-2] != 38 || s.d[-1] != 39 || s.d[0] != 40;
+ s.a = 49; s.b[0] = 48; s.b[1] = 47;
+ s.c[1] = 46; s.c[2] = 45; s.d[-2] = 44; s.d[-1] = 43; s.d[0] = 42;
+ }
+ #pragma omp target exit data map(always, from: s.a, s.b, s.c[1:2], s.d[-2:3])
+ if (!omp_target_is_present (&s.a, d)
+ || !omp_target_is_present (s.b, d)
+ || !omp_target_is_present (&s.c[1], d)
+ || !omp_target_is_present (s.d, d)
+ || !omp_target_is_present (&s.d[-2], d))
+ abort ();
+ #pragma omp target exit data map(release: s.a, s.b, s.c[1:2], s.d[-2:3])
+ if (sep
+ && (omp_target_is_present (&s.a, d)
+ || omp_target_is_present (s.b, d)
+ || omp_target_is_present (&s.c[1], d)
+ || omp_target_is_present (s.d, d)
+ || omp_target_is_present (&s.d[-2], d)))
+ abort ();
+ if (err) abort ();
+ err = s.a != 49 || s.b[0] != 48 || s.b[1] != 47;
+ err |= s.c[1] != 46 || s.c[2] != 45 || s.d[-2] != 44 || s.d[-1] != 43 || s.d[0] != 42;
+ if (err) abort ();
+}
+
+int
+main ()
+{
+ int d[3] = { 18, 19, 20 };
+ struct S s = { {}, 11, { 12, 13 }, { 14, 15, 16, 17 }, d + 2, {} };
+ foo (s);
+ return 0;
+}
--- /dev/null
+extern void abort (void);
+
+#pragma omp declare target
+int v = 6;
+#pragma omp end declare target
+
+int
+main ()
+{
+ #pragma omp target /* predetermined map(tofrom: v) */
+ v++;
+ #pragma omp target update from (v)
+ if (v != 7)
+ abort ();
+ #pragma omp parallel private (v) num_threads (1)
+ {
+ #pragma omp target /* predetermined firstprivate(v) */
+ v++;
+ }
+ #pragma omp target update from (v)
+ if (v != 7)
+ abort ();
+ return 0;
+}
--- /dev/null
+/* { dg-do run } */
+
+#include <omp.h>
+#include <stdlib.h>
+
+int v = 6;
+
+void
+bar (long *x, long *y)
+{
+ *x += 2;
+ *y += 3;
+}
+
+int
+baz (void)
+{
+ return 5;
+}
+
+#pragma omp declare target to (bar, baz, v)
+
+__attribute__((noinline, noclone)) void
+foo (int a, int b, long c, long d)
+{
+ int err;
+ if (omp_get_num_teams () != 1)
+ abort ();
+ /* The OpenMP 4.5 spec says that these expressions are evaluated before
+ the target region on a combined target teams construct, so those
+ cases are always fine. */
+ #pragma omp target map(from: err)
+ err = omp_get_num_teams () != 1;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1;
+ if (err)
+ abort ();
+ #pragma omp target teams map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (4)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > 4;
+ if (err)
+ abort ();
+ #pragma omp target teams num_teams (4) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > 4;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams thread_limit (7)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_thread_limit () > 7;
+ if (err)
+ abort ();
+ #pragma omp target teams thread_limit (7) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_thread_limit () > 7;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (4) thread_limit (8)
+ {
+ {
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > 4 || omp_get_thread_limit () > 8;
+ }
+ }
+ if (err)
+ abort ();
+ #pragma omp target teams num_teams (4) thread_limit (8) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > 4 || omp_get_thread_limit () > 8;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (a) thread_limit (b)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > a || omp_get_thread_limit () > b;
+ if (err)
+ abort ();
+ #pragma omp target teams num_teams (a) thread_limit (b) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > a || omp_get_thread_limit () > b;
+ if (err)
+ abort ();
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > c + 1 || omp_get_thread_limit () > d - 1;
+ if (err)
+ abort ();
+ #pragma omp target teams num_teams (c + 1) thread_limit (d - 1) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > c + 1 || omp_get_thread_limit () > d - 1;
+ if (err)
+ abort ();
+ #pragma omp target map (always, to: c, d) map(from: err)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > c + 1 || omp_get_thread_limit () > d - 1;
+ if (err)
+ abort ();
+ #pragma omp target data map (to: c, d)
+ {
+ #pragma omp target defaultmap (tofrom: scalar)
+ bar (&c, &d);
+ /* This is one of the cases which can't be optimized in general:
+ c and d are (or could be) already mapped, and whether their
+ device and original values match is unclear. */
+ #pragma omp target map (to: c, d) map(from: err)
+ #pragma omp teams num_teams (c + 1) thread_limit (d - 1)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > c + 1 || omp_get_thread_limit () > d - 1;
+ if (err)
+ abort ();
+ }
+ /* This can't be optimized, because function calls are involved
+ inside the target region. */
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (baz () + 1) thread_limit (baz () - 1)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > baz () + 1 || omp_get_thread_limit () > baz () - 1;
+ if (err)
+ abort ();
+ #pragma omp target teams num_teams (baz () + 1) thread_limit (baz () - 1) map(from: err)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > baz () + 1 || omp_get_thread_limit () > baz () - 1;
+ if (err)
+ abort ();
+ /* This one can't be optimized, as v might have a different value on
+ the host and on the target. */
+ #pragma omp target map(from: err)
+ #pragma omp teams num_teams (v + 1) thread_limit (v - 1)
+ err = omp_get_num_teams () < 1 || omp_get_thread_limit () < 1
+ || omp_get_num_teams () > v + 1 || omp_get_thread_limit () > v - 1;
+ if (err)
+ abort ();
+}
+
+int
+main ()
+{
+ foo (3, 5, 7, 9);
+ return 0;
+}