number of queries is algorithmically limited to the number of
stores on all paths from the load to the function entry.
-@item max-pre-hoist-insert-iterations
-The maximum number of iterations doing insertion during code
-hoisting which is done as part of the partial redundancy elimination
-insertion phase.
-
@item ira-max-loops-num
IRA uses regional register allocation by default. If a function
contains more loops than the number given by this parameter, only at most
Common Joined UInteger Var(param_max_predicted_iterations) Init(100) IntegerRange(0, 65536) Param Optimization
The maximum number of loop iterations we predict statically.
--param=max-pre-hoist-insert-iterations=
-Common Joined UInteger Var(param_max_pre_hoist_insert_iterations) Init(3) Param Optimization
-The maximum number of insert iterations done for PRE code hoisting.
-
-param=max-reload-search-insns=
Common Joined UInteger Var(param_max_reload_search_insns) Init(100) Param Optimization
The maximum number of instructions to search backward when looking for equivalent reload.
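For orientation, the Var(...) clause in each params.opt record above exposes the value to pass code as a global of the same name (e.g. param_max_reload_search_insns). Below is a minimal standalone sketch of the budgeted backward search such a cap implements; struct insn and find_equiv_capped are illustrative names, not GCC internals.

#include <stddef.h>

/* Illustrative only: a backward walk over a chain that gives up once a
   fixed budget of steps is spent, in the spirit of
   max-reload-search-insns.  */
struct insn { struct insn *prev; int is_equiv; };

static struct insn *
find_equiv_capped (struct insn *from, int budget)
{
  for (struct insn *i = from; i != NULL && budget-- > 0; i = i->prev)
    if (i->is_equiv)
      return i; /* found within the search budget */
  return NULL;  /* budget exhausted or chain ended */
}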
/* Now inserting x + y five times is unnecessary but the cascading
cannot be avoided with the simple-minded dataflow. But make sure
- we do the insertions all in the first iteration. */
-/* { dg-final { scan-tree-dump "insert iterations == 2" "pre" } } */
+ we do not iterate PRE insertion. */
+/* { dg-final { scan-tree-dump "insert iterations == 1" "pre" } } */
/* { dg-final { scan-tree-dump "HOIST inserted: 5" "pre" } } */
fprintf (dump_file, "Starting insert iteration %d\n", num_iterations);
changed = false;
- /* Insert expressions for hoisting. Do a backward walk here since
- inserting into BLOCK exposes new opportunities in its predecessors.
- Since PRE and hoist insertions can cause back-to-back iteration
- limit that on the hoist side. */
- if (flag_code_hoisting
- && num_iterations <= param_max_pre_hoist_insert_iterations)
- for (int idx = rpo_num - 1; idx >= 0; --idx)
- {
- basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[idx]);
- if (EDGE_COUNT (block->succs) >= 2)
- changed |= do_hoist_insertion (block);
- }
for (int idx = 0; idx < rpo_num; ++idx)
{
basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[idx]);
statistics_histogram_event (cfun, "insert iterations", num_iterations);
+ /* AVAIL_OUT is not needed after insertion, so we don't have to
+ propagate NEW_SETS from hoist insertion. */
+ FOR_ALL_BB_FN (bb, cfun)
+ {
+ bitmap_set_pool.remove (NEW_SETS (bb));
+ NEW_SETS (bb) = NULL;
+ }
+
+ /* Insert expressions for hoisting. Do a backward walk here since
+ inserting into BLOCK exposes new opportunities in its predecessors.
+ Since PRE and hoist insertions can cause back-to-back iteration,
+ and since we are interested in hoisting opportunities exposed by
+ PRE insertion but not in PRE opportunities exposed by hoisting, do
+ hoist insertion only after the PRE insertion iteration has finished,
+ and do not iterate it. */
+ if (flag_code_hoisting)
+ for (int idx = rpo_num - 1; idx >= 0; --idx)
+ {
+ basic_block block = BASIC_BLOCK_FOR_FN (cfun, rpo[idx]);
+ if (EDGE_COUNT (block->succs) >= 2)
+ changed |= do_hoist_insertion (block);
+ }
+
free (rpo);
}
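To make the EDGE_COUNT-guarded walk concrete, here is a hedged before/after sketch, in source terms, of what a single hoist insertion at a two-successor block amounts to; the pass of course operates on GIMPLE, and the function names below are made up.

/* Before hoisting: a + b is computed on every path out of the branch.  */
int
before_hoist (int a, int b, int p)
{
  int r;
  if (p)
    r = a + b;
  else
    r = a + b;
  return r;
}

/* Conceptually after do_hoist_insertion: the computation is placed
   ahead of the branch and both arms reuse it.  */
int
after_hoist (int a, int b, int p)
{
  int t = a + b; /* inserted before the two-successor block */
  int r;
  if (p)
    r = t;
  else
    r = t;
  return r;
}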