Merge dataflow branch into mainline
[gcc.git] / gcc / haifa-sched.c
1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,
3 2001, 2002, 2003, 2004, 2005, 2006, 2007 Free Software Foundation, Inc.
4 Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
5 and currently maintained by, Jim Wilson (wilson@cygnus.com)
6
7 This file is part of GCC.
8
9 GCC is free software; you can redistribute it and/or modify it under
10 the terms of the GNU General Public License as published by the Free
11 Software Foundation; either version 2, or (at your option) any later
12 version.
13
14 GCC is distributed in the hope that it will be useful, but WITHOUT ANY
15 WARRANTY; without even the implied warranty of MERCHANTABILITY or
16 FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
17 for more details.
18
19 You should have received a copy of the GNU General Public License
20 along with GCC; see the file COPYING. If not, write to the Free
21 Software Foundation, 51 Franklin Street, Fifth Floor, Boston, MA
22 02110-1301, USA. */
23
24 /* Instruction scheduling pass. This file, along with sched-deps.c,
25    contains the generic parts.  The actual entry point for the normal
26    instruction scheduling pass is found in sched-rgn.c.
27
28 We compute insn priorities based on data dependencies. Flow
29 analysis only creates a fraction of the data-dependencies we must
30 observe: namely, only those dependencies which the combiner can be
31 expected to use. For this pass, we must therefore create the
32    remaining dependencies we need to observe; register dependencies,
33 memory dependencies, dependencies to keep function calls in order,
34 and the dependence between a conditional branch and the setting of
35 condition codes are all dealt with here.
36
37 The scheduler first traverses the data flow graph, starting with
38 the last instruction, and proceeding to the first, assigning values
39 to insn_priority as it goes. This sorts the instructions
40 topologically by data dependence.
41
42 Once priorities have been established, we order the insns using
43 list scheduling. This works as follows: starting with a list of
44 all the ready insns, and sorted according to priority number, we
45 schedule the insn from the end of the list by placing its
46 predecessors in the list according to their priority order. We
47 consider this insn scheduled by setting the pointer to the "end" of
48 the list to point to the previous insn. When an insn has no
49 predecessors, we either queue it until sufficient time has elapsed
50 or add it to the ready list. As the instructions are scheduled or
51 when stalls are introduced, the queue advances and dumps insns into
52 the ready list. When all insns down to the lowest priority have
53 been scheduled, the critical path of the basic block has been made
54 as short as possible. The remaining insns are then scheduled in
55 remaining slots.
56
57 The following list shows the order in which we want to break ties
58 among insns in the ready list:
59
60 1. choose insn with the longest path to end of bb, ties
61 broken by
62 2. choose insn with least contribution to register pressure,
63 ties broken by
64      3.  prefer in-block over interblock motion, ties broken by
65      4.  prefer useful over speculative motion, ties broken by
66 5. choose insn with largest control flow probability, ties
67 broken by
68 6. choose insn with the least dependences upon the previously
69 scheduled insn, or finally
70      7.  choose the insn which has the most insns dependent on it.
71 8. choose insn with lowest UID.
72
73 Memory references complicate matters. Only if we can be certain
74 that memory references are not part of the data dependency graph
75 (via true, anti, or output dependence), can we move operations past
76 memory references. To first approximation, reads can be done
77 independently, while writes introduce dependencies. Better
78 approximations will yield fewer dependencies.
79
80 Before reload, an extended analysis of interblock data dependences
81 is required for interblock scheduling. This is performed in
82 compute_block_backward_dependences ().
83
84 Dependencies set up by memory references are treated in exactly the
85 same way as other dependencies, by using insn backward dependences
86 INSN_BACK_DEPS. INSN_BACK_DEPS are translated into forward dependences
87    INSN_FORW_DEPS for the purpose of forward list scheduling.
88
89 Having optimized the critical path, we may have also unduly
90 extended the lifetimes of some registers. If an operation requires
91 that constants be loaded into registers, it is certainly desirable
92 to load those constants as early as necessary, but no earlier.
93 I.e., it will not do to load up a bunch of registers at the
94 beginning of a basic block only to use them at the end, if they
95 could be loaded later, since this may result in excessive register
96 utilization.
97
98 Note that since branches are never in basic blocks, but only end
99 basic blocks, this pass will not move branches. But that is ok,
100 since we can use GNU's delayed branch scheduling pass to take care
101 of this case.
102
103 Also note that no further optimizations based on algebraic
104 identities are performed, so this pass would be a good one to
105 perform instruction splitting, such as breaking up a multiply
106 instruction into shifts and adds where that is profitable.
107
108 Given the memory aliasing analysis that this pass should perform,
109 it should be possible to remove redundant stores to memory, and to
110 load values from registers instead of hitting memory.
111
112 Before reload, speculative insns are moved only if a 'proof' exists
113 that no exception will be caused by this, and if no live registers
114 exist that inhibit the motion (live registers constraints are not
115 represented by data dependence edges).
116
117 This pass must update information that subsequent passes expect to
118 be correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
119 reg_n_calls_crossed, and reg_live_length. Also, BB_HEAD, BB_END.
120
121 The information in the line number notes is carefully retained by
122 this pass. Notes that refer to the starting and ending of
123 exception regions are also carefully retained by this pass. All
124 other NOTE insns are grouped in their same relative order at the
125 beginning of basic blocks and regions that have been scheduled. */
126 \f
127 #include "config.h"
128 #include "system.h"
129 #include "coretypes.h"
130 #include "tm.h"
131 #include "toplev.h"
132 #include "rtl.h"
133 #include "tm_p.h"
134 #include "hard-reg-set.h"
135 #include "regs.h"
136 #include "function.h"
137 #include "flags.h"
138 #include "insn-config.h"
139 #include "insn-attr.h"
140 #include "except.h"
141 #include "toplev.h"
142 #include "recog.h"
143 #include "sched-int.h"
144 #include "target.h"
145 #include "output.h"
146 #include "params.h"
147 #include "dbgcnt.h"
148
149 #ifdef INSN_SCHEDULING
150
151 /* issue_rate is the number of insns that can be scheduled in the same
152 machine cycle. It can be defined in the config/mach/mach.h file,
153 otherwise we set it to 1. */
154
155 static int issue_rate;
156
157 /* sched-verbose controls the amount of debugging output the
158 scheduler prints. It is controlled by -fsched-verbose=N:
159    N>0 and no -dSR : the output is directed to stderr.
160 N>=10 will direct the printouts to stderr (regardless of -dSR).
161 N=1: same as -dSR.
162 N=2: bb's probabilities, detailed ready list info, unit/insn info.
163 N=3: rtl at abort point, control-flow, regions info.
164 N=5: dependences info. */
165
166 static int sched_verbose_param = 0;
167 int sched_verbose = 0;
168
169 /* Debugging file. All printouts are sent to dump, which is always set,
170    either to stderr, or to the dump listing file (-dSR).  */
171 FILE *sched_dump = 0;
172
173 /* Highest uid before scheduling. */
174 static int old_max_uid;
175
176 /* fix_sched_param() is called from toplev.c upon detection
177 of the -fsched-verbose=N option. */
178
179 void
180 fix_sched_param (const char *param, const char *val)
181 {
182 if (!strcmp (param, "verbose"))
183 sched_verbose_param = atoi (val);
184 else
185 warning (0, "fix_sched_param: unknown param: %s", param);
186 }
187
188 struct haifa_insn_data *h_i_d;
189
190 #define INSN_TICK(INSN) (h_i_d[INSN_UID (INSN)].tick)
191 #define INTER_TICK(INSN) (h_i_d[INSN_UID (INSN)].inter_tick)
192
193 /* If INSN_TICK of an instruction is equal to INVALID_TICK,
194 then it should be recalculated from scratch. */
195 #define INVALID_TICK (-(max_insn_queue_index + 1))
196 /* The minimal value of the INSN_TICK of an instruction. */
197 #define MIN_TICK (-max_insn_queue_index)
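/* For example (illustrative): if max_insn_queue_index is 63, then MIN_TICK
   is -63 and INVALID_TICK is -64, so INVALID_TICK always lies below the
   range of valid tick values.  */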
198
199 /* Issue points are used to distinguish between instructions in max_issue ().
200 For now, all instructions are equally good. */
201 #define ISSUE_POINTS(INSN) 1
202
203 /* List of important notes we must keep around. This is a pointer to the
204 last element in the list. */
205 static rtx note_list;
206
207 static struct spec_info_def spec_info_var;
208 /* Description of the speculative part of the scheduling.
209 If NULL - no speculation. */
210 static spec_info_t spec_info;
211
212 /* True if a recovery block was added during scheduling of the current
213    block.  Used to determine whether we need to fix INSN_TICKs.  */
214 static bool added_recovery_block_p;
215
216 /* Counters of different types of speculative instructions. */
217 static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
218
219 /* Array used in {unlink, restore}_bb_notes. */
220 static rtx *bb_header = 0;
221
222 /* Number of basic_blocks. */
223 static int old_last_basic_block;
224
225 /* Basic block after which recovery blocks will be created. */
226 static basic_block before_recovery;
227
228 /* Queues, etc. */
229
230 /* An instruction is ready to be scheduled when all insns preceding it
231 have already been scheduled. It is important to ensure that all
232 insns which use its result will not be executed until its result
233 has been computed. An insn is maintained in one of four structures:
234
235 (P) the "Pending" set of insns which cannot be scheduled until
236 their dependencies have been satisfied.
237 (Q) the "Queued" set of insns that can be scheduled when sufficient
238 time has passed.
239 (R) the "Ready" list of unscheduled, uncommitted insns.
240 (S) the "Scheduled" list of insns.
241
242 Initially, all insns are either "Pending" or "Ready" depending on
243 whether their dependencies are satisfied.
244
245 Insns move from the "Ready" list to the "Scheduled" list as they
246 are committed to the schedule. As this occurs, the insns in the
247 "Pending" list have their dependencies satisfied and move to either
248 the "Ready" list or the "Queued" set depending on whether
249 sufficient time has passed to make them ready. As time passes,
250 insns move from the "Queued" set to the "Ready" list.
251
252    The "Pending" list (P) consists of the insns in the INSN_FORW_DEPS of the
253 unscheduled insns, i.e., those that are ready, queued, and pending.
254 The "Queued" set (Q) is implemented by the variable `insn_queue'.
255 The "Ready" list (R) is implemented by the variables `ready' and
256 `n_ready'.
257 The "Scheduled" list (S) is the new insn chain built by this pass.
258
259 The transition (R->S) is implemented in the scheduling loop in
260 `schedule_block' when the best insn to schedule is chosen.
261 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
262 insns move from the ready list to the scheduled list.
263    The transition (Q->R) is implemented in 'queue_to_ready' as time
264 passes or stalls are introduced. */
265
266 /* Implement a circular buffer to delay instructions until sufficient
267 time has passed. For the new pipeline description interface,
268    MAX_INSN_QUEUE_INDEX is a power of two minus one which is not less
269    than the maximal time of instruction execution computed by genattr.c
270    on the basis of the maximal time of functional unit reservations and
271    of getting a result.  This is the longest time an insn may be queued.  */
272
273 static rtx *insn_queue;
274 static int q_ptr = 0;
275 static int q_size = 0;
276 #define NEXT_Q(X) (((X)+1) & max_insn_queue_index)
277 #define NEXT_Q_AFTER(X, C) (((X)+C) & max_insn_queue_index)
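/* For example (illustrative): with max_insn_queue_index == 7,
   NEXT_Q_AFTER (6, 3) == (6 + 3) & 7 == 1, i.e. the queue index wraps
   around, which is why max_insn_queue_index must be a power of two
   minus one.  */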
278
279 #define QUEUE_SCHEDULED (-3)
280 #define QUEUE_NOWHERE (-2)
281 #define QUEUE_READY (-1)
282 /* QUEUE_SCHEDULED - INSN is scheduled.
283 QUEUE_NOWHERE - INSN isn't scheduled yet and is neither in
284 queue or ready list.
285 QUEUE_READY - INSN is in ready list.
286 N >= 0 - INSN queued for X [where NEXT_Q_AFTER (q_ptr, X) == N] cycles. */
287
288 #define QUEUE_INDEX(INSN) (h_i_d[INSN_UID (INSN)].queue_index)
289
290 /* The following variable value refers to all current and future
291 reservations of the processor units. */
292 state_t curr_state;
293
294 /* The following variable value is size of memory representing all
295 current and future reservations of the processor units. */
296 static size_t dfa_state_size;
297
298 /* The following array is used to find the best insn from ready when
299 the automaton pipeline interface is used. */
300 static char *ready_try;
301
302 /* Describe the ready list of the scheduler.
303 VEC holds space enough for all insns in the current region. VECLEN
304 says how many exactly.
305 FIRST is the index of the element with the highest priority; i.e. the
306 last one in the ready list, since elements are ordered by ascending
307 priority.
308 N_READY determines how many insns are on the ready list. */
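/* Illustrative example (assumed numbers): with VECLEN == 8, FIRST == 7 and
   N_READY == 3, the ready insns occupy vec[5..7]; vec[7] is the
   highest-priority insn (index 0 for ready_element) and ready_lastpos
   returns &vec[5], the lowest-priority one.  */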
309
310 struct ready_list
311 {
312 rtx *vec;
313 int veclen;
314 int first;
315 int n_ready;
316 };
317
318 /* The pointer to the ready list. */
319 static struct ready_list *readyp;
320
321 /* Scheduling clock. */
322 static int clock_var;
323
324 /* Number of instructions in current scheduling region. */
325 static int rgn_n_insns;
326
327 static int may_trap_exp (rtx, int);
328
329 /* Nonzero iff the address is composed of at most one register.  */
330 #define CONST_BASED_ADDRESS_P(x) \
331 (REG_P (x) \
332 || ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS \
333 || (GET_CODE (x) == LO_SUM)) \
334 && (CONSTANT_P (XEXP (x, 0)) \
335 || CONSTANT_P (XEXP (x, 1)))))
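/* For example (illustrative): (reg R), (plus (reg R) (const_int 4)) and
   (lo_sum (reg R) (symbol_ref S)) all satisfy CONST_BASED_ADDRESS_P,
   since each address uses at most one base register.  */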
336
337 /* Returns a class that insn with GET_DEST(insn)=x may belong to,
338 as found by analyzing insn's expression. */
339
340 static int
341 may_trap_exp (rtx x, int is_store)
342 {
343 enum rtx_code code;
344
345 if (x == 0)
346 return TRAP_FREE;
347 code = GET_CODE (x);
348 if (is_store)
349 {
350 if (code == MEM && may_trap_p (x))
351 return TRAP_RISKY;
352 else
353 return TRAP_FREE;
354 }
355 if (code == MEM)
356 {
357 /* The insn uses memory: a volatile load. */
358 if (MEM_VOLATILE_P (x))
359 return IRISKY;
360 /* An exception-free load. */
361 if (!may_trap_p (x))
362 return IFREE;
363 /* A load with 1 base register, to be further checked. */
364 if (CONST_BASED_ADDRESS_P (XEXP (x, 0)))
365 return PFREE_CANDIDATE;
366 /* No info on the load, to be further checked. */
367 return PRISKY_CANDIDATE;
368 }
369 else
370 {
371 const char *fmt;
372 int i, insn_class = TRAP_FREE;
373
374 /* Neither store nor load, check if it may cause a trap. */
375 if (may_trap_p (x))
376 return TRAP_RISKY;
377 /* Recursive step: walk the insn... */
378 fmt = GET_RTX_FORMAT (code);
379 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
380 {
381 if (fmt[i] == 'e')
382 {
383 int tmp_class = may_trap_exp (XEXP (x, i), is_store);
384 insn_class = WORST_CLASS (insn_class, tmp_class);
385 }
386 else if (fmt[i] == 'E')
387 {
388 int j;
389 for (j = 0; j < XVECLEN (x, i); j++)
390 {
391 int tmp_class = may_trap_exp (XVECEXP (x, i, j), is_store);
392 insn_class = WORST_CLASS (insn_class, tmp_class);
393 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
394 break;
395 }
396 }
397 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
398 break;
399 }
400 return insn_class;
401 }
402 }
403
404 /* Classifies insn for the purpose of verifying that it can be
405    moved speculatively, by examining its patterns, returning:
406 TRAP_RISKY: store, or risky non-load insn (e.g. division by variable).
407 TRAP_FREE: non-load insn.
408 IFREE: load from a globally safe location.
409 IRISKY: volatile load.
410    PFREE_CANDIDATE, PRISKY_CANDIDATE: loads that need to be checked for
411 being either PFREE or PRISKY. */
412
413 int
414 haifa_classify_insn (rtx insn)
415 {
416 rtx pat = PATTERN (insn);
417 int tmp_class = TRAP_FREE;
418 int insn_class = TRAP_FREE;
419 enum rtx_code code;
420
421 if (GET_CODE (pat) == PARALLEL)
422 {
423 int i, len = XVECLEN (pat, 0);
424
425 for (i = len - 1; i >= 0; i--)
426 {
427 code = GET_CODE (XVECEXP (pat, 0, i));
428 switch (code)
429 {
430 case CLOBBER:
431 /* Test if it is a 'store'. */
432 tmp_class = may_trap_exp (XEXP (XVECEXP (pat, 0, i), 0), 1);
433 break;
434 case SET:
435 /* Test if it is a store. */
436 tmp_class = may_trap_exp (SET_DEST (XVECEXP (pat, 0, i)), 1);
437 if (tmp_class == TRAP_RISKY)
438 break;
439 /* Test if it is a load. */
440 tmp_class
441 = WORST_CLASS (tmp_class,
442 may_trap_exp (SET_SRC (XVECEXP (pat, 0, i)),
443 0));
444 break;
445 case COND_EXEC:
446 case TRAP_IF:
447 tmp_class = TRAP_RISKY;
448 break;
449 default:
450 ;
451 }
452 insn_class = WORST_CLASS (insn_class, tmp_class);
453 if (insn_class == TRAP_RISKY || insn_class == IRISKY)
454 break;
455 }
456 }
457 else
458 {
459 code = GET_CODE (pat);
460 switch (code)
461 {
462 case CLOBBER:
463 /* Test if it is a 'store'. */
464 tmp_class = may_trap_exp (XEXP (pat, 0), 1);
465 break;
466 case SET:
467 /* Test if it is a store. */
468 tmp_class = may_trap_exp (SET_DEST (pat), 1);
469 if (tmp_class == TRAP_RISKY)
470 break;
471 /* Test if it is a load. */
472 tmp_class =
473 WORST_CLASS (tmp_class,
474 may_trap_exp (SET_SRC (pat), 0));
475 break;
476 case COND_EXEC:
477 case TRAP_IF:
478 tmp_class = TRAP_RISKY;
479 break;
480 default:;
481 }
482 insn_class = tmp_class;
483 }
484
485 return insn_class;
486 }
487
488 /* A typedef for rtx vector. */
489 typedef VEC(rtx, heap) *rtx_vec_t;
490
491 /* Forward declarations. */
492
493 static int priority (rtx);
494 static int rank_for_schedule (const void *, const void *);
495 static void swap_sort (rtx *, int);
496 static void queue_insn (rtx, int);
497 static int schedule_insn (rtx);
498 static int find_set_reg_weight (rtx);
499 static void find_insn_reg_weight (basic_block);
500 static void find_insn_reg_weight1 (rtx);
501 static void adjust_priority (rtx);
502 static void advance_one_cycle (void);
503
504 /* Notes handling mechanism:
505 =========================
506 Generally, NOTES are saved before scheduling and restored after scheduling.
507 The scheduler distinguishes between two types of notes:
508
509 (1) LOOP_BEGIN, LOOP_END, SETJMP, EHREGION_BEG, EHREGION_END notes:
510 Before scheduling a region, a pointer to the note is added to the insn
511 that follows or precedes it. (This happens as part of the data dependence
512 computation). After scheduling an insn, the pointer contained in it is
513 used for regenerating the corresponding note (in reemit_notes).
514
515 (2) All other notes (e.g. INSN_DELETED): Before scheduling a block,
516 these notes are put in a list (in rm_other_notes() and
517 unlink_other_notes ()). After scheduling the block, these notes are
518 inserted at the beginning of the block (in schedule_block()). */
519
520 static rtx unlink_other_notes (rtx, rtx);
521 static void reemit_notes (rtx);
522
523 static rtx *ready_lastpos (struct ready_list *);
524 static void ready_add (struct ready_list *, rtx, bool);
525 static void ready_sort (struct ready_list *);
526 static rtx ready_remove_first (struct ready_list *);
527
528 static void queue_to_ready (struct ready_list *);
529 static int early_queue_to_ready (state_t, struct ready_list *);
530
531 static void debug_ready_list (struct ready_list *);
532
533 static void move_insn (rtx);
534
535 /* The following functions are used to implement multi-pass scheduling
536 on the first cycle. */
537 static rtx ready_element (struct ready_list *, int);
538 static rtx ready_remove (struct ready_list *, int);
539 static void ready_remove_insn (rtx);
540 static int max_issue (struct ready_list *, int *, int);
541
542 static rtx choose_ready (struct ready_list *);
543
544 static void fix_inter_tick (rtx, rtx);
545 static int fix_tick_ready (rtx);
546 static void change_queue_index (rtx, int);
547
548 /* The following functions are used to implement scheduling of data/control
549 speculative instructions. */
550
551 static void extend_h_i_d (void);
552 static void extend_ready (int);
553 static void extend_global (rtx);
554 static void extend_all (rtx);
555 static void init_h_i_d (rtx);
556 static void generate_recovery_code (rtx);
557 static void process_insn_forw_deps_be_in_spec (deps_list_t, rtx, ds_t);
558 static void begin_speculative_block (rtx);
559 static void add_to_speculative_block (rtx);
560 static dw_t dep_weak (ds_t);
561 static edge find_fallthru_edge (basic_block);
562 static void init_before_recovery (void);
563 static basic_block create_recovery_block (void);
564 static void create_check_block_twin (rtx, bool);
565 static void fix_recovery_deps (basic_block);
566 static void change_pattern (rtx, rtx);
567 static int speculate_insn (rtx, ds_t, rtx *);
568 static void dump_new_block_header (int, basic_block, rtx, rtx);
569 static void restore_bb_notes (basic_block);
570 static void extend_bb (void);
571 static void fix_jump_move (rtx);
572 static void move_block_after_check (rtx);
573 static void move_succs (VEC(edge,gc) **, basic_block);
574 static void sched_remove_insn (rtx);
575 static void clear_priorities (rtx, rtx_vec_t *);
576 static void calc_priorities (rtx_vec_t);
577 static void add_jump_dependencies (rtx, rtx);
578 #ifdef ENABLE_CHECKING
579 static int has_edge_p (VEC(edge,gc) *, int);
580 static void check_cfg (rtx, rtx);
581 static void check_sched_flags (void);
582 #endif
583
584 #endif /* INSN_SCHEDULING */
585 \f
586 /* Point to state used for the current scheduling pass. */
587 struct sched_info *current_sched_info;
588 \f
589 #ifndef INSN_SCHEDULING
590 void
591 schedule_insns (void)
592 {
593 }
594 #else
595
596 /* Working copy of frontend's sched_info variable. */
597 static struct sched_info current_sched_info_var;
598
599 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
600 so that insns independent of the last scheduled insn will be preferred
601 over dependent instructions. */
602
603 static rtx last_scheduled_insn;
604
605 /* Cached cost of the instruction.  Use the function below to get the cost
606    of the insn.  -1 here means that the field is not initialized.  */
607 #define INSN_COST(INSN) (h_i_d[INSN_UID (INSN)].cost)
608
609 /* Compute cost of executing INSN.
610 This is the number of cycles between instruction issue and
611 instruction results. */
612 HAIFA_INLINE int
613 insn_cost (rtx insn)
614 {
615 int cost = INSN_COST (insn);
616
617 if (cost < 0)
618 {
619 /* A USE insn, or something else we don't need to
620 understand. We can't pass these directly to
621 result_ready_cost or insn_default_latency because it will
622 trigger a fatal error for unrecognizable insns. */
623 if (recog_memoized (insn) < 0)
624 {
625 INSN_COST (insn) = 0;
626 return 0;
627 }
628 else
629 {
630 cost = insn_default_latency (insn);
631 if (cost < 0)
632 cost = 0;
633
634 INSN_COST (insn) = cost;
635 }
636 }
637
638 return cost;
639 }
640
641 /* Compute cost of dependence LINK.
642 This is the number of cycles between instruction issue and
643 instruction results. */
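/* For example (illustrative): an anti-dependence costs 0 cycles, while an
   output dependence between a producer with default latency 3 and a
   consumer with default latency 1 costs 3 - 1 = 2 cycles (never less
   than 1), before any targetm.sched.adjust_cost adjustment.  */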
644 int
645 dep_cost (dep_t link)
646 {
647 rtx used = DEP_CON (link);
648 int cost;
649
650 /* A USE insn should never require the value used to be computed.
651 This allows the computation of a function's result and parameter
652 values to overlap the return and call. */
653 if (recog_memoized (used) < 0)
654 cost = 0;
655 else
656 {
657 rtx insn = DEP_PRO (link);
658 enum reg_note dep_type = DEP_KIND (link);
659
660 cost = insn_cost (insn);
661
662 if (INSN_CODE (insn) >= 0)
663 {
664 if (dep_type == REG_DEP_ANTI)
665 cost = 0;
666 else if (dep_type == REG_DEP_OUTPUT)
667 {
668 cost = (insn_default_latency (insn)
669 - insn_default_latency (used));
670 if (cost <= 0)
671 cost = 1;
672 }
673 else if (bypass_p (insn))
674 cost = insn_latency (insn, used);
675 }
676
677 if (targetm.sched.adjust_cost != NULL)
678 {
679 /* This variable is used for backward compatibility with the
680 targets. */
681 rtx dep_cost_rtx_link = alloc_INSN_LIST (NULL_RTX, NULL_RTX);
682
683       /* Make it self-cycled, so that if someone tries to walk over this
684          incomplete list he/she will be caught in an endless loop.  */
685 XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
686
687 /* Targets use only REG_NOTE_KIND of the link. */
688 PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_KIND (link));
689
690 cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
691 insn, cost);
692
693 free_INSN_LIST_node (dep_cost_rtx_link);
694 }
695
696 if (cost < 0)
697 cost = 0;
698 }
699
700 return cost;
701 }
702
703 /* Return 'true' if DEP should be included in priority calculations. */
704 static bool
705 contributes_to_priority_p (dep_t dep)
706 {
707   /* Critical path is meaningful within block boundaries only.  */
708 if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
709 DEP_PRO (dep)))
710 return false;
711
712   /* If the flag COUNT_SPEC_IN_CRITICAL_PATH is set,
713      then speculative instructions will be less likely to be
714      scheduled.  That is because the priority of
715      their producers will increase, and, thus, the
716      producers will be more likely to be scheduled, thus
717      resolving the dependence.  */
718 if ((current_sched_info->flags & DO_SPECULATION)
719 && !(spec_info->flags & COUNT_SPEC_IN_CRITICAL_PATH)
720 && (DEP_STATUS (dep) & SPECULATIVE))
721 return false;
722
723 return true;
724 }
725
726 /* Compute the priority number for INSN. */
727 static int
728 priority (rtx insn)
729 {
730 dep_link_t link;
731
732 if (! INSN_P (insn))
733 return 0;
734
735 /* We should not be interested in priority of an already scheduled insn. */
736 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
737
738 if (!INSN_PRIORITY_KNOWN (insn))
739 {
740 int this_priority = 0;
741
742 if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
743     /* ??? We should set INSN_PRIORITY to insn_cost when an insn has
744 some forward deps but all of them are ignored by
745 contributes_to_priority hook. At the moment we set priority of
746 such insn to 0. */
747 this_priority = insn_cost (insn);
748 else
749 {
750 rtx prev_first, twin;
751 basic_block rec;
752
753       /* For recovery check instructions we calculate priority slightly
754          differently than for normal instructions.  Instead of walking
755 through INSN_FORW_DEPS (check) list, we walk through
756 INSN_FORW_DEPS list of each instruction in the corresponding
757 recovery block. */
758
759 rec = RECOVERY_BLOCK (insn);
760 if (!rec || rec == EXIT_BLOCK_PTR)
761 {
762 prev_first = PREV_INSN (insn);
763 twin = insn;
764 }
765 else
766 {
767 prev_first = NEXT_INSN (BB_HEAD (rec));
768 twin = PREV_INSN (BB_END (rec));
769 }
770
771 do
772 {
773 FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (twin))
774 {
775 rtx next;
776 int next_priority;
777 dep_t dep = DEP_LINK_DEP (link);
778
779 next = DEP_CON (dep);
780
781 if (BLOCK_FOR_INSN (next) != rec)
782 {
783 int cost;
784
785 if (!contributes_to_priority_p (dep))
786 continue;
787
788 if (twin == insn)
789 cost = dep_cost (dep);
790 else
791 {
792 struct _dep _dep1, *dep1 = &_dep1;
793
794 init_dep (dep1, insn, next, REG_DEP_ANTI);
795
796 cost = dep_cost (dep1);
797 }
798
799 next_priority = cost + priority (next);
800
801 if (next_priority > this_priority)
802 this_priority = next_priority;
803 }
804 }
805
806 twin = PREV_INSN (twin);
807 }
808 while (twin != prev_first);
809 }
810 INSN_PRIORITY (insn) = this_priority;
811 INSN_PRIORITY_STATUS (insn) = 1;
812 }
813
814 return INSN_PRIORITY (insn);
815 }
816 \f
817 /* Macros and functions for keeping the priority queue sorted, and
818 dealing with queuing and dequeuing of instructions. */
819
820 #define SCHED_SORT(READY, N_READY) \
821 do { if ((N_READY) == 2) \
822 swap_sort (READY, N_READY); \
823 else if ((N_READY) > 2) \
824 qsort (READY, N_READY, sizeof (rtx), rank_for_schedule); } \
825 while (0)
826
827 /* Returns a positive value if x is preferred; returns a negative value if
828 y is preferred. Should never return 0, since that will make the sort
829 unstable. */
830
831 static int
832 rank_for_schedule (const void *x, const void *y)
833 {
834 rtx tmp = *(const rtx *) y;
835 rtx tmp2 = *(const rtx *) x;
836 dep_link_t link1, link2;
837 int tmp_class, tmp2_class;
838 int val, priority_val, weight_val, info_val;
839
840   /* The insn in a schedule group should be issued first.  */
841 if (SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))
842 return SCHED_GROUP_P (tmp2) ? 1 : -1;
843
844 /* Make sure that priority of TMP and TMP2 are initialized. */
845 gcc_assert (INSN_PRIORITY_KNOWN (tmp) && INSN_PRIORITY_KNOWN (tmp2));
846
847 /* Prefer insn with higher priority. */
848 priority_val = INSN_PRIORITY (tmp2) - INSN_PRIORITY (tmp);
849
850 if (priority_val)
851 return priority_val;
852
853   /* Prefer the speculative insn with greater dependency weakness.  */
854 if (spec_info)
855 {
856 ds_t ds1, ds2;
857 dw_t dw1, dw2;
858 int dw;
859
860 ds1 = TODO_SPEC (tmp) & SPECULATIVE;
861 if (ds1)
862 dw1 = dep_weak (ds1);
863 else
864 dw1 = NO_DEP_WEAK;
865
866 ds2 = TODO_SPEC (tmp2) & SPECULATIVE;
867 if (ds2)
868 dw2 = dep_weak (ds2);
869 else
870 dw2 = NO_DEP_WEAK;
871
872 dw = dw2 - dw1;
873 if (dw > (NO_DEP_WEAK / 8) || dw < -(NO_DEP_WEAK / 8))
874 return dw;
875 }
876
877   /* Prefer an insn with a smaller contribution to register pressure.  */
878 if (!reload_completed &&
879 (weight_val = INSN_REG_WEIGHT (tmp) - INSN_REG_WEIGHT (tmp2)))
880 return weight_val;
881
882 info_val = (*current_sched_info->rank) (tmp, tmp2);
883 if (info_val)
884 return info_val;
885
886 /* Compare insns based on their relation to the last-scheduled-insn. */
887 if (INSN_P (last_scheduled_insn))
888 {
889 /* Classify the instructions into three classes:
890          1) Data dependent on last scheduled insn.
891 2) Anti/Output dependent on last scheduled insn.
892 3) Independent of last scheduled insn, or has latency of one.
893 Choose the insn from the highest numbered class if different. */
894 link1
895 = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
896 tmp);
897
898 if (link1 == NULL || dep_cost (DEP_LINK_DEP (link1)) == 1)
899 tmp_class = 3;
900 else if (/* Data dependence. */
901 DEP_LINK_KIND (link1) == REG_DEP_TRUE)
902 tmp_class = 1;
903 else
904 tmp_class = 2;
905
906 link2
907 = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
908 tmp2);
909
910 if (link2 == NULL || dep_cost (DEP_LINK_DEP (link2)) == 1)
911 tmp2_class = 3;
912 else if (/* Data dependence. */
913 DEP_LINK_KIND (link2) == REG_DEP_TRUE)
914 tmp2_class = 1;
915 else
916 tmp2_class = 2;
917
918 if ((val = tmp2_class - tmp_class))
919 return val;
920 }
921
922 /* Prefer the insn which has more later insns that depend on it.
923 This gives the scheduler more freedom when scheduling later
924 instructions at the expense of added register pressure. */
925
926 link1 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp));
927 link2 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp2));
928
929 while (link1 != NULL && link2 != NULL)
930 {
931 link1 = DEP_LINK_NEXT (link1);
932 link2 = DEP_LINK_NEXT (link2);
933 }
934
935 if (link1 != NULL && link2 == NULL)
936 /* TMP (Y) has more insns that depend on it. */
937 return -1;
938 if (link1 == NULL && link2 != NULL)
939 /* TMP2 (X) has more insns that depend on it. */
940 return 1;
941
942 /* If insns are equally good, sort by INSN_LUID (original insn order),
943 so that we make the sort stable. This minimizes instruction movement,
944 thus minimizing sched's effect on debugging and cross-jumping. */
945 return INSN_LUID (tmp) - INSN_LUID (tmp2);
946 }
947
948 /* Resort the array A in which only element at index N may be out of order. */
949
950 HAIFA_INLINE static void
951 swap_sort (rtx *a, int n)
952 {
953 rtx insn = a[n - 1];
954 int i = n - 2;
955
956 while (i >= 0 && rank_for_schedule (a + i, &insn) >= 0)
957 {
958 a[i + 1] = a[i];
959 i -= 1;
960 }
961 a[i + 1] = insn;
962 }
963
964 /* Add INSN to the insn queue so that it can be executed at least
965 N_CYCLES after the currently executing insn. Preserve insns
966 chain for debugging purposes. */
967
968 HAIFA_INLINE static void
969 queue_insn (rtx insn, int n_cycles)
970 {
971 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
972 rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);
973
974 gcc_assert (n_cycles <= max_insn_queue_index);
975
976 insn_queue[next_q] = link;
977 q_size += 1;
978
979 if (sched_verbose >= 2)
980 {
981 fprintf (sched_dump, ";;\t\tReady-->Q: insn %s: ",
982 (*current_sched_info->print_insn) (insn, 0));
983
984 fprintf (sched_dump, "queued for %d cycles.\n", n_cycles);
985 }
986
987 QUEUE_INDEX (insn) = next_q;
988 }
989
990 /* Remove INSN from queue. */
991 static void
992 queue_remove (rtx insn)
993 {
994 gcc_assert (QUEUE_INDEX (insn) >= 0);
995 remove_free_INSN_LIST_elem (insn, &insn_queue[QUEUE_INDEX (insn)]);
996 q_size--;
997 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
998 }
999
1000 /* Return a pointer to the bottom of the ready list, i.e. the insn
1001 with the lowest priority. */
1002
1003 HAIFA_INLINE static rtx *
1004 ready_lastpos (struct ready_list *ready)
1005 {
1006 gcc_assert (ready->n_ready >= 1);
1007 return ready->vec + ready->first - ready->n_ready + 1;
1008 }
1009
1010 /* Add an element INSN to the ready list so that it ends up with the
1011 lowest/highest priority depending on FIRST_P. */
1012
1013 HAIFA_INLINE static void
1014 ready_add (struct ready_list *ready, rtx insn, bool first_p)
1015 {
1016 if (!first_p)
1017 {
1018 if (ready->first == ready->n_ready)
1019 {
1020 memmove (ready->vec + ready->veclen - ready->n_ready,
1021 ready_lastpos (ready),
1022 ready->n_ready * sizeof (rtx));
1023 ready->first = ready->veclen - 1;
1024 }
1025 ready->vec[ready->first - ready->n_ready] = insn;
1026 }
1027 else
1028 {
1029 if (ready->first == ready->veclen - 1)
1030 {
1031 if (ready->n_ready)
1032 /* ready_lastpos() fails when called with (ready->n_ready == 0). */
1033 memmove (ready->vec + ready->veclen - ready->n_ready - 1,
1034 ready_lastpos (ready),
1035 ready->n_ready * sizeof (rtx));
1036 ready->first = ready->veclen - 2;
1037 }
1038 ready->vec[++(ready->first)] = insn;
1039 }
1040
1041 ready->n_ready++;
1042
1043 gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
1044 QUEUE_INDEX (insn) = QUEUE_READY;
1045 }
1046
1047 /* Remove the element with the highest priority from the ready list and
1048 return it. */
1049
1050 HAIFA_INLINE static rtx
1051 ready_remove_first (struct ready_list *ready)
1052 {
1053 rtx t;
1054
1055 gcc_assert (ready->n_ready);
1056 t = ready->vec[ready->first--];
1057 ready->n_ready--;
1058 /* If the queue becomes empty, reset it. */
1059 if (ready->n_ready == 0)
1060 ready->first = ready->veclen - 1;
1061
1062 gcc_assert (QUEUE_INDEX (t) == QUEUE_READY);
1063 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1064
1065 return t;
1066 }
1067
1068 /* The following code implements multi-pass scheduling for the first
1069    cycle.  In other words, we will try to choose the ready insn which
1070    permits the maximum number of insns to start on the same cycle.  */
1071
1072 /* Return a pointer to the element INDEX from the ready list.  INDEX for
1073    the insn with the highest priority is 0, and the lowest priority has
1074    index N_READY - 1.  */
1075
1076 HAIFA_INLINE static rtx
1077 ready_element (struct ready_list *ready, int index)
1078 {
1079 gcc_assert (ready->n_ready && index < ready->n_ready);
1080
1081 return ready->vec[ready->first - index];
1082 }
1083
1084 /* Remove the element INDEX from the ready list and return it. INDEX
1085 for insn with the highest priority is 0, and the lowest priority
1086 has N_READY - 1. */
1087
1088 HAIFA_INLINE static rtx
1089 ready_remove (struct ready_list *ready, int index)
1090 {
1091 rtx t;
1092 int i;
1093
1094 if (index == 0)
1095 return ready_remove_first (ready);
1096 gcc_assert (ready->n_ready && index < ready->n_ready);
1097 t = ready->vec[ready->first - index];
1098 ready->n_ready--;
1099 for (i = index; i < ready->n_ready; i++)
1100 ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
1101 QUEUE_INDEX (t) = QUEUE_NOWHERE;
1102 return t;
1103 }
1104
1105 /* Remove INSN from the ready list. */
1106 static void
1107 ready_remove_insn (rtx insn)
1108 {
1109 int i;
1110
1111 for (i = 0; i < readyp->n_ready; i++)
1112 if (ready_element (readyp, i) == insn)
1113 {
1114 ready_remove (readyp, i);
1115 return;
1116 }
1117 gcc_unreachable ();
1118 }
1119
1120 /* Sort the ready list READY by ascending priority, using the SCHED_SORT
1121 macro. */
1122
1123 HAIFA_INLINE static void
1124 ready_sort (struct ready_list *ready)
1125 {
1126 rtx *first = ready_lastpos (ready);
1127 SCHED_SORT (first, ready->n_ready);
1128 }
1129
1130 /* PREV is an insn that is ready to execute. Adjust its priority if that
1131 will help shorten or lengthen register lifetimes as appropriate. Also
1132    provide a hook for the target to tweak itself.  */
1133
1134 HAIFA_INLINE static void
1135 adjust_priority (rtx prev)
1136 {
1137 /* ??? There used to be code here to try and estimate how an insn
1138 affected register lifetimes, but it did it by looking at REG_DEAD
1139 notes, which we removed in schedule_region. Nor did it try to
1140 take into account register pressure or anything useful like that.
1141
1142 Revisit when we have a machine model to work with and not before. */
1143
1144 if (targetm.sched.adjust_priority)
1145 INSN_PRIORITY (prev) =
1146 targetm.sched.adjust_priority (prev, INSN_PRIORITY (prev));
1147 }
1148
1149 /* Advance time by one cycle.  */
1150 HAIFA_INLINE static void
1151 advance_one_cycle (void)
1152 {
1153 if (targetm.sched.dfa_pre_cycle_insn)
1154 state_transition (curr_state,
1155 targetm.sched.dfa_pre_cycle_insn ());
1156
1157 state_transition (curr_state, NULL);
1158
1159 if (targetm.sched.dfa_post_cycle_insn)
1160 state_transition (curr_state,
1161 targetm.sched.dfa_post_cycle_insn ());
1162 }
1163
1164 /* Clock at which the previous instruction was issued. */
1165 static int last_clock_var;
1166
1167 /* INSN is the "currently executing insn".  Launch each insn which was
1168    waiting on INSN, i.e. the insns in its forward dependence list.
1169    The function returns the necessary cycle
1170    advance after issuing the insn (it is not
1171    zero for insns in a schedule group).  */
1172
1173 static int
1174 schedule_insn (rtx insn)
1175 {
1176 dep_link_t link;
1177 int advance = 0;
1178
1179 if (sched_verbose >= 1)
1180 {
1181 char buf[2048];
1182
1183 print_insn (buf, insn, 0);
1184 buf[40] = 0;
1185 fprintf (sched_dump, ";;\t%3i--> %-40s:", clock_var, buf);
1186
1187 if (recog_memoized (insn) < 0)
1188 fprintf (sched_dump, "nothing");
1189 else
1190 print_reservation (sched_dump, insn);
1191 fputc ('\n', sched_dump);
1192 }
1193
1194 /* Scheduling instruction should have all its dependencies resolved and
1195 should have been removed from the ready list. */
1196 gcc_assert (INSN_DEP_COUNT (insn) == 0
1197 && deps_list_empty_p (INSN_BACK_DEPS (insn)));
1198 free_deps_list (INSN_BACK_DEPS (insn));
1199
1200 /* Now we can free INSN_RESOLVED_BACK_DEPS list. */
1201 delete_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
1202
1203 gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
1204 QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
1205
1206 gcc_assert (INSN_TICK (insn) >= MIN_TICK);
1207 if (INSN_TICK (insn) > clock_var)
1208 /* INSN has been prematurely moved from the queue to the ready list.
1209 This is possible only if following flag is set. */
1210 gcc_assert (flag_sched_stalled_insns);
1211
1212 /* ??? Probably, if INSN is scheduled prematurely, we should leave
1213 INSN_TICK untouched. This is a machine-dependent issue, actually. */
1214 INSN_TICK (insn) = clock_var;
1215
1216 /* Update dependent instructions. */
1217 FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
1218 {
1219 rtx next = DEP_LINK_CON (link);
1220
1221 /* Resolve the dependence between INSN and NEXT. */
1222
1223 INSN_DEP_COUNT (next)--;
1224
1225 move_dep_link (DEP_NODE_BACK (DEP_LINK_NODE (link)),
1226 INSN_RESOLVED_BACK_DEPS (next));
1227
1228 gcc_assert ((INSN_DEP_COUNT (next) == 0)
1229 == deps_list_empty_p (INSN_BACK_DEPS (next)));
1230
1231 if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
1232 {
1233 int effective_cost;
1234
1235 effective_cost = try_ready (next);
1236
1237 if (effective_cost >= 0
1238 && SCHED_GROUP_P (next)
1239 && advance < effective_cost)
1240 advance = effective_cost;
1241 }
1242 else
1243 /* Check always has only one forward dependence (to the first insn in
1244 the recovery block), therefore, this will be executed only once. */
1245 {
1246 gcc_assert (DEP_LINK_NEXT (link) == NULL);
1247 fix_recovery_deps (RECOVERY_BLOCK (insn));
1248 }
1249 }
1250
1251 /* Annotate the instruction with issue information -- TImode
1252 indicates that the instruction is expected not to be able
1253 to issue on the same cycle as the previous insn. A machine
1254 may use this information to decide how the instruction should
1255 be aligned. */
1256 if (issue_rate > 1
1257 && GET_CODE (PATTERN (insn)) != USE
1258 && GET_CODE (PATTERN (insn)) != CLOBBER)
1259 {
1260 if (reload_completed)
1261 PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);
1262 last_clock_var = clock_var;
1263 }
1264
1265 return advance;
1266 }
1267
1268 /* Functions for handling of notes. */
1269
1270 /* Delete notes beginning with INSN and put them in the chain
1271 of notes ended by NOTE_LIST.
1272 Returns the insn following the notes. */
1273
1274 static rtx
1275 unlink_other_notes (rtx insn, rtx tail)
1276 {
1277 rtx prev = PREV_INSN (insn);
1278
1279 while (insn != tail && NOTE_NOT_BB_P (insn))
1280 {
1281 rtx next = NEXT_INSN (insn);
1282 basic_block bb = BLOCK_FOR_INSN (insn);
1283
1284 /* Delete the note from its current position. */
1285 if (prev)
1286 NEXT_INSN (prev) = next;
1287 if (next)
1288 PREV_INSN (next) = prev;
1289
1290 if (bb)
1291 {
1292 /* Basic block can begin with either LABEL or
1293 NOTE_INSN_BASIC_BLOCK. */
1294 gcc_assert (BB_HEAD (bb) != insn);
1295
1296 /* Check if we are removing last insn in the BB. */
1297 if (BB_END (bb) == insn)
1298 BB_END (bb) = prev;
1299 }
1300
1301 /* See sched_analyze to see how these are handled. */
1302 if (NOTE_KIND (insn) != NOTE_INSN_EH_REGION_BEG
1303 && NOTE_KIND (insn) != NOTE_INSN_EH_REGION_END)
1304 {
1305 /* Insert the note at the end of the notes list. */
1306 PREV_INSN (insn) = note_list;
1307 if (note_list)
1308 NEXT_INSN (note_list) = insn;
1309 note_list = insn;
1310 }
1311
1312 insn = next;
1313 }
1314 return insn;
1315 }
1316
1317 /* Return the head and tail pointers of ebb starting at BEG and ending
1318 at END. */
1319
1320 void
1321 get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
1322 {
1323 rtx beg_head = BB_HEAD (beg);
1324 rtx beg_tail = BB_END (beg);
1325 rtx end_head = BB_HEAD (end);
1326 rtx end_tail = BB_END (end);
1327
1328 /* Don't include any notes or labels at the beginning of the BEG
1329    basic block, or notes at the end of the END basic block.  */
1330
1331 if (LABEL_P (beg_head))
1332 beg_head = NEXT_INSN (beg_head);
1333
1334 while (beg_head != beg_tail)
1335 if (NOTE_P (beg_head))
1336 beg_head = NEXT_INSN (beg_head);
1337 else
1338 break;
1339
1340 *headp = beg_head;
1341
1342 if (beg == end)
1343 end_head = beg_head;
1344 else if (LABEL_P (end_head))
1345 end_head = NEXT_INSN (end_head);
1346
1347 while (end_head != end_tail)
1348 if (NOTE_P (end_tail))
1349 end_tail = PREV_INSN (end_tail);
1350 else
1351 break;
1352
1353 *tailp = end_tail;
1354 }
1355
1356 /* Return nonzero if there are no real insns in the range [ HEAD, TAIL ]. */
1357
1358 int
1359 no_real_insns_p (rtx head, rtx tail)
1360 {
1361 while (head != NEXT_INSN (tail))
1362 {
1363 if (!NOTE_P (head) && !LABEL_P (head))
1364 return 0;
1365 head = NEXT_INSN (head);
1366 }
1367 return 1;
1368 }
1369
1370 /* Delete notes between HEAD and TAIL and put them in the chain
1371 of notes ended by NOTE_LIST. */
1372
1373 void
1374 rm_other_notes (rtx head, rtx tail)
1375 {
1376 rtx next_tail;
1377 rtx insn;
1378
1379 note_list = 0;
1380 if (head == tail && (! INSN_P (head)))
1381 return;
1382
1383 next_tail = NEXT_INSN (tail);
1384 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1385 {
1386 rtx prev;
1387
1388 /* Farm out notes, and maybe save them in NOTE_LIST.
1389 This is needed to keep the debugger from
1390 getting completely deranged. */
1391 if (NOTE_NOT_BB_P (insn))
1392 {
1393 prev = insn;
1394
1395 insn = unlink_other_notes (insn, next_tail);
1396
1397 gcc_assert (prev != tail && prev != head && insn != next_tail);
1398 }
1399 }
1400 }
1401
1402 /* Functions for computation of registers live/usage info. */
1403
1404 /* This function looks for a new register being defined.
1405 If the destination register is already used by the source,
1406 a new register is not needed. */
1407
1408 static int
1409 find_set_reg_weight (rtx x)
1410 {
1411 if (GET_CODE (x) == CLOBBER
1412 && register_operand (SET_DEST (x), VOIDmode))
1413 return 1;
1414 if (GET_CODE (x) == SET
1415 && register_operand (SET_DEST (x), VOIDmode))
1416 {
1417 if (REG_P (SET_DEST (x)))
1418 {
1419 if (!reg_mentioned_p (SET_DEST (x), SET_SRC (x)))
1420 return 1;
1421 else
1422 return 0;
1423 }
1424 return 1;
1425 }
1426 return 0;
1427 }
1428
1429 /* Calculate INSN_REG_WEIGHT for all insns of a block. */
1430
1431 static void
1432 find_insn_reg_weight (basic_block bb)
1433 {
1434 rtx insn, next_tail, head, tail;
1435
1436 get_ebb_head_tail (bb, bb, &head, &tail);
1437 next_tail = NEXT_INSN (tail);
1438
1439 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
1440 find_insn_reg_weight1 (insn);
1441 }
1442
1443 /* Calculate INSN_REG_WEIGHT for a single instruction.
1444    Separated from find_insn_reg_weight because of the need
1445    to initialize new instructions in generate_recovery_code.  */
1446 static void
1447 find_insn_reg_weight1 (rtx insn)
1448 {
1449 int reg_weight = 0;
1450 rtx x;
1451
1452 /* Handle register life information. */
1453 if (! INSN_P (insn))
1454 return;
1455
1456 /* Increment weight for each register born here. */
1457 x = PATTERN (insn);
1458 reg_weight += find_set_reg_weight (x);
1459 if (GET_CODE (x) == PARALLEL)
1460 {
1461 int j;
1462 for (j = XVECLEN (x, 0) - 1; j >= 0; j--)
1463 {
1464 x = XVECEXP (PATTERN (insn), 0, j);
1465 reg_weight += find_set_reg_weight (x);
1466 }
1467 }
1468 /* Decrement weight for each register that dies here. */
1469 for (x = REG_NOTES (insn); x; x = XEXP (x, 1))
1470 {
1471 if (REG_NOTE_KIND (x) == REG_DEAD
1472 || REG_NOTE_KIND (x) == REG_UNUSED)
1473 reg_weight--;
1474 }
1475
1476 INSN_REG_WEIGHT (insn) = reg_weight;
1477 }
1478
1479 /* Move insns that became ready to fire from queue to ready list. */
1480
1481 static void
1482 queue_to_ready (struct ready_list *ready)
1483 {
1484 rtx insn;
1485 rtx link;
1486
1487 q_ptr = NEXT_Q (q_ptr);
1488
1489 /* Add all pending insns that can be scheduled without stalls to the
1490 ready list. */
1491 for (link = insn_queue[q_ptr]; link; link = XEXP (link, 1))
1492 {
1493 insn = XEXP (link, 0);
1494 q_size -= 1;
1495
1496 if (sched_verbose >= 2)
1497 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
1498 (*current_sched_info->print_insn) (insn, 0));
1499
1500 /* If the ready list is full, delay the insn for 1 cycle.
1501 See the comment in schedule_block for the rationale. */
1502 if (!reload_completed
1503 && ready->n_ready > MAX_SCHED_READY_INSNS
1504 && !SCHED_GROUP_P (insn))
1505 {
1506 if (sched_verbose >= 2)
1507 fprintf (sched_dump, "requeued because ready full\n");
1508 queue_insn (insn, 1);
1509 }
1510 else
1511 {
1512 ready_add (ready, insn, false);
1513 if (sched_verbose >= 2)
1514 fprintf (sched_dump, "moving to ready without stalls\n");
1515 }
1516 }
1517 free_INSN_LIST_list (&insn_queue[q_ptr]);
1518
1519 /* If there are no ready insns, stall until one is ready and add all
1520 of the pending insns at that point to the ready list. */
1521 if (ready->n_ready == 0)
1522 {
1523 int stalls;
1524
1525 for (stalls = 1; stalls <= max_insn_queue_index; stalls++)
1526 {
1527 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
1528 {
1529 for (; link; link = XEXP (link, 1))
1530 {
1531 insn = XEXP (link, 0);
1532 q_size -= 1;
1533
1534 if (sched_verbose >= 2)
1535 fprintf (sched_dump, ";;\t\tQ-->Ready: insn %s: ",
1536 (*current_sched_info->print_insn) (insn, 0));
1537
1538 ready_add (ready, insn, false);
1539 if (sched_verbose >= 2)
1540 fprintf (sched_dump, "moving to ready with %d stalls\n", stalls);
1541 }
1542 free_INSN_LIST_list (&insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]);
1543
1544 advance_one_cycle ();
1545
1546 break;
1547 }
1548
1549 advance_one_cycle ();
1550 }
1551
1552 q_ptr = NEXT_Q_AFTER (q_ptr, stalls);
1553 clock_var += stalls;
1554 }
1555 }
1556
1557 /* Used by early_queue_to_ready. Determines whether it is "ok" to
1558 prematurely move INSN from the queue to the ready list. Currently,
1559 if a target defines the hook 'is_costly_dependence', this function
1560 uses the hook to check whether there exist any dependences which are
1561 considered costly by the target, between INSN and other insns that
1562 have already been scheduled. Dependences are checked up to Y cycles
1563    back, with default Y=1; the flag -fsched-stalled-insns-dep=Y allows
1564    controlling this value.
1565    (Other considerations could be taken into account instead (or in
1566    addition) depending on user flags and target hooks.)  */
1567
1568 static bool
1569 ok_for_early_queue_removal (rtx insn)
1570 {
1571 int n_cycles;
1572 rtx prev_insn = last_scheduled_insn;
1573
1574 if (targetm.sched.is_costly_dependence)
1575 {
1576 for (n_cycles = flag_sched_stalled_insns_dep; n_cycles; n_cycles--)
1577 {
1578 for ( ; prev_insn; prev_insn = PREV_INSN (prev_insn))
1579 {
1580 int cost;
1581
1582 if (!NOTE_P (prev_insn))
1583 {
1584 dep_link_t dep_link;
1585
1586 dep_link = (find_link_by_con_in_deps_list
1587 (INSN_FORW_DEPS (prev_insn), insn));
1588
1589 if (dep_link)
1590 {
1591 dep_t dep = DEP_LINK_DEP (dep_link);
1592
1593 cost = dep_cost (dep);
1594
1595 if (targetm.sched.is_costly_dependence (dep, cost,
1596 flag_sched_stalled_insns_dep - n_cycles))
1597 return false;
1598 }
1599 }
1600
1601 if (GET_MODE (prev_insn) == TImode) /* end of dispatch group */
1602 break;
1603 }
1604
1605 if (!prev_insn)
1606 break;
1607 prev_insn = PREV_INSN (prev_insn);
1608 }
1609 }
1610
1611 return true;
1612 }
1613
1614
1615 /* Remove insns from the queue, before they become "ready" with respect
1616 to FU latency considerations. */
1617
1618 static int
1619 early_queue_to_ready (state_t state, struct ready_list *ready)
1620 {
1621 rtx insn;
1622 rtx link;
1623 rtx next_link;
1624 rtx prev_link;
1625 bool move_to_ready;
1626 int cost;
1627 state_t temp_state = alloca (dfa_state_size);
1628 int stalls;
1629 int insns_removed = 0;
1630
1631 /*
1632 Flag '-fsched-stalled-insns=X' determines the aggressiveness of this
1633 function:
1634
1635 X == 0: There is no limit on how many queued insns can be removed
1636 prematurely. (flag_sched_stalled_insns = -1).
1637
1638 X >= 1: Only X queued insns can be removed prematurely in each
1639 invocation. (flag_sched_stalled_insns = X).
1640
1641 Otherwise: Early queue removal is disabled.
1642 (flag_sched_stalled_insns = 0)
1643 */
1644
1645 if (! flag_sched_stalled_insns)
1646 return 0;
1647
1648 for (stalls = 0; stalls <= max_insn_queue_index; stalls++)
1649 {
1650 if ((link = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
1651 {
1652 if (sched_verbose > 6)
1653 fprintf (sched_dump, ";; look at index %d + %d\n", q_ptr, stalls);
1654
1655 prev_link = 0;
1656 while (link)
1657 {
1658 next_link = XEXP (link, 1);
1659 insn = XEXP (link, 0);
1660 if (insn && sched_verbose > 6)
1661 print_rtl_single (sched_dump, insn);
1662
1663 memcpy (temp_state, state, dfa_state_size);
1664 if (recog_memoized (insn) < 0)
1665             /* Use a non-negative cost to indicate that the insn is not
1666                ready, to avoid an infinite Q->R->Q->R... cycle.  */
1667 cost = 0;
1668 else
1669 cost = state_transition (temp_state, insn);
1670
1671 if (sched_verbose >= 6)
1672 fprintf (sched_dump, "transition cost = %d\n", cost);
1673
1674 move_to_ready = false;
1675 if (cost < 0)
1676 {
1677 move_to_ready = ok_for_early_queue_removal (insn);
1678 if (move_to_ready == true)
1679 {
1680 /* move from Q to R */
1681 q_size -= 1;
1682 ready_add (ready, insn, false);
1683
1684 if (prev_link)
1685 XEXP (prev_link, 1) = next_link;
1686 else
1687 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = next_link;
1688
1689 free_INSN_LIST_node (link);
1690
1691 if (sched_verbose >= 2)
1692 fprintf (sched_dump, ";;\t\tEarly Q-->Ready: insn %s\n",
1693 (*current_sched_info->print_insn) (insn, 0));
1694
1695 insns_removed++;
1696 if (insns_removed == flag_sched_stalled_insns)
1697 /* Remove no more than flag_sched_stalled_insns insns
1698 from Q at a time. */
1699 return insns_removed;
1700 }
1701 }
1702
1703 if (move_to_ready == false)
1704 prev_link = link;
1705
1706 link = next_link;
1707 } /* while link */
1708 } /* if link */
1709
1710 } /* for stalls.. */
1711
1712 return insns_removed;
1713 }
1714
1715
1716 /* Print the ready list for debugging purposes. Callable from debugger. */
1717
1718 static void
1719 debug_ready_list (struct ready_list *ready)
1720 {
1721 rtx *p;
1722 int i;
1723
1724 if (ready->n_ready == 0)
1725 {
1726 fprintf (sched_dump, "\n");
1727 return;
1728 }
1729
1730 p = ready_lastpos (ready);
1731 for (i = 0; i < ready->n_ready; i++)
1732 fprintf (sched_dump, " %s", (*current_sched_info->print_insn) (p[i], 0));
1733 fprintf (sched_dump, "\n");
1734 }
1735
1736 /* Search INSN for REG_SAVE_NOTE note pairs for
1737 NOTE_INSN_EHREGION_{BEG,END}; and convert them back into
1738    NOTEs.  The REG_SAVE_NOTE note following the first one contains the
1739 saved value for NOTE_BLOCK_NUMBER which is useful for
1740 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. */
1741
1742 static void
1743 reemit_notes (rtx insn)
1744 {
1745 rtx note, last = insn;
1746
1747 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
1748 {
1749 if (REG_NOTE_KIND (note) == REG_SAVE_NOTE)
1750 {
1751 enum insn_note note_type = INTVAL (XEXP (note, 0));
1752
1753 last = emit_note_before (note_type, last);
1754 remove_note (insn, note);
1755 }
1756 }
1757 }
1758
1759 /* Move INSN. Reemit notes if needed. Update CFG, if needed. */
1760 static void
1761 move_insn (rtx insn)
1762 {
1763 rtx last = last_scheduled_insn;
1764
1765 if (PREV_INSN (insn) != last)
1766 {
1767 basic_block bb;
1768 rtx note;
1769 int jump_p = 0;
1770
1771 bb = BLOCK_FOR_INSN (insn);
1772
1773 /* BB_HEAD is either LABEL or NOTE. */
1774 gcc_assert (BB_HEAD (bb) != insn);
1775
1776 if (BB_END (bb) == insn)
1777 /* If this is last instruction in BB, move end marker one
1778 instruction up. */
1779 {
1780 /* Jumps are always placed at the end of basic block. */
1781 jump_p = control_flow_insn_p (insn);
1782
1783 gcc_assert (!jump_p
1784 || ((current_sched_info->flags & SCHED_RGN)
1785 && IS_SPECULATION_BRANCHY_CHECK_P (insn))
1786 || (current_sched_info->flags & SCHED_EBB));
1787
1788 gcc_assert (BLOCK_FOR_INSN (PREV_INSN (insn)) == bb);
1789
1790 BB_END (bb) = PREV_INSN (insn);
1791 }
1792
1793 gcc_assert (BB_END (bb) != last);
1794
1795 if (jump_p)
1796 /* We move the block note along with jump. */
1797 {
1798 /* NT is needed for assertion below. */
1799 rtx nt = current_sched_info->next_tail;
1800
1801 note = NEXT_INSN (insn);
1802 while (NOTE_NOT_BB_P (note) && note != nt)
1803 note = NEXT_INSN (note);
1804
1805 if (note != nt
1806 && (LABEL_P (note)
1807 || BARRIER_P (note)))
1808 note = NEXT_INSN (note);
1809
1810 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
1811 }
1812 else
1813 note = insn;
1814
1815 NEXT_INSN (PREV_INSN (insn)) = NEXT_INSN (note);
1816 PREV_INSN (NEXT_INSN (note)) = PREV_INSN (insn);
1817
1818 NEXT_INSN (note) = NEXT_INSN (last);
1819 PREV_INSN (NEXT_INSN (last)) = note;
1820
1821 NEXT_INSN (last) = insn;
1822 PREV_INSN (insn) = last;
1823
1824 bb = BLOCK_FOR_INSN (last);
1825
1826 if (jump_p)
1827 {
1828 fix_jump_move (insn);
1829
1830 if (BLOCK_FOR_INSN (insn) != bb)
1831 move_block_after_check (insn);
1832
1833 gcc_assert (BB_END (bb) == last);
1834 }
1835
1836 set_block_for_insn (insn, bb);
1837 df_insn_change_bb (insn);
1838
1839 /* Update BB_END, if needed. */
1840 if (BB_END (bb) == last)
1841 BB_END (bb) = insn;
1842 }
1843
1844 reemit_notes (insn);
1845
1846 SCHED_GROUP_P (insn) = 0;
1847 }
1848
1849 /* The following structure describes an entry in the stack of choices.  */
1850 struct choice_entry
1851 {
1852 /* Ordinal number of the issued insn in the ready queue. */
1853 int index;
1854 /* The number of remaining insns whose issue we should still try.  */
1855 int rest;
1856 /* The number of issued essential insns. */
1857 int n;
1858 /* State after issuing the insn. */
1859 state_t state;
1860 };
1861
1862 /* The following array is used to implement a stack of choices used in
1863 function max_issue. */
1864 static struct choice_entry *choice_stack;
1865
1866 /* The following variable is the number of essential insns issued on
1867    the current cycle.  An insn is essential if it changes the
1868    processor's state.  */
1869 static int cycle_issued_insns;
1870
1871 /* The following variable is the maximal number of tries of issuing
1872    insns for the first cycle multipass insn scheduling.  We define
1873    this value as constant*(DFA_LOOKAHEAD**ISSUE_RATE).  We would not
1874    need this constraint if all real insns (with non-negative codes)
1875    had reservations, because in that case the algorithm complexity is
1876    O(DFA_LOOKAHEAD**ISSUE_RATE).  Unfortunately, the dfa descriptions
1877    might be incomplete and such insns might occur.  For such
1878    descriptions, the complexity of the algorithm (without the constraint)
1879    could reach DFA_LOOKAHEAD ** N, where N is the queue length.  */
1880 static int max_lookahead_tries;
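/* For illustration only (the numbers below are examples, not requirements):
   with the constant of 100 used in choose_ready (), a lookahead of 4 and an
   issue rate of 2, max_lookahead_tries would be 100 * 4 * 4 == 1600, i.e.
   at most roughly 1600 insn issue attempts (DFA state transitions) are made
   per call to max_issue ().  */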
1881
1882 /* The following value is the value of the hook
1883    `first_cycle_multipass_dfa_lookahead' at the last call of
1884    `max_issue'.  */
1885 static int cached_first_cycle_multipass_dfa_lookahead = 0;
1886
1887 /* The following value is the value of `issue_rate' at the last call of
1888    `sched_init'.  */
1889 static int cached_issue_rate = 0;
1890
1891 /* The following function returns the maximal (or close to maximal)
1892    number of insns which can be issued on the same cycle, one of which
1893    is the insn with the best rank (the first insn in READY).  To find
1894    this number, the function tries different samples of ready insns.
1895    READY is the current queue `ready'.  The global array READY_TRY
1896    reflects what insns are already issued in this try.  MAX_POINTS is
1897    the sum of points of all instructions in READY.  The function stops
1898    immediately if it reaches a solution in which all instructions can
1899    be issued.  INDEX will contain the index of the best insn in READY.
1900    The function is used only for first cycle multipass scheduling.  */
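/* Sketch of the search, as can be read from the code below: CHOICE_STACK is
   used as an explicit depth-first search stack.  Each entry remembers which
   ready insn was issued (INDEX), how many more alternatives may still be
   tried at that depth (REST), the issue points collected so far (N) and the
   DFA state after the issue.  When state_transition () accepts an insn we
   push a new entry and restart the scan of READY; when no alternative is
   left at a level we pop an entry and restore CURR_STATE from it.  Only
   solutions that include the first ready insn update BEST, and the deepest
   such stack position reached is the returned value.  The whole search is
   bounded by MAX_LOOKAHEAD_TRIES.  */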
1901 static int
1902 max_issue (struct ready_list *ready, int *index, int max_points)
1903 {
1904 int n, i, all, n_ready, best, delay, tries_num, points = -1;
1905 struct choice_entry *top;
1906 rtx insn;
1907
1908 best = 0;
1909 memcpy (choice_stack->state, curr_state, dfa_state_size);
1910 top = choice_stack;
1911 top->rest = cached_first_cycle_multipass_dfa_lookahead;
1912 top->n = 0;
1913 n_ready = ready->n_ready;
1914 for (all = i = 0; i < n_ready; i++)
1915 if (!ready_try [i])
1916 all++;
1917 i = 0;
1918 tries_num = 0;
1919 for (;;)
1920 {
1921 if (top->rest == 0 || i >= n_ready)
1922 {
1923 if (top == choice_stack)
1924 break;
1925 if (best < top - choice_stack && ready_try [0])
1926 {
1927 best = top - choice_stack;
1928 *index = choice_stack [1].index;
1929 points = top->n;
1930 if (top->n == max_points || best == all)
1931 break;
1932 }
1933 i = top->index;
1934 ready_try [i] = 0;
1935 top--;
1936 memcpy (curr_state, top->state, dfa_state_size);
1937 }
1938 else if (!ready_try [i])
1939 {
1940 tries_num++;
1941 if (tries_num > max_lookahead_tries)
1942 break;
1943 insn = ready_element (ready, i);
1944 delay = state_transition (curr_state, insn);
1945 if (delay < 0)
1946 {
1947 if (state_dead_lock_p (curr_state))
1948 top->rest = 0;
1949 else
1950 top->rest--;
1951 n = top->n;
1952 if (memcmp (top->state, curr_state, dfa_state_size) != 0)
1953 n += ISSUE_POINTS (insn);
1954 top++;
1955 top->rest = cached_first_cycle_multipass_dfa_lookahead;
1956 top->index = i;
1957 top->n = n;
1958 memcpy (top->state, curr_state, dfa_state_size);
1959 ready_try [i] = 1;
1960 i = -1;
1961 }
1962 }
1963 i++;
1964 }
1965 while (top != choice_stack)
1966 {
1967 ready_try [top->index] = 0;
1968 top--;
1969 }
1970 memcpy (curr_state, choice_stack->state, dfa_state_size);
1971
1972 if (sched_verbose >= 4)
1973 fprintf (sched_dump, ";;\t\tChoosed insn : %s; points: %d/%d\n",
1974 (*current_sched_info->print_insn) (ready_element (ready, *index),
1975 0),
1976 points, max_points);
1977
1978 return best;
1979 }
1980
1981 /* The following function chooses an insn from READY and modifies
1982    READY.  The function is used only for first
1983    cycle multipass scheduling.  */
1984
1985 static rtx
1986 choose_ready (struct ready_list *ready)
1987 {
1988 int lookahead = 0;
1989
1990 if (targetm.sched.first_cycle_multipass_dfa_lookahead)
1991 lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
1992 if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
1993 return ready_remove_first (ready);
1994 else
1995 {
1996 /* Try to choose the better insn. */
1997 int index = 0, i, n;
1998 rtx insn;
1999 int more_issue, max_points, try_data = 1, try_control = 1;
2000
2001 if (cached_first_cycle_multipass_dfa_lookahead != lookahead)
2002 {
2003 cached_first_cycle_multipass_dfa_lookahead = lookahead;
2004 max_lookahead_tries = 100;
2005 for (i = 0; i < issue_rate; i++)
2006 max_lookahead_tries *= lookahead;
2007 }
2008 insn = ready_element (ready, 0);
2009 if (INSN_CODE (insn) < 0)
2010 return ready_remove_first (ready);
2011
2012 if (spec_info
2013 && spec_info->flags & (PREFER_NON_DATA_SPEC
2014 | PREFER_NON_CONTROL_SPEC))
2015 {
2016 for (i = 0, n = ready->n_ready; i < n; i++)
2017 {
2018 rtx x;
2019 ds_t s;
2020
2021 x = ready_element (ready, i);
2022 s = TODO_SPEC (x);
2023
2024 if (spec_info->flags & PREFER_NON_DATA_SPEC
2025 && !(s & DATA_SPEC))
2026 {
2027 try_data = 0;
2028 if (!(spec_info->flags & PREFER_NON_CONTROL_SPEC)
2029 || !try_control)
2030 break;
2031 }
2032
2033 if (spec_info->flags & PREFER_NON_CONTROL_SPEC
2034 && !(s & CONTROL_SPEC))
2035 {
2036 try_control = 0;
2037 if (!(spec_info->flags & PREFER_NON_DATA_SPEC) || !try_data)
2038 break;
2039 }
2040 }
2041 }
2042
2043 if ((!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2044 || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC))
2045 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2046 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard_spec
2047 (insn)))
2048 /* Discard speculative instruction that stands first in the ready
2049 list. */
2050 {
2051 change_queue_index (insn, 1);
2052 return 0;
2053 }
2054
2055 max_points = ISSUE_POINTS (insn);
2056 more_issue = issue_rate - cycle_issued_insns - 1;
2057
2058 for (i = 1; i < ready->n_ready; i++)
2059 {
2060 insn = ready_element (ready, i);
2061 ready_try [i]
2062 = (INSN_CODE (insn) < 0
2063 || (!try_data && (TODO_SPEC (insn) & DATA_SPEC))
2064 || (!try_control && (TODO_SPEC (insn) & CONTROL_SPEC))
2065 || (targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2066 && !targetm.sched.first_cycle_multipass_dfa_lookahead_guard
2067 (insn)));
2068
2069 if (!ready_try [i] && more_issue-- > 0)
2070 max_points += ISSUE_POINTS (insn);
2071 }
2072
2073 if (max_issue (ready, &index, max_points) == 0)
2074 return ready_remove_first (ready);
2075 else
2076 return ready_remove (ready, index);
2077 }
2078 }
2079
2080 /* Use forward list scheduling to rearrange insns of block pointed to by
2081 TARGET_BB, possibly bringing insns from subsequent blocks in the same
2082 region. */
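/* Outline of the algorithm, as implemented below: the ready list and the
   insn queue are filled via the front end's init_ready_list and then
   queue_to_ready; each iteration of the outer loop advances CLOCK_VAR by at
   least one cycle, sorts the ready list and lets the target reorder it; the
   inner loop then repeatedly picks an insn (choose_ready or
   ready_remove_first), checks it against the DFA via state_transition, and
   either queues it for a later cycle or issues it (move_insn,
   schedule_insn), until no more insns can be issued on this cycle.  */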
2083
2084 void
2085 schedule_block (basic_block *target_bb, int rgn_n_insns1)
2086 {
2087 struct ready_list ready;
2088 int i, first_cycle_insn_p;
2089 int can_issue_more;
2090 state_t temp_state = NULL; /* It is used for multipass scheduling. */
2091 int sort_p, advance, start_clock_var;
2092
2093 /* Head/tail info for this block. */
2094 rtx prev_head = current_sched_info->prev_head;
2095 rtx next_tail = current_sched_info->next_tail;
2096 rtx head = NEXT_INSN (prev_head);
2097 rtx tail = PREV_INSN (next_tail);
2098
2099 /* We used to have code to avoid getting parameters moved from hard
2100 argument registers into pseudos.
2101
2102 However, it was removed when it proved to be of marginal benefit
2103 and caused problems because schedule_block and compute_forward_dependences
2104 had different notions of what the "head" insn was. */
2105
2106 gcc_assert (head != tail || INSN_P (head));
2107
2108 added_recovery_block_p = false;
2109
2110 /* Debug info. */
2111 if (sched_verbose)
2112 dump_new_block_header (0, *target_bb, head, tail);
2113
2114 state_reset (curr_state);
2115
2116 /* Allocate the ready list. */
2117 readyp = &ready;
2118 ready.vec = NULL;
2119 ready_try = NULL;
2120 choice_stack = NULL;
2121
2122 rgn_n_insns = -1;
2123 extend_ready (rgn_n_insns1 + 1);
2124
2125 ready.first = ready.veclen - 1;
2126 ready.n_ready = 0;
2127
2128 /* It is used for first cycle multipass scheduling. */
2129 temp_state = alloca (dfa_state_size);
2130
2131 if (targetm.sched.md_init)
2132 targetm.sched.md_init (sched_dump, sched_verbose, ready.veclen);
2133
2134 /* We start inserting insns after PREV_HEAD. */
2135 last_scheduled_insn = prev_head;
2136
2137 gcc_assert (NOTE_P (last_scheduled_insn)
2138 && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);
2139
2140 /* Initialize INSN_QUEUE. Q_SIZE is the total number of insns in the
2141 queue. */
2142 q_ptr = 0;
2143 q_size = 0;
2144
2145 insn_queue = alloca ((max_insn_queue_index + 1) * sizeof (rtx));
2146 memset (insn_queue, 0, (max_insn_queue_index + 1) * sizeof (rtx));
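/* As can be seen from queue_insn and change_queue_index, INSN_QUEUE is used
   as a circular buffer: an insn queued with delay D lands in bucket
   NEXT_Q_AFTER (q_ptr, D), i.e. D slots ahead of the current read position
   Q_PTR.  */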
2147
2148 /* Start just before the beginning of time. */
2149 clock_var = -1;
2150
2151 /* We need the queue and ready lists and clock_var to be initialized
2152    in try_ready () (which is called through init_ready_list ()).  */
2153 (*current_sched_info->init_ready_list) ();
2154
2155 /* The algorithm is O(n^2) in the number of ready insns at any given
2156 time in the worst case. Before reload we are more likely to have
2157 big lists so truncate them to a reasonable size. */
2158 if (!reload_completed && ready.n_ready > MAX_SCHED_READY_INSNS)
2159 {
2160 ready_sort (&ready);
2161
2162 /* Find first free-standing insn past MAX_SCHED_READY_INSNS. */
2163 for (i = MAX_SCHED_READY_INSNS; i < ready.n_ready; i++)
2164 if (!SCHED_GROUP_P (ready_element (&ready, i)))
2165 break;
2166
2167 if (sched_verbose >= 2)
2168 {
2169 fprintf (sched_dump,
2170 ";;\t\tReady list on entry: %d insns\n", ready.n_ready);
2171 fprintf (sched_dump,
2172 ";;\t\t before reload => truncated to %d insns\n", i);
2173 }
2174
2175 /* Delay all insns past it for 1 cycle. */
2176 while (i < ready.n_ready)
2177 queue_insn (ready_remove (&ready, i), 1);
2178 }
2179
2180 /* Now we can restore basic block notes and maintain precise cfg. */
2181 restore_bb_notes (*target_bb);
2182
2183 last_clock_var = -1;
2184
2185 advance = 0;
2186
2187 sort_p = TRUE;
2188 /* Loop until all the insns in BB are scheduled. */
2189 while ((*current_sched_info->schedule_more_p) ())
2190 {
2191 do
2192 {
2193 start_clock_var = clock_var;
2194
2195 clock_var++;
2196
2197 advance_one_cycle ();
2198
2199 /* Add to the ready list all pending insns that can be issued now.
2200 If there are no ready insns, increment clock until one
2201 is ready and add all pending insns at that point to the ready
2202 list. */
2203 queue_to_ready (&ready);
2204
2205 gcc_assert (ready.n_ready);
2206
2207 if (sched_verbose >= 2)
2208 {
2209 fprintf (sched_dump, ";;\t\tReady list after queue_to_ready: ");
2210 debug_ready_list (&ready);
2211 }
2212 advance -= clock_var - start_clock_var;
2213 }
2214 while (advance > 0);
2215
2216 if (sort_p)
2217 {
2218 /* Sort the ready list based on priority. */
2219 ready_sort (&ready);
2220
2221 if (sched_verbose >= 2)
2222 {
2223 fprintf (sched_dump, ";;\t\tReady list after ready_sort: ");
2224 debug_ready_list (&ready);
2225 }
2226 }
2227
2228 /* Allow the target to reorder the list, typically for
2229 better instruction bundling. */
2230 if (sort_p && targetm.sched.reorder
2231 && (ready.n_ready == 0
2232 || !SCHED_GROUP_P (ready_element (&ready, 0))))
2233 can_issue_more =
2234 targetm.sched.reorder (sched_dump, sched_verbose,
2235 ready_lastpos (&ready),
2236 &ready.n_ready, clock_var);
2237 else
2238 can_issue_more = issue_rate;
2239
2240 first_cycle_insn_p = 1;
2241 cycle_issued_insns = 0;
2242 for (;;)
2243 {
2244 rtx insn;
2245 int cost;
2246 bool asm_p = false;
2247
2248 if (sched_verbose >= 2)
2249 {
2250 fprintf (sched_dump, ";;\tReady list (t = %3d): ",
2251 clock_var);
2252 debug_ready_list (&ready);
2253 }
2254
2255 if (ready.n_ready == 0
2256 && can_issue_more
2257 && reload_completed)
2258 {
2259 /* Allow scheduling insns directly from the queue in case
2260 there's nothing better to do (ready list is empty) but
2261 there are still vacant dispatch slots in the current cycle. */
2262 if (sched_verbose >= 6)
2263 fprintf (sched_dump,";;\t\tSecond chance\n");
2264 memcpy (temp_state, curr_state, dfa_state_size);
2265 if (early_queue_to_ready (temp_state, &ready))
2266 ready_sort (&ready);
2267 }
2268
2269 if (ready.n_ready == 0 || !can_issue_more
2270 || state_dead_lock_p (curr_state)
2271 || !(*current_sched_info->schedule_more_p) ())
2272 break;
2273
2274 if (dbg_cnt (sched_insn) == false)
2275 {
2276 insn = NEXT_INSN (last_scheduled_insn);
2277 while ((*current_sched_info->schedule_more_p) ())
2278 {
2279 (*current_sched_info->begin_schedule_ready) (insn,
2280 last_scheduled_insn);
2281 if (QUEUE_INDEX (insn) >= 0)
2282 queue_remove (insn);
2283 last_scheduled_insn = insn;
2284 insn = NEXT_INSN (insn);
2285 }
2286 while (ready.n_ready)
2287 ready_remove_first (&ready);
2288 goto bail_out;
2289 }
2290
2291 /* Select and remove the insn from the ready list. */
2292 if (sort_p)
2293 {
2294 insn = choose_ready (&ready);
2295 if (!insn)
2296 continue;
2297 }
2298 else
2299 insn = ready_remove_first (&ready);
2300
2301 if (targetm.sched.dfa_new_cycle
2302 && targetm.sched.dfa_new_cycle (sched_dump, sched_verbose,
2303 insn, last_clock_var,
2304 clock_var, &sort_p))
2305 /* SORT_P is used by the target to override sorting
2306 of the ready list. This is needed when the target
2307 has modified its internal structures expecting that
2308 the insn will be issued next. As we need the insn
2309 to have the highest priority (so it will be returned by
2310 the ready_remove_first call above), we invoke
2311 ready_add (&ready, insn, true).
2312 But, still, there is one issue: INSN can later be
2313 discarded by the scheduler's front end through
2314 current_sched_info->can_schedule_ready_p, and hence won't
2315 be issued next.  */
2316 {
2317 ready_add (&ready, insn, true);
2318 break;
2319 }
2320
2321 sort_p = TRUE;
2322 memcpy (temp_state, curr_state, dfa_state_size);
2323 if (recog_memoized (insn) < 0)
2324 {
2325 asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
2326 || asm_noperands (PATTERN (insn)) >= 0);
2327 if (!first_cycle_insn_p && asm_p)
2328 /* This is an asm insn that we are trying to issue on a
2329    cycle other than the first.  Issue it on the next cycle.  */
2330 cost = 1;
2331 else
2332 /* A USE insn, or something else we don't need to
2333 understand. We can't pass these directly to
2334 state_transition because it will trigger a
2335 fatal error for unrecognizable insns. */
2336 cost = 0;
2337 }
2338 else
2339 {
2340 cost = state_transition (temp_state, insn);
2341 if (cost < 0)
2342 cost = 0;
2343 else if (cost == 0)
2344 cost = 1;
2345 }
2346
2347 if (cost >= 1)
2348 {
2349 queue_insn (insn, cost);
2350 if (SCHED_GROUP_P (insn))
2351 {
2352 advance = cost;
2353 break;
2354 }
2355
2356 continue;
2357 }
2358
2359 if (current_sched_info->can_schedule_ready_p
2360 && ! (*current_sched_info->can_schedule_ready_p) (insn))
2361 /* We normally get here only if we don't want to move
2362 insn from the split block. */
2363 {
2364 TODO_SPEC (insn) = (TODO_SPEC (insn) & ~SPECULATIVE) | HARD_DEP;
2365 continue;
2366 }
2367
2368 /* DECISION is made. */
2369
2370 if (TODO_SPEC (insn) & SPECULATIVE)
2371 generate_recovery_code (insn);
2372
2373 if (control_flow_insn_p (last_scheduled_insn)
2374 /* This is used to switch basic blocks by request
2375    from the scheduler front-end (actually, sched-ebb.c only).
2376    This is used to process blocks with a single fallthru
2377    edge.  If the succeeding block has a jump, that jump will try
2378    to move to the end of the current bb, thus corrupting the CFG.  */
2379 || current_sched_info->advance_target_bb (*target_bb, insn))
2380 {
2381 *target_bb = current_sched_info->advance_target_bb
2382 (*target_bb, 0);
2383
2384 if (sched_verbose)
2385 {
2386 rtx x;
2387
2388 x = next_real_insn (last_scheduled_insn);
2389 gcc_assert (x);
2390 dump_new_block_header (1, *target_bb, x, tail);
2391 }
2392
2393 last_scheduled_insn = bb_note (*target_bb);
2394 }
2395
2396 /* Update counters, etc in the scheduler's front end. */
2397 (*current_sched_info->begin_schedule_ready) (insn,
2398 last_scheduled_insn);
2399
2400 move_insn (insn);
2401 last_scheduled_insn = insn;
2402
2403 if (memcmp (curr_state, temp_state, dfa_state_size) != 0)
2404 {
2405 cycle_issued_insns++;
2406 memcpy (curr_state, temp_state, dfa_state_size);
2407 }
2408
2409 if (targetm.sched.variable_issue)
2410 can_issue_more =
2411 targetm.sched.variable_issue (sched_dump, sched_verbose,
2412 insn, can_issue_more);
2413 /* A naked CLOBBER or USE generates no instruction, so do
2414 not count them against the issue rate. */
2415 else if (GET_CODE (PATTERN (insn)) != USE
2416 && GET_CODE (PATTERN (insn)) != CLOBBER)
2417 can_issue_more--;
2418
2419 advance = schedule_insn (insn);
2420
2421 /* After issuing an asm insn we should start a new cycle. */
2422 if (advance == 0 && asm_p)
2423 advance = 1;
2424 if (advance != 0)
2425 break;
2426
2427 first_cycle_insn_p = 0;
2428
2429 /* Sort the ready list based on priority. This must be
2430 redone here, as schedule_insn may have readied additional
2431 insns that will not be sorted correctly. */
2432 if (ready.n_ready > 0)
2433 ready_sort (&ready);
2434
2435 if (targetm.sched.reorder2
2436 && (ready.n_ready == 0
2437 || !SCHED_GROUP_P (ready_element (&ready, 0))))
2438 {
2439 can_issue_more =
2440 targetm.sched.reorder2 (sched_dump, sched_verbose,
2441 ready.n_ready
2442 ? ready_lastpos (&ready) : NULL,
2443 &ready.n_ready, clock_var);
2444 }
2445 }
2446 }
2447
2448 bail_out:
2449 /* Debug info. */
2450 if (sched_verbose)
2451 {
2452 fprintf (sched_dump, ";;\tReady list (final): ");
2453 debug_ready_list (&ready);
2454 }
2455
2456 if (current_sched_info->queue_must_finish_empty)
2457 /* Sanity check -- queue must be empty now. Meaningless if region has
2458 multiple bbs. */
2459 gcc_assert (!q_size && !ready.n_ready);
2460 else
2461 {
2462 /* We must maintain QUEUE_INDEX between blocks in region. */
2463 for (i = ready.n_ready - 1; i >= 0; i--)
2464 {
2465 rtx x;
2466
2467 x = ready_element (&ready, i);
2468 QUEUE_INDEX (x) = QUEUE_NOWHERE;
2469 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
2470 }
2471
2472 if (q_size)
2473 for (i = 0; i <= max_insn_queue_index; i++)
2474 {
2475 rtx link;
2476 for (link = insn_queue[i]; link; link = XEXP (link, 1))
2477 {
2478 rtx x;
2479
2480 x = XEXP (link, 0);
2481 QUEUE_INDEX (x) = QUEUE_NOWHERE;
2482 TODO_SPEC (x) = (TODO_SPEC (x) & ~SPECULATIVE) | HARD_DEP;
2483 }
2484 free_INSN_LIST_list (&insn_queue[i]);
2485 }
2486 }
2487
2488 if (!current_sched_info->queue_must_finish_empty
2489 || added_recovery_block_p)
2490 {
2491 /* INSN_TICK (minimum clock tick at which the insn becomes
2492    ready) may not be correct for insns in the subsequent
2493    blocks of the region.  We should use a correct value of
2494    `clock_var' or modify INSN_TICK.  It is better to keep
2495    the clock_var value equal to 0 at the start of a basic block.
2496    Therefore we modify INSN_TICK here.  */
2497 fix_inter_tick (NEXT_INSN (prev_head), last_scheduled_insn);
2498 }
2499
2500 if (targetm.sched.md_finish)
2501 targetm.sched.md_finish (sched_dump, sched_verbose);
2502
2503 /* Update head/tail boundaries. */
2504 head = NEXT_INSN (prev_head);
2505 tail = last_scheduled_insn;
2506
2507 /* Restore-other-notes: NOTE_LIST is the end of a chain of notes
2508 previously found among the insns. Insert them at the beginning
2509 of the insns. */
2510 if (note_list != 0)
2511 {
2512 basic_block head_bb = BLOCK_FOR_INSN (head);
2513 rtx note_head = note_list;
2514
2515 while (PREV_INSN (note_head))
2516 {
2517 set_block_for_insn (note_head, head_bb);
2518 note_head = PREV_INSN (note_head);
2519 }
2520 /* In the above loop we've missed this note:  */
2521 set_block_for_insn (note_head, head_bb);
2522
2523 PREV_INSN (note_head) = PREV_INSN (head);
2524 NEXT_INSN (PREV_INSN (head)) = note_head;
2525 PREV_INSN (head) = note_list;
2526 NEXT_INSN (note_list) = head;
2527 head = note_head;
2528 }
2529
2530 /* Debugging. */
2531 if (sched_verbose)
2532 {
2533 fprintf (sched_dump, ";; total time = %d\n;; new head = %d\n",
2534 clock_var, INSN_UID (head));
2535 fprintf (sched_dump, ";; new tail = %d\n\n",
2536 INSN_UID (tail));
2537 }
2538
2539 current_sched_info->head = head;
2540 current_sched_info->tail = tail;
2541
2542 free (ready.vec);
2543
2544 free (ready_try);
2545 for (i = 0; i <= rgn_n_insns; i++)
2546 free (choice_stack [i].state);
2547 free (choice_stack);
2548 }
2549 \f
2550 /* Set_priorities: compute priority of each insn in the block. */
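/* The function returns the number of insns it processed and raises
   current_sched_info->sched_max_insns_priority to the largest priority
   seen in the block, as can be seen from the loop below.  */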
2551
2552 int
2553 set_priorities (rtx head, rtx tail)
2554 {
2555 rtx insn;
2556 int n_insn;
2557 int sched_max_insns_priority =
2558 current_sched_info->sched_max_insns_priority;
2559 rtx prev_head;
2560
2561 if (head == tail && (! INSN_P (head)))
2562 return 0;
2563
2564 n_insn = 0;
2565
2566 prev_head = PREV_INSN (head);
2567 for (insn = tail; insn != prev_head; insn = PREV_INSN (insn))
2568 {
2569 if (!INSN_P (insn))
2570 continue;
2571
2572 n_insn++;
2573 (void) priority (insn);
2574
2575 gcc_assert (INSN_PRIORITY_KNOWN (insn));
2576
2577 sched_max_insns_priority = MAX (sched_max_insns_priority,
2578 INSN_PRIORITY (insn));
2579 }
2580
2581 current_sched_info->sched_max_insns_priority = sched_max_insns_priority;
2582
2583 return n_insn;
2584 }
2585
2586 /* Next LUID to assign to an instruction. */
2587 static int luid;
2588
2589 /* Initialize some global state for the scheduler. */
2590
2591 void
2592 sched_init (void)
2593 {
2594 basic_block b;
2595 rtx insn;
2596 int i;
2597
2598 /* Switch to working copy of sched_info. */
2599 memcpy (&current_sched_info_var, current_sched_info,
2600 sizeof (current_sched_info_var));
2601 current_sched_info = &current_sched_info_var;
2602
2603 /* Disable speculative loads in the presence of cc0.  */
2604 #ifdef HAVE_cc0
2605 flag_schedule_speculative_load = 0;
2606 #endif
2607
2608 /* Set dump and sched_verbose for the desired debugging output. If no
2609 dump-file was specified, but -fsched-verbose=N (any N), print to stderr.
2610 For -fsched-verbose=N, N>=10, print everything to stderr. */
2611 sched_verbose = sched_verbose_param;
2612 if (sched_verbose_param == 0 && dump_file)
2613 sched_verbose = 1;
2614 sched_dump = ((sched_verbose_param >= 10 || !dump_file)
2615 ? stderr : dump_file);
2616
2617 /* Initialize SPEC_INFO. */
2618 if (targetm.sched.set_sched_flags)
2619 {
2620 spec_info = &spec_info_var;
2621 targetm.sched.set_sched_flags (spec_info);
2622 if (current_sched_info->flags & DO_SPECULATION)
2623 spec_info->weakness_cutoff =
2624 (PARAM_VALUE (PARAM_SCHED_SPEC_PROB_CUTOFF) * MAX_DEP_WEAK) / 100;
2625 else
2626 /* So we won't read anything accidentally. */
2627 spec_info = 0;
2628 #ifdef ENABLE_CHECKING
2629 check_sched_flags ();
2630 #endif
2631 }
2632 else
2633 /* So we won't read anything accidentally. */
2634 spec_info = 0;
2635
2636 /* Initialize issue_rate. */
2637 if (targetm.sched.issue_rate)
2638 issue_rate = targetm.sched.issue_rate ();
2639 else
2640 issue_rate = 1;
2641
2642 if (cached_issue_rate != issue_rate)
2643 {
2644 cached_issue_rate = issue_rate;
2645 /* To invalidate max_lookahead_tries: */
2646 cached_first_cycle_multipass_dfa_lookahead = 0;
2647 }
2648
2649 old_max_uid = 0;
2650 h_i_d = 0;
2651 extend_h_i_d ();
2652
2653 for (i = 0; i < old_max_uid; i++)
2654 {
2655 h_i_d[i].cost = -1;
2656 h_i_d[i].todo_spec = HARD_DEP;
2657 h_i_d[i].queue_index = QUEUE_NOWHERE;
2658 h_i_d[i].tick = INVALID_TICK;
2659 h_i_d[i].inter_tick = INVALID_TICK;
2660 }
2661
2662 if (targetm.sched.init_dfa_pre_cycle_insn)
2663 targetm.sched.init_dfa_pre_cycle_insn ();
2664
2665 if (targetm.sched.init_dfa_post_cycle_insn)
2666 targetm.sched.init_dfa_post_cycle_insn ();
2667
2668 dfa_start ();
2669 dfa_state_size = state_size ();
2670 curr_state = xmalloc (dfa_state_size);
2671
2672 h_i_d[0].luid = 0;
2673 luid = 1;
2674 FOR_EACH_BB (b)
2675 for (insn = BB_HEAD (b); ; insn = NEXT_INSN (insn))
2676 {
2677 INSN_LUID (insn) = luid;
2678
2679 /* Increment the next luid, unless this is a note. We don't
2680 really need separate IDs for notes and we don't want to
2681 schedule differently depending on whether or not there are
2682 line-number notes, i.e., depending on whether or not we're
2683 generating debugging information. */
2684 if (!NOTE_P (insn))
2685 ++luid;
2686
2687 if (insn == BB_END (b))
2688 break;
2689 }
2690
2691 init_dependency_caches (luid);
2692
2693 init_alias_analysis ();
2694
2695 old_last_basic_block = 0;
2696 extend_bb ();
2697
2698 /* Compute INSN_REG_WEIGHT for all blocks. We must do this before
2699 removing death notes. */
2700 FOR_EACH_BB_REVERSE (b)
2701 find_insn_reg_weight (b);
2702
2703 if (targetm.sched.md_init_global)
2704 targetm.sched.md_init_global (sched_dump, sched_verbose, old_max_uid);
2705
2706 nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
2707 before_recovery = 0;
2708
2709 #ifdef ENABLE_CHECKING
2710 /* This is mainly useful for finding bugs in check_cfg () itself.  */
2711 check_cfg (0, 0);
2712 #endif
2713 }
2714
2715 /* Free global data used during insn scheduling. */
2716
2717 void
2718 sched_finish (void)
2719 {
2720 free (h_i_d);
2721 free (curr_state);
2722 dfa_finish ();
2723 free_dependency_caches ();
2724 end_alias_analysis ();
2725
2726 if (targetm.sched.md_finish_global)
2727 targetm.sched.md_finish_global (sched_dump, sched_verbose);
2728
2729 if (spec_info && spec_info->dump)
2730 {
2731 char c = reload_completed ? 'a' : 'b';
2732
2733 fprintf (spec_info->dump,
2734 ";; %s:\n", current_function_name ());
2735
2736 fprintf (spec_info->dump,
2737 ";; Procedure %cr-begin-data-spec motions == %d\n",
2738 c, nr_begin_data);
2739 fprintf (spec_info->dump,
2740 ";; Procedure %cr-be-in-data-spec motions == %d\n",
2741 c, nr_be_in_data);
2742 fprintf (spec_info->dump,
2743 ";; Procedure %cr-begin-control-spec motions == %d\n",
2744 c, nr_begin_control);
2745 fprintf (spec_info->dump,
2746 ";; Procedure %cr-be-in-control-spec motions == %d\n",
2747 c, nr_be_in_control);
2748 }
2749
2750 #ifdef ENABLE_CHECKING
2751 /* After reload ia64 backend clobbers CFG, so can't check anything. */
2752 if (!reload_completed)
2753 check_cfg (0, 0);
2754 #endif
2755
2756 current_sched_info = NULL;
2757 }
2758
2759 /* Fix INSN_TICKs of the instructions in the current block as well as
2760    INSN_TICKs of their dependents.
2761    HEAD and TAIL are the beginning and the end of the current scheduled block.  */
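/* For example (illustrative numbers only): if the block just scheduled
   finished at CLOCK_VAR == 7 and an insn in it was scheduled at tick 5,
   its INSN_TICK becomes 5 - 8 == -3 (clamped to MIN_TICK), i.e. three
   cycles before the zero starting clock of the next block.  */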
2762 static void
2763 fix_inter_tick (rtx head, rtx tail)
2764 {
2765 /* Set of instructions with corrected INSN_TICK. */
2766 bitmap_head processed;
2767 int next_clock = clock_var + 1;
2768
2769 bitmap_initialize (&processed, 0);
2770
2771 /* Iterate over the scheduled instructions and fix their INSN_TICKs and
2772    the INSN_TICKs of dependent instructions, so that INSN_TICKs are
2773    consistent across different blocks.  */
2774 for (tail = NEXT_INSN (tail); head != tail; head = NEXT_INSN (head))
2775 {
2776 if (INSN_P (head))
2777 {
2778 int tick;
2779 dep_link_t link;
2780
2781 tick = INSN_TICK (head);
2782 gcc_assert (tick >= MIN_TICK);
2783
2784 /* Fix INSN_TICK of instruction from just scheduled block. */
2785 if (!bitmap_bit_p (&processed, INSN_LUID (head)))
2786 {
2787 bitmap_set_bit (&processed, INSN_LUID (head));
2788 tick -= next_clock;
2789
2790 if (tick < MIN_TICK)
2791 tick = MIN_TICK;
2792
2793 INSN_TICK (head) = tick;
2794 }
2795
2796 FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (head))
2797 {
2798 rtx next;
2799
2800 next = DEP_LINK_CON (link);
2801 tick = INSN_TICK (next);
2802
2803 if (tick != INVALID_TICK
2804 /* If NEXT has its INSN_TICK calculated, fix it.
2805 If not - it will be properly calculated from
2806 scratch later in fix_tick_ready. */
2807 && !bitmap_bit_p (&processed, INSN_LUID (next)))
2808 {
2809 bitmap_set_bit (&processed, INSN_LUID (next));
2810 tick -= next_clock;
2811
2812 if (tick < MIN_TICK)
2813 tick = MIN_TICK;
2814
2815 if (tick > INTER_TICK (next))
2816 INTER_TICK (next) = tick;
2817 else
2818 tick = INTER_TICK (next);
2819
2820 INSN_TICK (next) = tick;
2821 }
2822 }
2823 }
2824 }
2825 bitmap_clear (&processed);
2826 }
2827
2828 /* Check if NEXT is ready to be added to the ready or queue list.
2829 If "yes", add it to the proper list.
2830 Returns:
2831 -1 - is not ready yet,
2832 0 - added to the ready list,
2833 0 < N - queued for N cycles. */
2834 int
2835 try_ready (rtx next)
2836 {
2837 ds_t old_ts, *ts;
2838 dep_link_t link;
2839
2840 ts = &TODO_SPEC (next);
2841 old_ts = *ts;
2842
2843 gcc_assert (!(old_ts & ~(SPECULATIVE | HARD_DEP))
2844 && ((old_ts & HARD_DEP)
2845 || (old_ts & SPECULATIVE)));
2846
2847 if (!(current_sched_info->flags & DO_SPECULATION))
2848 {
2849 if (deps_list_empty_p (INSN_BACK_DEPS (next)))
2850 *ts &= ~HARD_DEP;
2851 }
2852 else
2853 {
2854 *ts &= ~SPECULATIVE & ~HARD_DEP;
2855
2856 link = DEPS_LIST_FIRST (INSN_BACK_DEPS (next));
2857
2858 if (link != NULL)
2859 {
2860 ds_t ds = DEP_LINK_STATUS (link) & SPECULATIVE;
2861
2862 /* Backward dependencies of the insn are kept sorted.
2863    So if DEP_STATUS of the first dep is SPECULATIVE,
2864    then all other deps are speculative too.  */
2865 if (ds != 0)
2866 {
2867 /* Now we've got NEXT with speculative deps only.
2868 1. Look at the deps to see what we have to do.
2869 2. Check if we can do 'todo'. */
2870 *ts = ds;
2871
2872 while ((link = DEP_LINK_NEXT (link)) != NULL)
2873 {
2874 ds = DEP_LINK_STATUS (link) & SPECULATIVE;
2875 *ts = ds_merge (*ts, ds);
2876 }
2877
2878 if (dep_weak (*ts) < spec_info->weakness_cutoff)
2879 /* Too few points. */
2880 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
2881 }
2882 else
2883 *ts |= HARD_DEP;
2884 }
2885 }
2886
2887 if (*ts & HARD_DEP)
2888 gcc_assert (*ts == old_ts
2889 && QUEUE_INDEX (next) == QUEUE_NOWHERE);
2890 else if (current_sched_info->new_ready)
2891 *ts = current_sched_info->new_ready (next, *ts);
2892
2893 /* * if !(old_ts & SPECULATIVE) (e.g. HARD_DEP or 0), then insn might
2894 have its original pattern or changed (speculative) one. This is due
2895 to changing ebb in region scheduling.
2896 * But if (old_ts & SPECULATIVE), then we are pretty sure that insn
2897 has speculative pattern.
2898
2899 We can't assert (!(*ts & HARD_DEP) || *ts == old_ts) here because
2900 control-speculative NEXT could have been discarded by sched-rgn.c
2901 (the same case as when discarded by can_schedule_ready_p ()). */
2902
2903 if ((*ts & SPECULATIVE)
2904 /* If (old_ts == *ts), then (old_ts & SPECULATIVE) and we don't
2905 need to change anything. */
2906 && *ts != old_ts)
2907 {
2908 int res;
2909 rtx new_pat;
2910
2911 gcc_assert ((*ts & SPECULATIVE) && !(*ts & ~SPECULATIVE));
2912
2913 res = speculate_insn (next, *ts, &new_pat);
2914
2915 switch (res)
2916 {
2917 case -1:
2918 /* It would be nice to change DEP_STATUS of all dependences,
2919 which have ((DEP_STATUS & SPECULATIVE) == *ts) to HARD_DEP,
2920 so we won't reanalyze anything. */
2921 *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
2922 break;
2923
2924 case 0:
2925 /* We follow the rule that every speculative insn
2926    has non-null ORIG_PAT.  */
2927 if (!ORIG_PAT (next))
2928 ORIG_PAT (next) = PATTERN (next);
2929 break;
2930
2931 case 1:
2932 if (!ORIG_PAT (next))
2933 /* If we are going to overwrite the original pattern of the insn,
2934    save it.  */
2935 ORIG_PAT (next) = PATTERN (next);
2936
2937 change_pattern (next, new_pat);
2938 break;
2939
2940 default:
2941 gcc_unreachable ();
2942 }
2943 }
2944
2945 /* We need to restore pattern only if (*ts == 0), because otherwise it is
2946 either correct (*ts & SPECULATIVE),
2947 or we simply don't care (*ts & HARD_DEP). */
2948
2949 gcc_assert (!ORIG_PAT (next)
2950 || !IS_SPECULATION_BRANCHY_CHECK_P (next));
2951
2952 if (*ts & HARD_DEP)
2953 {
2954 /* We can't assert (QUEUE_INDEX (next) == QUEUE_NOWHERE) here because
2955 control-speculative NEXT could have been discarded by sched-rgn.c
2956 (the same case as when discarded by can_schedule_ready_p ()). */
2957 /*gcc_assert (QUEUE_INDEX (next) == QUEUE_NOWHERE);*/
2958
2959 change_queue_index (next, QUEUE_NOWHERE);
2960 return -1;
2961 }
2962 else if (!(*ts & BEGIN_SPEC) && ORIG_PAT (next) && !IS_SPECULATION_CHECK_P (next))
2963 /* We should change the pattern of every previously speculative
2964    instruction - and we determine if NEXT was speculative by using
2965    the ORIG_PAT field.  Except one case - speculation checks have
2966    ORIG_PAT set too, so skip them.  */
2967 {
2968 change_pattern (next, ORIG_PAT (next));
2969 ORIG_PAT (next) = 0;
2970 }
2971
2972 if (sched_verbose >= 2)
2973 {
2974 int s = TODO_SPEC (next);
2975
2976 fprintf (sched_dump, ";;\t\tdependencies resolved: insn %s",
2977 (*current_sched_info->print_insn) (next, 0));
2978
2979 if (spec_info && spec_info->dump)
2980 {
2981 if (s & BEGIN_DATA)
2982 fprintf (spec_info->dump, "; data-spec;");
2983 if (s & BEGIN_CONTROL)
2984 fprintf (spec_info->dump, "; control-spec;");
2985 if (s & BE_IN_CONTROL)
2986 fprintf (spec_info->dump, "; in-control-spec;");
2987 }
2988
2989 fprintf (sched_dump, "\n");
2990 }
2991
2992 adjust_priority (next);
2993
2994 return fix_tick_ready (next);
2995 }
2996
2997 /* Calculate INSN_TICK of NEXT and add it to either ready or queue list. */
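/* For example (illustrative values): if the latest-finishing resolved
   producer of NEXT was scheduled at tick 7 with a dependence cost of 2,
   INSN_TICK (NEXT) becomes 9; with CLOCK_VAR == 6 the insn is queued for
   3 cycles, while a delay <= 0 would put it straight onto the ready list.  */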
2998 static int
2999 fix_tick_ready (rtx next)
3000 {
3001 int tick, delay;
3002
3003 if (!deps_list_empty_p (INSN_RESOLVED_BACK_DEPS (next)))
3004 {
3005 int full_p;
3006 dep_link_t link;
3007
3008 tick = INSN_TICK (next);
3009 /* if tick is not equal to INVALID_TICK, then update
3010 INSN_TICK of NEXT with the most recent resolved dependence
3011 cost. Otherwise, recalculate from scratch. */
3012 full_p = (tick == INVALID_TICK);
3013
3014 FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (next))
3015 {
3016 dep_t dep = DEP_LINK_DEP (link);
3017 rtx pro = DEP_PRO (dep);
3018 int tick1;
3019
3020 gcc_assert (INSN_TICK (pro) >= MIN_TICK);
3021
3022 tick1 = INSN_TICK (pro) + dep_cost (dep);
3023 if (tick1 > tick)
3024 tick = tick1;
3025
3026 if (!full_p)
3027 break;
3028 }
3029 }
3030 else
3031 tick = -1;
3032
3033 INSN_TICK (next) = tick;
3034
3035 delay = tick - clock_var;
3036 if (delay <= 0)
3037 delay = QUEUE_READY;
3038
3039 change_queue_index (next, delay);
3040
3041 return delay;
3042 }
3043
3044 /* Move NEXT to the proper queue list with (DELAY >= 1),
3045    or add it to the ready list (DELAY == QUEUE_READY),
3046    or remove it from the ready and queue lists entirely (DELAY == QUEUE_NOWHERE).  */
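/* E.g. try_ready () calls this with the delay computed by fix_tick_ready ();
   a positive delay moves NEXT into the queue bucket NEXT_Q_AFTER (q_ptr,
   delay), i.e. that many cycles ahead of the current clock.  */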
3047 static void
3048 change_queue_index (rtx next, int delay)
3049 {
3050 int i = QUEUE_INDEX (next);
3051
3052 gcc_assert (QUEUE_NOWHERE <= delay && delay <= max_insn_queue_index
3053 && delay != 0);
3054 gcc_assert (i != QUEUE_SCHEDULED);
3055
3056 if ((delay > 0 && NEXT_Q_AFTER (q_ptr, delay) == i)
3057 || (delay < 0 && delay == i))
3058 /* We have nothing to do. */
3059 return;
3060
3061 /* Remove NEXT from wherever it is now. */
3062 if (i == QUEUE_READY)
3063 ready_remove_insn (next);
3064 else if (i >= 0)
3065 queue_remove (next);
3066
3067 /* Add it to the proper place. */
3068 if (delay == QUEUE_READY)
3069 ready_add (readyp, next, false);
3070 else if (delay >= 1)
3071 queue_insn (next, delay);
3072
3073 if (sched_verbose >= 2)
3074 {
3075 fprintf (sched_dump, ";;\t\ttick updated: insn %s",
3076 (*current_sched_info->print_insn) (next, 0));
3077
3078 if (delay == QUEUE_READY)
3079 fprintf (sched_dump, " into ready\n");
3080 else if (delay >= 1)
3081 fprintf (sched_dump, " into queue with cost=%d\n", delay);
3082 else
3083 fprintf (sched_dump, " removed from ready or queue lists\n");
3084 }
3085 }
3086
3087 /* Extend H_I_D data. */
3088 static void
3089 extend_h_i_d (void)
3090 {
3091 /* We use LUID 0 for the fake insn (UID 0) which holds dependencies for
3092 pseudos which do not cross calls. */
3093 int new_max_uid = get_max_uid () + 1;
3094
3095 h_i_d = xrecalloc (h_i_d, new_max_uid, old_max_uid, sizeof (*h_i_d));
3096 old_max_uid = new_max_uid;
3097
3098 if (targetm.sched.h_i_d_extended)
3099 targetm.sched.h_i_d_extended ();
3100 }
3101
3102 /* Extend READY, READY_TRY and CHOICE_STACK arrays.
3103 N_NEW_INSNS is the number of additional elements to allocate. */
3104 static void
3105 extend_ready (int n_new_insns)
3106 {
3107 int i;
3108
3109 readyp->veclen = rgn_n_insns + n_new_insns + 1 + issue_rate;
3110 readyp->vec = XRESIZEVEC (rtx, readyp->vec, readyp->veclen);
3111
3112 ready_try = xrecalloc (ready_try, rgn_n_insns + n_new_insns + 1,
3113 rgn_n_insns + 1, sizeof (char));
3114
3115 rgn_n_insns += n_new_insns;
3116
3117 choice_stack = XRESIZEVEC (struct choice_entry, choice_stack,
3118 rgn_n_insns + 1);
3119
3120 for (i = rgn_n_insns; n_new_insns--; i--)
3121 choice_stack[i].state = xmalloc (dfa_state_size);
3122 }
3123
3124 /* Extend global scheduler structures (those, that live across calls to
3125 schedule_block) to include information about just emitted INSN. */
3126 static void
3127 extend_global (rtx insn)
3128 {
3129 gcc_assert (INSN_P (insn));
3130 /* These structures have scheduler scope. */
3131 extend_h_i_d ();
3132 init_h_i_d (insn);
3133
3134 extend_dependency_caches (1, 0);
3135 }
3136
3137 /* Extends global and local scheduler structures to include information
3138 about just emitted INSN. */
3139 static void
3140 extend_all (rtx insn)
3141 {
3142 extend_global (insn);
3143
3144 /* These structures have block scope. */
3145 extend_ready (1);
3146
3147 (*current_sched_info->add_remove_insn) (insn, 0);
3148 }
3149
3150 /* Initialize the h_i_d entry of the new INSN with default values.
3151    Values that are not explicitly initialized here hold zero.  */
3152 static void
3153 init_h_i_d (rtx insn)
3154 {
3155 INSN_LUID (insn) = luid++;
3156 INSN_COST (insn) = -1;
3157 TODO_SPEC (insn) = HARD_DEP;
3158 QUEUE_INDEX (insn) = QUEUE_NOWHERE;
3159 INSN_TICK (insn) = INVALID_TICK;
3160 INTER_TICK (insn) = INVALID_TICK;
3161 find_insn_reg_weight1 (insn);
3162
3163 /* These two lists will be freed in schedule_insn (). */
3164 INSN_BACK_DEPS (insn) = create_deps_list (false);
3165 INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
3166
3167 /* This one should be allocated on the obstack because it should live till
3168 the scheduling ends. */
3169 INSN_FORW_DEPS (insn) = create_deps_list (true);
3170 }
3171
3172 /* Generates recovery code for INSN. */
3173 static void
3174 generate_recovery_code (rtx insn)
3175 {
3176 if (TODO_SPEC (insn) & BEGIN_SPEC)
3177 begin_speculative_block (insn);
3178
3179 /* Here we have an insn with no dependencies on
3180    instructions other than CHECK_SPEC ones.  */
3181
3182 if (TODO_SPEC (insn) & BE_IN_SPEC)
3183 add_to_speculative_block (insn);
3184 }
3185
3186 /* Helper function.
3187 Tries to add speculative dependencies of type FS between instructions
3188 in deps_list L and TWIN. */
3189 static void
3190 process_insn_forw_deps_be_in_spec (deps_list_t l, rtx twin, ds_t fs)
3191 {
3192 dep_link_t link;
3193
3194 FOR_EACH_DEP_LINK (link, l)
3195 {
3196 ds_t ds;
3197 rtx consumer;
3198
3199 consumer = DEP_LINK_CON (link);
3200
3201 ds = DEP_LINK_STATUS (link);
3202
3203 if (/* If we want to create speculative dep. */
3204 fs
3205 /* And we can do that because this is a true dep. */
3206 && (ds & DEP_TYPES) == DEP_TRUE)
3207 {
3208 gcc_assert (!(ds & BE_IN_SPEC));
3209
3210 if (/* If this dep can be overcome with 'begin speculation'. */
3211 ds & BEGIN_SPEC)
3212 /* Then we have a choice: keep the dep 'begin speculative'
3213 or transform it into 'be in speculative'. */
3214 {
3215 if (/* In try_ready we assert that if an insn once became ready
3216    it can be removed from the ready (or queue) list only
3217    due to a backend decision.  Hence we can't let the
3218    probability of the speculative dep decrease.  */
3219 dep_weak (ds) <= dep_weak (fs))
3220 /* Transform it to be in speculative. */
3221 ds = (ds & ~BEGIN_SPEC) | fs;
3222 }
3223 else
3224 /* Mark the dep as 'be in speculative'. */
3225 ds |= fs;
3226 }
3227
3228 add_back_forw_dep (consumer, twin, DEP_LINK_KIND (link), ds);
3229 }
3230 }
3231
3232 /* Generates recovery code for BEGIN speculative INSN. */
3233 static void
3234 begin_speculative_block (rtx insn)
3235 {
3236 if (TODO_SPEC (insn) & BEGIN_DATA)
3237 nr_begin_data++;
3238 if (TODO_SPEC (insn) & BEGIN_CONTROL)
3239 nr_begin_control++;
3240
3241 create_check_block_twin (insn, false);
3242
3243 TODO_SPEC (insn) &= ~BEGIN_SPEC;
3244 }
3245
3246 /* Generates recovery code for BE_IN speculative INSN. */
3247 static void
3248 add_to_speculative_block (rtx insn)
3249 {
3250 ds_t ts;
3251 dep_link_t link;
3252 rtx twins = NULL;
3253 rtx_vec_t priorities_roots;
3254
3255 ts = TODO_SPEC (insn);
3256 gcc_assert (!(ts & ~BE_IN_SPEC));
3257
3258 if (ts & BE_IN_DATA)
3259 nr_be_in_data++;
3260 if (ts & BE_IN_CONTROL)
3261 nr_be_in_control++;
3262
3263 TODO_SPEC (insn) &= ~BE_IN_SPEC;
3264 gcc_assert (!TODO_SPEC (insn));
3265
3266 DONE_SPEC (insn) |= ts;
3267
3268 /* First we convert all simple checks to branchy ones.  */
3269 for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
3270 {
3271 rtx check = DEP_LINK_PRO (link);
3272
3273 if (IS_SPECULATION_SIMPLE_CHECK_P (check))
3274 {
3275 create_check_block_twin (check, true);
3276
3277 /* Restart search. */
3278 link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
3279 }
3280 else
3281 /* Continue search. */
3282 link = DEP_LINK_NEXT (link);
3283 }
3284
3285 priorities_roots = NULL;
3286 clear_priorities (insn, &priorities_roots);
3287
3288 do
3289 {
3290 dep_link_t link;
3291 rtx check, twin;
3292 basic_block rec;
3293
3294 link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
3295
3296 gcc_assert ((DEP_LINK_STATUS (link) & BEGIN_SPEC) == 0
3297 && (DEP_LINK_STATUS (link) & BE_IN_SPEC) != 0
3298 && (DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
3299
3300 check = DEP_LINK_PRO (link);
3301
3302 gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
3303 && QUEUE_INDEX (check) == QUEUE_NOWHERE);
3304
3305 rec = BLOCK_FOR_INSN (check);
3306
3307 twin = emit_insn_before (copy_rtx (PATTERN (insn)), BB_END (rec));
3308 extend_global (twin);
3309
3310 copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
3311 INSN_RESOLVED_BACK_DEPS (insn),
3312 twin);
3313
3314 if (sched_verbose && spec_info->dump)
3315 /* INSN_BB (insn) isn't determined for twin insns yet.
3316 So we can't use current_sched_info->print_insn. */
3317 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
3318 INSN_UID (twin), rec->index);
3319
3320 twins = alloc_INSN_LIST (twin, twins);
3321
3322 /* Add dependences between TWIN and all appropriate
3323 instructions from REC. */
3324 do
3325 {
3326 add_back_forw_dep (twin, check, REG_DEP_TRUE, DEP_TRUE);
3327
3328 do
3329 {
3330 link = DEP_LINK_NEXT (link);
3331
3332 if (link != NULL)
3333 {
3334 check = DEP_LINK_PRO (link);
3335 if (BLOCK_FOR_INSN (check) == rec)
3336 break;
3337 }
3338 else
3339 break;
3340 }
3341 while (1);
3342 }
3343 while (link != NULL);
3344
3345 process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, ts);
3346
3347 /* Remove all dependencies between INSN and insns in REC. */
3348 for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
3349 {
3350 check = DEP_LINK_PRO (link);
3351
3352 if (BLOCK_FOR_INSN (check) == rec)
3353 {
3354 delete_back_forw_dep (link);
3355
3356 /* Restart search. */
3357 link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
3358 }
3359 else
3360 /* Continue search. */
3361 link = DEP_LINK_NEXT (link);
3362 }
3363 }
3364 while (!deps_list_empty_p (INSN_BACK_DEPS (insn)));
3365
3366 /* We couldn't have added the dependencies between INSN and TWINS earlier
3367 because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
3368 while (twins)
3369 {
3370 rtx twin;
3371
3372 twin = XEXP (twins, 0);
3373 add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
3374
3375 twin = XEXP (twins, 1);
3376 free_INSN_LIST_node (twins);
3377 twins = twin;
3378 }
3379
3380 calc_priorities (priorities_roots);
3381 VEC_free (rtx, heap, priorities_roots);
3382 }
3383
3384 /* Extends the array pointed to by P and zero-fills only the new part.  */
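/* E.g. extend_h_i_d () and extend_ready () above grow H_I_D and READY_TRY
   this way.  */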
3385 void *
3386 xrecalloc (void *p, size_t new_nmemb, size_t old_nmemb, size_t size)
3387 {
3388 gcc_assert (new_nmemb >= old_nmemb);
3389 p = XRESIZEVAR (void, p, new_nmemb * size);
3390 memset (((char *) p) + old_nmemb * size, 0, (new_nmemb - old_nmemb) * size);
3391 return p;
3392 }
3393
3394 /* Return the probability of speculation success for the speculation
3395 status DS. */
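/* For example (illustrative weights): if DS carries two speculation types
   with weights W1 and W2 (each on the MAX_DEP_WEAK scale), the combined
   weakness is roughly W1 * W2 / MAX_DEP_WEAK, clamped to the
   [MIN_DEP_WEAK, MAX_DEP_WEAK] range, as the loop below computes.  */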
3396 static dw_t
3397 dep_weak (ds_t ds)
3398 {
3399 ds_t res = 1, dt;
3400 int n = 0;
3401
3402 dt = FIRST_SPEC_TYPE;
3403 do
3404 {
3405 if (ds & dt)
3406 {
3407 res *= (ds_t) get_dep_weak (ds, dt);
3408 n++;
3409 }
3410
3411 if (dt == LAST_SPEC_TYPE)
3412 break;
3413 dt <<= SPEC_TYPE_SHIFT;
3414 }
3415 while (1);
3416
3417 gcc_assert (n);
3418 while (--n)
3419 res /= MAX_DEP_WEAK;
3420
3421 if (res < MIN_DEP_WEAK)
3422 res = MIN_DEP_WEAK;
3423
3424 gcc_assert (res <= MAX_DEP_WEAK);
3425
3426 return (dw_t) res;
3427 }
3428
3429 /* Helper function.
3430 Find fallthru edge from PRED. */
3431 static edge
3432 find_fallthru_edge (basic_block pred)
3433 {
3434 edge e;
3435 edge_iterator ei;
3436 basic_block succ;
3437
3438 succ = pred->next_bb;
3439 gcc_assert (succ->prev_bb == pred);
3440
3441 if (EDGE_COUNT (pred->succs) <= EDGE_COUNT (succ->preds))
3442 {
3443 FOR_EACH_EDGE (e, ei, pred->succs)
3444 if (e->flags & EDGE_FALLTHRU)
3445 {
3446 gcc_assert (e->dest == succ);
3447 return e;
3448 }
3449 }
3450 else
3451 {
3452 FOR_EACH_EDGE (e, ei, succ->preds)
3453 if (e->flags & EDGE_FALLTHRU)
3454 {
3455 gcc_assert (e->src == pred);
3456 return e;
3457 }
3458 }
3459
3460 return NULL;
3461 }
3462
3463 /* Initialize BEFORE_RECOVERY variable. */
3464 static void
3465 init_before_recovery (void)
3466 {
3467 basic_block last;
3468 edge e;
3469
3470 last = EXIT_BLOCK_PTR->prev_bb;
3471 e = find_fallthru_edge (last);
3472
3473 if (e)
3474 {
3475 /* We create two basic blocks:
3476    1. A single-instruction block that is inserted right after E->SRC
3477    and jumps to
3478    2. An empty block right before EXIT_BLOCK.
3479    Recovery blocks will be emitted between these two blocks.  */
3480
3481 basic_block single, empty;
3482 rtx x, label;
3483
3484 single = create_empty_bb (last);
3485 empty = create_empty_bb (single);
3486
3487 single->count = last->count;
3488 empty->count = last->count;
3489 single->frequency = last->frequency;
3490 empty->frequency = last->frequency;
3491 BB_COPY_PARTITION (single, last);
3492 BB_COPY_PARTITION (empty, last);
3493
3494 redirect_edge_succ (e, single);
3495 make_single_succ_edge (single, empty, 0);
3496 make_single_succ_edge (empty, EXIT_BLOCK_PTR,
3497 EDGE_FALLTHRU | EDGE_CAN_FALLTHRU);
3498
3499 label = block_label (empty);
3500 x = emit_jump_insn_after (gen_jump (label), BB_END (single));
3501 JUMP_LABEL (x) = label;
3502 LABEL_NUSES (label)++;
3503 extend_global (x);
3504
3505 emit_barrier_after (x);
3506
3507 add_block (empty, 0);
3508 add_block (single, 0);
3509
3510 before_recovery = single;
3511
3512 if (sched_verbose >= 2 && spec_info->dump)
3513 fprintf (spec_info->dump,
3514 ";;\t\tFixed fallthru to EXIT : %d->>%d->%d->>EXIT\n",
3515 last->index, single->index, empty->index);
3516 }
3517 else
3518 before_recovery = last;
3519 }
3520
3521 /* Returns new recovery block. */
3522 static basic_block
3523 create_recovery_block (void)
3524 {
3525 rtx label;
3526 rtx barrier;
3527 basic_block rec;
3528
3529 added_recovery_block_p = true;
3530
3531 if (!before_recovery)
3532 init_before_recovery ();
3533
3534 barrier = get_last_bb_insn (before_recovery);
3535 gcc_assert (BARRIER_P (barrier));
3536
3537 label = emit_label_after (gen_label_rtx (), barrier);
3538
3539 rec = create_basic_block (label, label, before_recovery);
3540
3541 /* A recovery block always ends with an unconditional jump.  */
3542 emit_barrier_after (BB_END (rec));
3543
3544 if (BB_PARTITION (before_recovery) != BB_UNPARTITIONED)
3545 BB_SET_PARTITION (rec, BB_COLD_PARTITION);
3546
3547 if (sched_verbose && spec_info->dump)
3548 fprintf (spec_info->dump, ";;\t\tGenerated recovery block rec%d\n",
3549 rec->index);
3550
3551 before_recovery = rec;
3552
3553 return rec;
3554 }
3555
3556 /* This function creates recovery code for INSN.  If MUTATE_P is nonzero,
3557    INSN is a simple check that should be converted into a branchy one.  */
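/* In outline (see the code below): a check insn is emitted before INSN
   (a jump to a freshly created recovery block when the target requests one,
   a simple check otherwise); a TWIN copy of INSN is placed in the recovery
   block (or the check itself plays that role); for branchy checks the block
   is split and new CFG edges are added; finally INSN's backward dependences
   are moved to CHECK and its forward dependences to TWIN.  */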
3558 static void
3559 create_check_block_twin (rtx insn, bool mutate_p)
3560 {
3561 basic_block rec;
3562 rtx label, check, twin;
3563 dep_link_t link;
3564 ds_t fs;
3565
3566 gcc_assert (ORIG_PAT (insn)
3567 && (!mutate_p
3568 || (IS_SPECULATION_SIMPLE_CHECK_P (insn)
3569 && !(TODO_SPEC (insn) & SPECULATIVE))));
3570
3571 /* Create recovery block. */
3572 if (mutate_p || targetm.sched.needs_block_p (insn))
3573 {
3574 rec = create_recovery_block ();
3575 label = BB_HEAD (rec);
3576 }
3577 else
3578 {
3579 rec = EXIT_BLOCK_PTR;
3580 label = 0;
3581 }
3582
3583 /* Emit CHECK. */
3584 check = targetm.sched.gen_check (insn, label, mutate_p);
3585
3586 if (rec != EXIT_BLOCK_PTR)
3587 {
3588 /* To have mem_reg alive at the beginning of second_bb,
3589    we emit the check BEFORE insn, so that after splitting,
3590    insn will be at the beginning of second_bb, which will
3591    provide us with the correct life information.  */
3592 check = emit_jump_insn_before (check, insn);
3593 JUMP_LABEL (check) = label;
3594 LABEL_NUSES (label)++;
3595 }
3596 else
3597 check = emit_insn_before (check, insn);
3598
3599 /* Extend data structures. */
3600 extend_all (check);
3601 RECOVERY_BLOCK (check) = rec;
3602
3603 if (sched_verbose && spec_info->dump)
3604 fprintf (spec_info->dump, ";;\t\tGenerated check insn : %s\n",
3605 (*current_sched_info->print_insn) (check, 0));
3606
3607 gcc_assert (ORIG_PAT (insn));
3608
3609 /* Initialize TWIN (twin is a duplicate of original instruction
3610 in the recovery block). */
3611 if (rec != EXIT_BLOCK_PTR)
3612 {
3613 FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
3614 if ((DEP_LINK_STATUS (link) & DEP_OUTPUT) != 0)
3615 {
3616 struct _dep _dep, *dep = &_dep;
3617
3618 init_dep (dep, DEP_LINK_PRO (link), check, REG_DEP_TRUE);
3619
3620 add_back_dep_to_deps_list (INSN_RESOLVED_BACK_DEPS (check), dep);
3621 }
3622
3623 twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
3624 extend_global (twin);
3625
3626 if (sched_verbose && spec_info->dump)
3627 /* INSN_BB (insn) isn't determined for twin insns yet.
3628 So we can't use current_sched_info->print_insn. */
3629 fprintf (spec_info->dump, ";;\t\tGenerated twin insn : %d/rec%d\n",
3630 INSN_UID (twin), rec->index);
3631 }
3632 else
3633 {
3634 ORIG_PAT (check) = ORIG_PAT (insn);
3635 HAS_INTERNAL_DEP (check) = 1;
3636 twin = check;
3637 /* ??? We probably should change all OUTPUT dependencies to
3638 (TRUE | OUTPUT). */
3639 }
3640
3641 copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
3642 INSN_RESOLVED_BACK_DEPS (insn),
3643 twin);
3644
3645 if (rec != EXIT_BLOCK_PTR)
3646 /* In case of branchy check, fix CFG. */
3647 {
3648 basic_block first_bb, second_bb;
3649 rtx jump;
3650 edge e;
3651 int edge_flags;
3652
3653 first_bb = BLOCK_FOR_INSN (check);
3654 e = split_block (first_bb, check);
3655 /* split_block emits note if *check == BB_END. Probably it
3656 is better to rip that note off. */
3657 gcc_assert (e->src == first_bb);
3658 second_bb = e->dest;
3659
3660 /* This fixes the incoming edge.  */
3661 /* ??? Which other flags should be specified? */
3662 if (BB_PARTITION (first_bb) != BB_PARTITION (rec))
3663 /* Partition type is the same, if it is "unpartitioned". */
3664 edge_flags = EDGE_CROSSING;
3665 else
3666 edge_flags = 0;
3667
3668 e = make_edge (first_bb, rec, edge_flags);
3669
3670 add_block (second_bb, first_bb);
3671
3672 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (BB_HEAD (second_bb)));
3673 label = block_label (second_bb);
3674 jump = emit_jump_insn_after (gen_jump (label), BB_END (rec));
3675 JUMP_LABEL (jump) = label;
3676 LABEL_NUSES (label)++;
3677 extend_global (jump);
3678
3679 if (BB_PARTITION (second_bb) != BB_PARTITION (rec))
3680 /* Partition type is the same, if it is "unpartitioned". */
3681 {
3682 /* Rewritten from cfgrtl.c. */
3683 if (flag_reorder_blocks_and_partition
3684 && targetm.have_named_sections
3685 /*&& !any_condjump_p (jump)*/)
3686 /* any_condjump_p (jump) == false.
3687 We don't need the same note for the check because
3688 any_condjump_p (check) == true. */
3689 {
3690 REG_NOTES (jump) = gen_rtx_EXPR_LIST (REG_CROSSING_JUMP,
3691 NULL_RTX,
3692 REG_NOTES (jump));
3693 }
3694 edge_flags = EDGE_CROSSING;
3695 }
3696 else
3697 edge_flags = 0;
3698
3699 make_single_succ_edge (rec, second_bb, edge_flags);
3700
3701 add_block (rec, EXIT_BLOCK_PTR);
3702 }
3703
3704 /* Move backward dependences from INSN to CHECK and
3705 move forward dependences from INSN to TWIN. */
3706 FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
3707 {
3708 rtx pro = DEP_LINK_PRO (link);
3709 enum reg_note dk = DEP_LINK_KIND (link);
3710 ds_t ds;
3711
3712 /* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
3713 check --TRUE--> producer ??? or ANTI ???
3714 twin --TRUE--> producer
3715 twin --ANTI--> check
3716
3717 If BEGIN_CONTROL: [insn ~~ANTI~~> producer]:
3718 check --ANTI--> producer
3719 twin --ANTI--> producer
3720 twin --ANTI--> check
3721
3722 If BE_IN_SPEC: [insn ~~TRUE~~> producer]:
3723 check ~~TRUE~~> producer
3724 twin ~~TRUE~~> producer
3725 twin --ANTI--> check */
3726
3727 ds = DEP_LINK_STATUS (link);
3728
3729 if (ds & BEGIN_SPEC)
3730 {
3731 gcc_assert (!mutate_p);
3732 ds &= ~BEGIN_SPEC;
3733 }
3734
3735 if (rec != EXIT_BLOCK_PTR)
3736 {
3737 add_back_forw_dep (check, pro, dk, ds);
3738 add_back_forw_dep (twin, pro, dk, ds);
3739 }
3740 else
3741 add_back_forw_dep (check, pro, dk, ds);
3742 }
3743
3744 for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
3745 if ((DEP_LINK_STATUS (link) & BEGIN_SPEC)
3746 || mutate_p)
3747 /* We can delete this dep only if we totally overcome it with
3748 BEGIN_SPECULATION. */
3749 {
3750 delete_back_forw_dep (link);
3751
3752 /* Restart search. */
3753 link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
3754 }
3755 else
3756 /* Continue search. */
3757 link = DEP_LINK_NEXT (link);
3758
3759 fs = 0;
3760
3761 /* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
3762 here. */
3763
3764 gcc_assert (!DONE_SPEC (insn));
3765
3766 if (!mutate_p)
3767 {
3768 ds_t ts = TODO_SPEC (insn);
3769
3770 DONE_SPEC (insn) = ts & BEGIN_SPEC;
3771 CHECK_SPEC (check) = ts & BEGIN_SPEC;
3772
3773 if (ts & BEGIN_DATA)
3774 fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
3775 if (ts & BEGIN_CONTROL)
3776 fs = set_dep_weak (fs, BE_IN_CONTROL, get_dep_weak (ts, BEGIN_CONTROL));
3777 }
3778 else
3779 CHECK_SPEC (check) = CHECK_SPEC (insn);
3780
3781 /* Future speculations: call the helper. */
3782 process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, fs);
3783
3784 if (rec != EXIT_BLOCK_PTR)
3785 {
3786 /* Which types of dependencies to use here is, in general,
3787    a machine-dependent question...  But, for now,
3788    it is not.  */
3789
3790 if (!mutate_p)
3791 {
3792 add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE);
3793 add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
3794 }
3795 else
3796 {
3797 dep_link_t link;
3798
3799 if (spec_info->dump)
3800 fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
3801 (*current_sched_info->print_insn) (insn, 0));
3802
3803 /* Remove all forward dependencies of the INSN. */
3804 link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
3805 while (link != NULL)
3806 {
3807 delete_back_forw_dep (link);
3808 link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
3809 }
3810
3811 if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
3812 try_ready (check);
3813
3814 sched_remove_insn (insn);
3815 }
3816
3817 add_back_forw_dep (twin, check, REG_DEP_ANTI, DEP_ANTI);
3818 }
3819 else
3820 add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
3821
3822 if (!mutate_p)
3823 /* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
3824 because it'll be done later in add_to_speculative_block. */
3825 {
3826 rtx_vec_t priorities_roots = NULL;
3827
3828 clear_priorities (twin, &priorities_roots);
3829 calc_priorities (priorities_roots);
3830 VEC_free (rtx, heap, priorities_roots);
3831 }
3832 }
3833
3836 3834 /* Remove dependencies between instructions in the recovery block REC
3837 3835 and the usual region instructions. Inner dependences are kept so
3838 3836 they won't have to be recomputed. */
3837 static void
3838 fix_recovery_deps (basic_block rec)
3839 {
3840 dep_link_t link;
3841 rtx note, insn, jump, ready_list = 0;
3842 bitmap_head in_ready;
3843 rtx link1;
3844
3845 bitmap_initialize (&in_ready, 0);
3846
3849 3847 /* NOTE is the basic block note of REC. */
3848 note = NEXT_INSN (BB_HEAD (rec));
3849 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
3850 insn = BB_END (rec);
3851 gcc_assert (JUMP_P (insn));
3852 insn = PREV_INSN (insn);
3853
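  /* Walk REC backwards from the insn just before the trailing jump towards
     the basic block note, removing forward dependencies whose consumers lie
     outside REC; those consumers are collected so they can be passed to
     try_ready afterwards.  */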
3854 do
3855 {
3856 for (link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn)); link != NULL;)
3857 {
3858 rtx consumer;
3859
3860 consumer = DEP_LINK_CON (link);
3861
3862 if (BLOCK_FOR_INSN (consumer) != rec)
3863 {
3864 delete_back_forw_dep (link);
3865
3866 if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
3867 {
3868 ready_list = alloc_INSN_LIST (consumer, ready_list);
3869 bitmap_set_bit (&in_ready, INSN_LUID (consumer));
3870 }
3871
3872 /* Restart search. */
3873 link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
3874 }
3875 else
3876 {
3877 gcc_assert ((DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
3878
3879 /* Continue search. */
3880 link = DEP_LINK_NEXT (link);
3881 }
3882 }
3883
3884 insn = PREV_INSN (insn);
3885 }
3886 while (insn != note);
3887
3888 bitmap_clear (&in_ready);
3889
3892 3890 /* Try to add the collected instructions to the ready list or the queue. */
3891 for (link1 = ready_list; link1; link1 = XEXP (link1, 1))
3892 try_ready (XEXP (link1, 0));
3893 free_INSN_LIST_list (&ready_list);
3894
3897 3895 /* Fix the jump's dependences. */
3896 insn = BB_HEAD (rec);
3897 jump = BB_END (rec);
3898
3899 gcc_assert (LABEL_P (insn));
3900 insn = NEXT_INSN (insn);
3901
3902 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (insn));
3903 add_jump_dependencies (insn, jump);
3904 }
3905
3908 3906 /* Change the pattern of INSN to NEW_PAT. */
3907 static void
3908 change_pattern (rtx insn, rtx new_pat)
3909 {
3910 int t;
3911
3912 t = validate_change (insn, &PATTERN (insn), new_pat, 0);
3913 gcc_assert (t);
3914 /* Invalidate INSN_COST, so it'll be recalculated. */
3915 INSN_COST (insn) = -1;
3916 /* Invalidate INSN_TICK, so it'll be recalculated. */
3917 INSN_TICK (insn) = INVALID_TICK;
3918 dfa_clear_single_insn_cache (insn);
3919 }
3920
3921
3924 3922 /* Return -1 if INSN cannot be speculated,
3925 3923 0 if, for speculation in the REQUESTed mode, it is OK to use the
3926 3924 current instruction pattern, or
3927 3925 1 if the pattern needs to be changed; *NEW_PAT then holds the speculative pattern. */
3926 static int
3927 speculate_insn (rtx insn, ds_t request, rtx *new_pat)
3928 {
3929 gcc_assert (current_sched_info->flags & DO_SPECULATION
3930 && (request & SPECULATIVE));
3931
3932 if (!NONJUMP_INSN_P (insn)
3933 || HAS_INTERNAL_DEP (insn)
3934 || SCHED_GROUP_P (insn)
3935 || side_effects_p (PATTERN (insn))
3936 || (request & spec_info->mask) != request)
3937 return -1;
3938
3939 gcc_assert (!IS_SPECULATION_CHECK_P (insn));
3940
3941 if (request & BE_IN_SPEC)
3942 {
3943 if (may_trap_p (PATTERN (insn)))
3944 return -1;
3945
3946 if (!(request & BEGIN_SPEC))
3947 return 0;
3948 }
3949
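  /* Let the target decide whether INSN can be made speculative for the
     BEGIN_* part of REQUEST and, if the pattern must change, provide the
     speculative pattern in *NEW_PAT.  */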
3950 return targetm.sched.speculate_insn (insn, request & BEGIN_SPEC, new_pat);
3951 }
3952
3953 /* Print some information about block BB, which starts with HEAD and
3954 ends with TAIL, before scheduling it.
3957 3955 I is zero if the scheduler is about to start with a fresh ebb. */
3956 static void
3957 dump_new_block_header (int i, basic_block bb, rtx head, rtx tail)
3958 {
3959 if (!i)
3960 fprintf (sched_dump,
3961 ";; ======================================================\n");
3962 else
3963 fprintf (sched_dump,
3964 ";; =====================ADVANCING TO=====================\n");
3965 fprintf (sched_dump,
3966 ";; -- basic block %d from %d to %d -- %s reload\n",
3967 bb->index, INSN_UID (head), INSN_UID (tail),
3968 (reload_completed ? "after" : "before"));
3969 fprintf (sched_dump,
3970 ";; ======================================================\n");
3971 fprintf (sched_dump, "\n");
3972 }
3973
3976 3974 /* Unlink basic block notes and labels and save them, so they
3977 3975 can easily be restored. We unlink basic block notes in the EBB to
3978 3976 stay compatible with the previous code, as target backends
3979 3977 assume that there will be only instructions between
3980 3978 current_sched_info->{head and tail}. We restore these notes as soon
3981 3979 as we can.
3982 3980 FIRST (LAST) is the first (last) basic block in the ebb.
3983 3981 NB: In the usual case (FIRST == LAST) nothing is actually done. */
3982 void
3983 unlink_bb_notes (basic_block first, basic_block last)
3984 {
3985 /* We DON'T unlink basic block notes of the first block in the ebb. */
3986 if (first == last)
3987 return;
3988
3989 bb_header = xmalloc (last_basic_block * sizeof (*bb_header));
3990
3991 /* Make a sentinel. */
3992 if (last->next_bb != EXIT_BLOCK_PTR)
3993 bb_header[last->next_bb->index] = 0;
3994
3995 first = first->next_bb;
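  /* Walk the blocks of the ebb from LAST back to the block following FIRST,
     splicing each block's label / basic block note pair out of the insn
     stream and saving it in bb_header so restore_bb_notes can put it back.  */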
3996 do
3997 {
3998 rtx prev, label, note, next;
3999
4000 label = BB_HEAD (last);
4001 if (LABEL_P (label))
4002 note = NEXT_INSN (label);
4003 else
4004 note = label;
4005 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4006
4007 prev = PREV_INSN (label);
4008 next = NEXT_INSN (note);
4009 gcc_assert (prev && next);
4010
4011 NEXT_INSN (prev) = next;
4012 PREV_INSN (next) = prev;
4013
4014 bb_header[last->index] = label;
4015
4016 if (last == first)
4017 break;
4018
4019 last = last->prev_bb;
4020 }
4021 while (1);
4022 }
4023
4024 /* Restore basic block notes.
4025 FIRST is the first basic block in the ebb. */
4026 static void
4027 restore_bb_notes (basic_block first)
4028 {
4029 if (!bb_header)
4030 return;
4031
4032 /* We DON'T unlink basic block notes of the first block in the ebb. */
4033 first = first->next_bb;
4036 4034 /* Remember: FIRST is now actually the second basic block in the ebb. */
4035
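  /* Re-insert each saved label / basic block note pair after its original
     predecessor; the walk stops at the sentinel entry (or at the exit block)
     set up by unlink_bb_notes.  */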
4036 while (first != EXIT_BLOCK_PTR
4037 && bb_header[first->index])
4038 {
4039 rtx prev, label, note, next;
4040
4041 label = bb_header[first->index];
4042 prev = PREV_INSN (label);
4043 next = NEXT_INSN (prev);
4044
4045 if (LABEL_P (label))
4046 note = NEXT_INSN (label);
4047 else
4048 note = label;
4049 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4050
4051 bb_header[first->index] = 0;
4052
4053 NEXT_INSN (prev) = label;
4054 NEXT_INSN (note) = next;
4055 PREV_INSN (next) = note;
4056
4057 first = first->next_bb;
4058 }
4059
4060 free (bb_header);
4061 bb_header = 0;
4062 }
4063
4066 4064 /* Extend the per basic block data structures of the scheduler
4067 4065 so that they cover all basic blocks in the current CFG, including
4068 4066 any blocks created since the structures were last extended. */
4067 static void
4068 extend_bb (void)
4069 {
4070 rtx insn;
4071
4072 old_last_basic_block = last_basic_block;
4073
4074 /* The following is done to keep current_sched_info->next_tail non null. */
4075
4076 insn = BB_END (EXIT_BLOCK_PTR->prev_bb);
4077 if (NEXT_INSN (insn) == 0
4078 || (!NOTE_P (insn)
4079 && !LABEL_P (insn)
4080 /* Don't emit a NOTE if it would end up before a BARRIER. */
4081 && !BARRIER_P (NEXT_INSN (insn))))
4082 {
4083 rtx note = emit_note_after (NOTE_INSN_DELETED, insn);
4086 4084 /* Make the new note appear outside the BB. */
4085 set_block_for_insn (note, NULL);
4086 BB_END (EXIT_BLOCK_PTR->prev_bb) = insn;
4087 }
4088 }
4089
4090 /* Add a basic block BB to extended basic block EBB.
4093 4091 If EBB is EXIT_BLOCK_PTR, then BB is a recovery block.
4092 If EBB is NULL, then BB should be a new region. */
4093 void
4094 add_block (basic_block bb, basic_block ebb)
4095 {
4096 gcc_assert (current_sched_info->flags & NEW_BBS);
4097
4098 extend_bb ();
4099
4100 if (current_sched_info->add_block)
4101 /* This changes only data structures of the front-end. */
4102 current_sched_info->add_block (bb, ebb);
4103 }
4104
4105 /* Helper function.
4108 4106 Fix the CFG after both intra- and inter-block movement of
4107 control_flow_insn_p JUMP. */
4108 static void
4109 fix_jump_move (rtx jump)
4110 {
4111 basic_block bb, jump_bb, jump_bb_next;
4112
4113 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4114 jump_bb = BLOCK_FOR_INSN (jump);
4115 jump_bb_next = jump_bb->next_bb;
4116
4117 gcc_assert (current_sched_info->flags & SCHED_EBB
4118 || IS_SPECULATION_BRANCHY_CHECK_P (jump));
4119
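  /* JUMP has just been moved (see the function comment above); update
     BB_END of BB, JUMP_BB and JUMP_BB_NEXT so that each block again ends
     at its own last insn.  */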
4120 if (!NOTE_INSN_BASIC_BLOCK_P (BB_END (jump_bb_next)))
4123 4121 /* If jump_bb_next is not empty. */
4122 BB_END (jump_bb) = BB_END (jump_bb_next);
4123
4124 if (BB_END (bb) != PREV_INSN (jump))
4127 4125 /* Then there are instructions after the jump that should be
4128 4126 moved to jump_bb_next. */
4127 BB_END (jump_bb_next) = BB_END (bb);
4128 else
4129 /* Otherwise jump_bb_next is empty. */
4130 BB_END (jump_bb_next) = NEXT_INSN (BB_HEAD (jump_bb_next));
4131
4134 4132 /* To make the assertion in move_insn happy. */
4133 BB_END (bb) = PREV_INSN (jump);
4134
4135 update_bb_for_insn (jump_bb_next);
4136 }
4137
4138 /* Fix CFG after interblock movement of control_flow_insn_p JUMP. */
4139 static void
4140 move_block_after_check (rtx jump)
4141 {
4142 basic_block bb, jump_bb, jump_bb_next;
4143 VEC(edge,gc) *t;
4144
4145 bb = BLOCK_FOR_INSN (PREV_INSN (jump));
4146 jump_bb = BLOCK_FOR_INSN (jump);
4147 jump_bb_next = jump_bb->next_bb;
4148
4149 update_bb_for_insn (jump_bb);
4150
4151 gcc_assert (IS_SPECULATION_CHECK_P (jump)
4152 || IS_SPECULATION_CHECK_P (BB_END (jump_bb_next)));
4153
4154 unlink_block (jump_bb_next);
4155 link_block (jump_bb_next, bb);
4156
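  /* Rotate the successor lists: BB receives JUMP_BB's successors, JUMP_BB
     receives JUMP_BB_NEXT's, and JUMP_BB_NEXT receives BB's original
     successors (saved in T below).  */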
4157 t = bb->succs;
4158 bb->succs = 0;
4159 move_succs (&(jump_bb->succs), bb);
4160 move_succs (&(jump_bb_next->succs), jump_bb);
4161 move_succs (&t, jump_bb_next);
4162
4163 df_mark_solutions_dirty ();
4164
4165 if (current_sched_info->fix_recovery_cfg)
4166 current_sched_info->fix_recovery_cfg
4167 (bb->index, jump_bb->index, jump_bb_next->index);
4168 }
4169
4170 /* Helper function for move_block_after_check.
4173 4171 This function attaches the edge vector pointed to by SUCCSP to
4172 block TO. */
4173 static void
4174 move_succs (VEC(edge,gc) **succsp, basic_block to)
4175 {
4176 edge e;
4177 edge_iterator ei;
4178
4179 gcc_assert (to->succs == 0);
4180
4181 to->succs = *succsp;
4182
4183 FOR_EACH_EDGE (e, ei, to->succs)
4184 e->src = to;
4185
4186 *succsp = 0;
4187 }
4188
4189 /* Remove INSN from the instruction stream.
4192 4190 INSN should not have any dependencies. */
4191 static void
4192 sched_remove_insn (rtx insn)
4193 {
4194 change_queue_index (insn, QUEUE_NOWHERE);
4195 current_sched_info->add_remove_insn (insn, 1);
4196 remove_insn (insn);
4197 }
4198
4201 4199 /* Clear the priorities of all instructions that are forward dependent on INSN.
4202 4200 Store in the vector pointed to by ROOTS_PTR the insns on which priority ()
4203 4201 should be invoked to reinitialize all the cleared priorities. */
4202 static void
4203 clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
4204 {
4205 dep_link_t link;
4206 bool insn_is_root_p = true;
4207
4208 gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
4209
4210 FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
4211 {
4212 dep_t dep = DEP_LINK_DEP (link);
4213 rtx pro = DEP_PRO (dep);
4214
4215 if (INSN_PRIORITY_STATUS (pro) >= 0
4216 && QUEUE_INDEX (insn) != QUEUE_SCHEDULED)
4217 {
4218 /* If DEP doesn't contribute to priority then INSN itself should
4219 be added to priority roots. */
4220 if (contributes_to_priority_p (dep))
4221 insn_is_root_p = false;
4222
4223 INSN_PRIORITY_STATUS (pro) = -1;
4224 clear_priorities (pro, roots_ptr);
4225 }
4226 }
4227
4228 if (insn_is_root_p)
4229 VEC_safe_push (rtx, heap, *roots_ptr, insn);
4230 }
4231
4234 4232 /* Recompute the priorities of instructions whose priorities might have
4235 4233 changed. ROOTS is a vector of instructions whose priority computation will
4236 4234 trigger initialization of all the cleared priorities. */
4235 static void
4236 calc_priorities (rtx_vec_t roots)
4237 {
4238 int i;
4239 rtx insn;
4240
4241 for (i = 0; VEC_iterate (rtx, roots, i, insn); i++)
4242 priority (insn);
4243 }
4244
4245
4246 /* Add dependences between JUMP and other instructions in the recovery
4249 4247 block. INSN is the first insn in the recovery block. */
4248 static void
4249 add_jump_dependencies (rtx insn, rtx jump)
4250 {
4251 do
4252 {
4253 insn = NEXT_INSN (insn);
4254 if (insn == jump)
4255 break;
4256
4257 if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
4258 add_back_forw_dep (jump, insn, REG_DEP_ANTI, DEP_ANTI);
4259 }
4260 while (1);
4261
4262 gcc_assert (!deps_list_empty_p (INSN_BACK_DEPS (jump)));
4263 }
4264
4265 /* Return the NOTE_INSN_BASIC_BLOCK of BB. */
4266 rtx
4267 bb_note (basic_block bb)
4268 {
4269 rtx note;
4270
4271 note = BB_HEAD (bb);
4272 if (LABEL_P (note))
4273 note = NEXT_INSN (note);
4274
4275 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (note));
4276 return note;
4277 }
4278
4279 #ifdef ENABLE_CHECKING
4280 extern void debug_spec_status (ds_t);
4281
4282 /* Dump information about the dependence status S. */
4283 void
4284 debug_spec_status (ds_t s)
4285 {
4286 FILE *f = stderr;
4287
4288 if (s & BEGIN_DATA)
4289 fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak (s, BEGIN_DATA));
4290 if (s & BE_IN_DATA)
4291 fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak (s, BE_IN_DATA));
4292 if (s & BEGIN_CONTROL)
4293 fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak (s, BEGIN_CONTROL));
4294 if (s & BE_IN_CONTROL)
4295 fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak (s, BE_IN_CONTROL));
4296
4297 if (s & HARD_DEP)
4298 fprintf (f, "HARD_DEP; ");
4299
4300 if (s & DEP_TRUE)
4301 fprintf (f, "DEP_TRUE; ");
4302 if (s & DEP_ANTI)
4303 fprintf (f, "DEP_ANTI; ");
4304 if (s & DEP_OUTPUT)
4305 fprintf (f, "DEP_OUTPUT; ");
4306
4307 fprintf (f, "\n");
4308 }
4309
4310 /* Helper function for check_cfg.
4313 4311 Return nonzero if the edge vector pointed to by EL has an edge with
4314 4312 TYPE in its flags. */
4313 static int
4314 has_edge_p (VEC(edge,gc) *el, int type)
4315 {
4316 edge e;
4317 edge_iterator ei;
4318
4319 FOR_EACH_EDGE (e, ei, el)
4320 if (e->flags & type)
4321 return 1;
4322 return 0;
4323 }
4324
4327 4325 /* Check a few properties of the CFG between HEAD and TAIL.
4328 4326 If HEAD (TAIL) is NULL, check from the beginning (to the end) of the
4329 4327 instruction stream. */
4328 static void
4329 check_cfg (rtx head, rtx tail)
4330 {
4331 rtx next_tail;
4332 basic_block bb = 0;
4333 int not_first = 0, not_last;
4334
4335 if (head == NULL)
4336 head = get_insns ();
4337 if (tail == NULL)
4338 tail = get_last_insn ();
4339 next_tail = NEXT_INSN (tail);
4340
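  /* Scan the insn chain once.  BB tracks the basic block that the current
     insn belongs to (zero when between blocks); the assertions below check
     that the chain links, block boundaries and outgoing edges are
     consistent.  */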
4341 do
4342 {
4343 not_last = head != tail;
4344
4345 if (not_first)
4346 gcc_assert (NEXT_INSN (PREV_INSN (head)) == head);
4347 if (not_last)
4348 gcc_assert (PREV_INSN (NEXT_INSN (head)) == head);
4349
4350 if (LABEL_P (head)
4351 || (NOTE_INSN_BASIC_BLOCK_P (head)
4352 && (!not_first
4353 || (not_first && !LABEL_P (PREV_INSN (head))))))
4354 {
4355 gcc_assert (bb == 0);
4356 bb = BLOCK_FOR_INSN (head);
4357 if (bb != 0)
4358 gcc_assert (BB_HEAD (bb) == head);
4359 else
4362 4360 /* This is the case of a jump table. See inside_basic_block_p (). */
4361 gcc_assert (LABEL_P (head) && !inside_basic_block_p (head));
4362 }
4363
4364 if (bb == 0)
4365 {
4366 gcc_assert (!inside_basic_block_p (head));
4367 head = NEXT_INSN (head);
4368 }
4369 else
4370 {
4371 gcc_assert (inside_basic_block_p (head)
4372 || NOTE_P (head));
4373 gcc_assert (BLOCK_FOR_INSN (head) == bb);
4374
4375 if (LABEL_P (head))
4376 {
4377 head = NEXT_INSN (head);
4378 gcc_assert (NOTE_INSN_BASIC_BLOCK_P (head));
4379 }
4380 else
4381 {
4382 if (control_flow_insn_p (head))
4383 {
4384 gcc_assert (BB_END (bb) == head);
4385
4386 if (any_uncondjump_p (head))
4387 gcc_assert (EDGE_COUNT (bb->succs) == 1
4388 && BARRIER_P (NEXT_INSN (head)));
4389 else if (any_condjump_p (head))
4390 gcc_assert (/* Usual case. */
4391 (EDGE_COUNT (bb->succs) > 1
4392 && !BARRIER_P (NEXT_INSN (head)))
4393 /* Or jump to the next instruction. */
4394 || (EDGE_COUNT (bb->succs) == 1
4395 && (BB_HEAD (EDGE_I (bb->succs, 0)->dest)
4396 == JUMP_LABEL (head))));
4397 }
4398 if (BB_END (bb) == head)
4399 {
4400 if (EDGE_COUNT (bb->succs) > 1)
4401 gcc_assert (control_flow_insn_p (head)
4402 || has_edge_p (bb->succs, EDGE_COMPLEX));
4403 bb = 0;
4404 }
4405
4406 head = NEXT_INSN (head);
4407 }
4408 }
4409
4410 not_first = 1;
4411 }
4412 while (head != next_tail);
4413
4414 gcc_assert (bb == 0);
4415 }
4416
4417 /* Perform a few consistency checks of flags in different data structures. */
4418 static void
4419 check_sched_flags (void)
4420 {
4421 unsigned int f = current_sched_info->flags;
4422
4423 if (flag_sched_stalled_insns)
4424 gcc_assert (!(f & DO_SPECULATION));
4425 if (f & DO_SPECULATION)
4426 gcc_assert (!flag_sched_stalled_insns
4427 && spec_info
4428 && spec_info->mask);
4429 }
4430 #endif /* ENABLE_CHECKING */
4431
4432 #endif /* INSN_SCHEDULING */