[gcc.git] / gcc / sched.c
1 /* Instruction scheduling pass.
2 Copyright (C) 1992, 93-97, 1998 Free Software Foundation, Inc.
3 Contributed by Michael Tiemann (tiemann@cygnus.com)
4 Enhanced by, and currently maintained by, Jim Wilson (wilson@cygnus.com)
5
6 This file is part of GNU CC.
7
8 GNU CC is free software; you can redistribute it and/or modify
9 it under the terms of the GNU General Public License as published by
10 the Free Software Foundation; either version 2, or (at your option)
11 any later version.
12
13 GNU CC is distributed in the hope that it will be useful,
14 but WITHOUT ANY WARRANTY; without even the implied warranty of
15 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 GNU General Public License for more details.
17
18 You should have received a copy of the GNU General Public License
19 along with GNU CC; see the file COPYING. If not, write to
20 the Free Software Foundation, 59 Temple Place - Suite 330,
21 Boston, MA 02111-1307, USA. */
22
23 /* Instruction scheduling pass.
24
25 This pass implements list scheduling within basic blocks. It is
26 run after flow analysis, but before register allocation. The
27 scheduler works as follows:
28
29 We compute insn priorities based on data dependencies. Flow
30 analysis only creates a fraction of the data-dependencies we must
31 observe: namely, only those dependencies which the combiner can be
32 expected to use. For this pass, we must therefore create the
33 remaining dependencies we need to observe: register dependencies,
34 memory dependencies, dependencies to keep function calls in order,
35 and the dependence between a conditional branch and the setting of
36 condition codes. All of these are dealt with here.
37
38 The scheduler first traverses the data flow graph, starting with
39 the last instruction, and proceeding to the first, assigning
40 values to insn_priority as it goes. This sorts the instructions
41 topologically by data dependence.
42
43 Once priorities have been established, we order the insns using
44 list scheduling. This works as follows: starting with a list of
45 all the ready insns, and sorted according to priority number, we
46 schedule the insn from the end of the list by placing its
47 predecessors in the list according to their priority order. We
48 consider this insn scheduled by setting the pointer to the "end" of
49 the list to point to the previous insn. When an insn has no
50 predecessors, we either queue it until sufficient time has elapsed
51 or add it to the ready list. As the instructions are scheduled or
52 when stalls are introduced, the queue advances and dumps insns into
53 the ready list. When all insns down to the lowest priority have
54 been scheduled, the critical path of the basic block has been made
55 as short as possible. The remaining insns are then scheduled in
56 remaining slots.
57
58 Function unit conflicts are resolved during reverse list scheduling
59 by tracking the time when each insn is committed to the schedule
60 and from that, the time the function units it uses must be free.
61 As insns on the ready list are considered for scheduling, those
62 that would result in a blockage of the already committed insns are
63 queued until no blockage will result. Among the remaining insns on
64 the ready list to be considered, the first one with the largest
65 potential for causing a subsequent blockage is chosen.
66
67 The following list shows the order in which we want to break ties
68 among insns in the ready list:
69
70 1. choose insn with lowest conflict cost, ties broken by
71 2. choose insn with the longest path to end of bb, ties broken by
72 3. choose insn that kills the most registers, ties broken by
73 4. choose insn that conflicts with the most ready insns, or finally
74 5. choose insn with lowest UID.
75
76 Memory references complicate matters. Only if we can be certain
77 that memory references are not part of the data dependency graph
78 (via true, anti, or output dependence), can we move operations past
79 memory references. To first approximation, reads can be done
80 independently, while writes introduce dependencies. Better
81 approximations will yield fewer dependencies.
82
83 Dependencies set up by memory references are treated in exactly the
84 same way as other dependencies, by using LOG_LINKS.
85
86 Having optimized the critical path, we may have also unduly
87 extended the lifetimes of some registers. If an operation requires
88 that constants be loaded into registers, it is certainly desirable
89 to load those constants as early as necessary, but no earlier.
90 I.e., it will not do to load up a bunch of registers at the
91 beginning of a basic block only to use them at the end, if they
92 could be loaded later, since this may result in excessive register
93 utilization.
94
95 Note that since branches are never in basic blocks, but only end
96 basic blocks, this pass will not do any branch scheduling. But
97 that is ok, since we can use GNU's delayed branch scheduling
98 pass to take care of this case.
99
100 Also note that no further optimizations based on algebraic identities
101 are performed, so this pass would be a good one to perform instruction
102 splitting, such as breaking up a multiply instruction into shifts
103 and adds where that is profitable.
104
105 Given the memory aliasing analysis that this pass should perform,
106 it should be possible to remove redundant stores to memory, and to
107 load values from registers instead of hitting memory.
108
109 This pass must update information that subsequent passes expect to be
110 correct. Namely: reg_n_refs, reg_n_sets, reg_n_deaths,
111 reg_n_calls_crossed, and reg_live_length. Also, basic_block_head,
112 basic_block_end.
113
114 The information in the line number notes is carefully retained by
115 this pass. Notes that refer to the starting and ending of
116 exception regions are also carefully retained by this pass. All
117 other NOTE insns are grouped in their same relative order at the
118 beginning of basic blocks that have been scheduled. */
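The five-way tie-breaking order listed in the comment above can be sketched as an ordinary sort comparator. This is an illustrative standalone sketch, not the pass's real `rank_for_schedule'; the struct fields (conflict_cost, path_length, kills, conflicts, uid) are hypothetical stand-ins for values the scheduler actually derives from INSN_PRIORITY, register-life information, and the ready list.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical per-insn summary of the quantities used for tie-breaking.  */
struct ready_insn
{
  int conflict_cost;   /* rule 1: lower is better */
  int path_length;     /* rule 2: longer path to end of bb is better */
  int kills;           /* rule 3: more registers killed is better */
  int conflicts;       /* rule 4: more conflicts with ready insns is better */
  int uid;             /* rule 5: lower UID wins the final tie-break */
};

/* qsort comparator: the "best" insn sorts first.  */
static int
rank (const void *pa, const void *pb)
{
  const struct ready_insn *a = pa, *b = pb;

  if (a->conflict_cost != b->conflict_cost)
    return a->conflict_cost - b->conflict_cost;     /* rule 1 */
  if (a->path_length != b->path_length)
    return b->path_length - a->path_length;         /* rule 2 */
  if (a->kills != b->kills)
    return b->kills - a->kills;                     /* rule 3 */
  if (a->conflicts != b->conflicts)
    return b->conflicts - a->conflicts;             /* rule 4 */
  return a->uid - b->uid;                           /* rule 5 */
}
```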
119 \f
120 #include "config.h"
121 #include "system.h"
122 #include "rtl.h"
123 #include "basic-block.h"
124 #include "regs.h"
125 #include "hard-reg-set.h"
126 #include "flags.h"
127 #include "insn-config.h"
128 #include "insn-attr.h"
129
130 extern char *reg_known_equiv_p;
131 extern rtx *reg_known_value;
132
133 #ifdef INSN_SCHEDULING
134 /* Arrays set up by scheduling for the same respective purposes as
135 similar-named arrays set up by flow analysis. We work with these
136 arrays during the scheduling pass so we can compare values against
137 unscheduled code.
138
139 Values of these arrays are copied at the end of this pass into the
140 arrays set up by flow analysis. */
141 static int *sched_reg_n_calls_crossed;
142 static int *sched_reg_live_length;
143
144 /* Element N is the next insn that sets (hard or pseudo) register
145 N within the current basic block; or zero, if there is no
146 such insn. Needed for new registers which may be introduced
147 by splitting insns. */
148 static rtx *reg_last_uses;
149 static rtx *reg_last_sets;
150 static regset reg_pending_sets;
151 static int reg_pending_sets_all;
152
153 /* Vector indexed by INSN_UID giving the original ordering of the insns. */
154 static int *insn_luid;
155 #define INSN_LUID(INSN) (insn_luid[INSN_UID (INSN)])
156
157 /* Vector indexed by INSN_UID giving each instruction a priority. */
158 static int *insn_priority;
159 #define INSN_PRIORITY(INSN) (insn_priority[INSN_UID (INSN)])
160
161 static short *insn_costs;
162 #define INSN_COST(INSN) insn_costs[INSN_UID (INSN)]
163
164 /* Vector indexed by INSN_UID giving an encoding of the function units
165 used. */
166 static short *insn_units;
167 #define INSN_UNIT(INSN) insn_units[INSN_UID (INSN)]
168
169 /* Vector indexed by INSN_UID giving an encoding of the blockage range
170 function. The unit and the range are encoded. */
171 static unsigned int *insn_blockage;
172 #define INSN_BLOCKAGE(INSN) insn_blockage[INSN_UID (INSN)]
173 #define UNIT_BITS 5
174 #define BLOCKAGE_MASK ((1 << BLOCKAGE_BITS) - 1)
175 #define ENCODE_BLOCKAGE(U,R) \
176 ((((U) << UNIT_BITS) << BLOCKAGE_BITS \
177 | MIN_BLOCKAGE_COST (R)) << BLOCKAGE_BITS \
178 | MAX_BLOCKAGE_COST (R))
179 #define UNIT_BLOCKED(B) ((B) >> (2 * BLOCKAGE_BITS))
180 #define BLOCKAGE_RANGE(B) \
181 (((((B) >> BLOCKAGE_BITS) & BLOCKAGE_MASK) << (HOST_BITS_PER_INT / 2)) \
182 | ((B) & BLOCKAGE_MASK))
183
184 /* Encodings of the `<name>_unit_blockage_range' function. */
185 #define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
186 #define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))
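The two macros above split a blockage range into the halves of an int: minimum cost in the upper half, maximum cost in the lower half. A minimal round-trip check, assuming a 32-bit int (HOST_BITS_PER_INT is really supplied by the host configuration, and make_range is an invented helper standing in for a `<name>_unit_blockage_range' function):

```c
#include <assert.h>

#define HOST_BITS_PER_INT 32  /* assumption for this sketch */
#define MIN_BLOCKAGE_COST(R) ((R) >> (HOST_BITS_PER_INT / 2))
#define MAX_BLOCKAGE_COST(R) ((R) & ((1 << (HOST_BITS_PER_INT / 2)) - 1))

/* Pack a (min, max) blockage pair the way the generated range
   functions do: minimum in the upper half, maximum in the lower.  */
static unsigned int
make_range (int min, int max)
{
  return ((unsigned int) min << (HOST_BITS_PER_INT / 2)) | (unsigned int) max;
}
```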
187
188 #define DONE_PRIORITY -1
189 #define MAX_PRIORITY 0x7fffffff
190 #define TAIL_PRIORITY 0x7ffffffe
191 #define LAUNCH_PRIORITY 0x7f000001
192 #define DONE_PRIORITY_P(INSN) (INSN_PRIORITY (INSN) < 0)
193 #define LOW_PRIORITY_P(INSN) ((INSN_PRIORITY (INSN) & 0x7f000000) == 0)
194
195 /* Vector indexed by INSN_UID giving number of insns referring to this insn. */
196 static int *insn_ref_count;
197 #define INSN_REF_COUNT(INSN) (insn_ref_count[INSN_UID (INSN)])
198
199 /* Vector indexed by INSN_UID giving line-number note in effect for each
200 insn. For line-number notes, this indicates whether the note may be
201 reused. */
202 static rtx *line_note;
203 #define LINE_NOTE(INSN) (line_note[INSN_UID (INSN)])
204
205 /* Vector indexed by basic block number giving the starting line-number
206 for each basic block. */
207 static rtx *line_note_head;
208
209 /* List of important notes we must keep around. This is a pointer to the
210 last element in the list. */
211 static rtx note_list;
212
213 /* Regsets telling whether a given register is live or dead before the last
214 scheduled insn. Must scan the instructions once before scheduling to
215 determine what registers are live or dead at the end of the block. */
216 static regset bb_dead_regs;
217 static regset bb_live_regs;
218
219 /* Regset telling whether a given register is live after the insn currently
220 being scheduled. Before processing an insn, this is equal to bb_live_regs
221 above. This is used so that we can find registers that are newly born/dead
222 after processing an insn. */
223 static regset old_live_regs;
224
225 /* The chain of REG_DEAD notes. REG_DEAD notes are removed from all insns
226 during the initial scan and reused later. If there are not exactly as
227 many REG_DEAD notes in the post scheduled code as there were in the
228 prescheduled code then we trigger an abort because this indicates a bug. */
229 static rtx dead_notes;
230
231 /* Queues, etc. */
232
233 /* An instruction is ready to be scheduled when all insns following it
234 have already been scheduled. It is important to ensure that all
235 insns which use its result will not be executed until its result
236 has been computed. An insn is maintained in one of four structures:
237
238 (P) the "Pending" set of insns which cannot be scheduled until
239 their dependencies have been satisfied.
240 (Q) the "Queued" set of insns that can be scheduled when sufficient
241 time has passed.
242 (R) the "Ready" list of unscheduled, uncommitted insns.
243 (S) the "Scheduled" list of insns.
244
245 Initially, all insns are either "Pending" or "Ready" depending on
246 whether their dependencies are satisfied.
247
248 Insns move from the "Ready" list to the "Scheduled" list as they
249 are committed to the schedule. As this occurs, the insns in the
250 "Pending" list have their dependencies satisfied and move to either
251 the "Ready" list or the "Queued" set depending on whether
252 sufficient time has passed to make them ready. As time passes,
253 insns move from the "Queued" set to the "Ready" list. Insns may
254 move from the "Ready" list to the "Queued" set if they are blocked
255 due to a function unit conflict.
256
257 The "Pending" list (P) consists of the insns in the LOG_LINKS of the
258 unscheduled insns, i.e., those that are ready, queued, or pending.
259 The "Queued" set (Q) is implemented by the variable `insn_queue'.
260 The "Ready" list (R) is implemented by the variables `ready' and
261 `n_ready'.
262 The "Scheduled" list (S) is the new insn chain built by this pass.
263
264 The transition (R->S) is implemented in the scheduling loop in
265 `schedule_block' when the best insn to schedule is chosen.
266 The transition (R->Q) is implemented in `schedule_select' when an
267 insn is found to have a function unit conflict with the already
268 committed insns.
269 The transitions (P->R and P->Q) are implemented in `schedule_insn' as
270 insns move from the ready list to the scheduled list.
271 The transition (Q->R) is implemented at the top of the scheduling
272 loop in `schedule_block' as time passes or stalls are introduced. */
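The four structures and the transitions between them can be modeled as a tiny state machine. This sketch only encodes which moves the comment above allows; the enum and function names are invented for illustration and do not appear in the pass.

```c
#include <assert.h>

enum insn_state { PENDING, QUEUED, READY, SCHEDULED };

/* Return 1 if the scheduling model above permits moving an insn
   from FROM to TO, 0 otherwise.  */
static int
legal_transition (enum insn_state from, enum insn_state to)
{
  switch (from)
    {
    case PENDING:   /* dependencies satisfied: P->R or P->Q */
      return to == READY || to == QUEUED;
    case QUEUED:    /* time passes: Q->R */
      return to == READY;
    case READY:     /* committed (R->S) or function unit conflict (R->Q) */
      return to == SCHEDULED || to == QUEUED;
    default:
      return 0;     /* scheduled insns never move again */
    }
}
```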
273
274 /* Implement a circular buffer to delay instructions until sufficient
275 time has passed. INSN_QUEUE_SIZE is a power of two larger than
276 MAX_BLOCKAGE and MAX_READY_COST computed by genattr.c. This is the
277 longest time an insn may be queued. */
278 static rtx insn_queue[INSN_QUEUE_SIZE];
279 static int q_ptr = 0;
280 static int q_size = 0;
281 #define NEXT_Q(X) (((X)+1) & (INSN_QUEUE_SIZE-1))
282 #define NEXT_Q_AFTER(X,C) (((X)+C) & (INSN_QUEUE_SIZE-1))
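Because INSN_QUEUE_SIZE is a power of two, the masking in NEXT_Q and NEXT_Q_AFTER implements cheap modular wraparound of the circular buffer. A standalone check with an assumed queue size of 16 (the real value is computed by genattr.c):

```c
#include <assert.h>

#define INSN_QUEUE_SIZE 16  /* assumption; genattr.c computes the real size */
#define NEXT_Q(X)          (((X) + 1) & (INSN_QUEUE_SIZE - 1))
#define NEXT_Q_AFTER(X, C) (((X) + (C)) & (INSN_QUEUE_SIZE - 1))
```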
283
284 /* Vector indexed by INSN_UID giving the minimum clock tick at which
285 the insn becomes ready. This is used to note timing constraints for
286 insns in the pending list. */
287 static int *insn_tick;
288 #define INSN_TICK(INSN) (insn_tick[INSN_UID (INSN)])
289
290 /* Data structure for keeping track of register information
291 during that register's life. */
292
293 struct sometimes
294 {
295 int regno;
296 int live_length;
297 int calls_crossed;
298 };
299
300 /* Forward declarations. */
301 static void add_dependence PROTO((rtx, rtx, enum reg_note));
302 static void remove_dependence PROTO((rtx, rtx));
303 static rtx find_insn_list PROTO((rtx, rtx));
304 static int insn_unit PROTO((rtx));
305 static unsigned int blockage_range PROTO((int, rtx));
306 static void clear_units PROTO((void));
307 static void prepare_unit PROTO((int));
308 static int actual_hazard_this_instance PROTO((int, int, rtx, int, int));
309 static void schedule_unit PROTO((int, rtx, int));
310 static int actual_hazard PROTO((int, rtx, int, int));
311 static int potential_hazard PROTO((int, rtx, int));
312 static int insn_cost PROTO((rtx, rtx, rtx));
313 static int priority PROTO((rtx));
314 static void free_pending_lists PROTO((void));
315 static void add_insn_mem_dependence PROTO((rtx *, rtx *, rtx, rtx));
316 static void flush_pending_lists PROTO((rtx, int));
317 static void sched_analyze_1 PROTO((rtx, rtx));
318 static void sched_analyze_2 PROTO((rtx, rtx));
319 static void sched_analyze_insn PROTO((rtx, rtx, rtx));
320 static int sched_analyze PROTO((rtx, rtx));
321 static void sched_note_set PROTO((int, rtx, int));
322 static int rank_for_schedule PROTO((const GENERIC_PTR, const GENERIC_PTR));
323 static void swap_sort PROTO((rtx *, int));
324 static void queue_insn PROTO((rtx, int));
325 static int birthing_insn_p PROTO((rtx));
326 static void adjust_priority PROTO((rtx));
327 static int schedule_insn PROTO((rtx, rtx *, int, int));
328 static int schedule_select PROTO((rtx *, int, int, FILE *));
329 static void create_reg_dead_note PROTO((rtx, rtx));
330 static void attach_deaths PROTO((rtx, rtx, int));
331 static void attach_deaths_insn PROTO((rtx));
332 static rtx unlink_notes PROTO((rtx, rtx));
333 static int new_sometimes_live PROTO((struct sometimes *, int, int));
334 static void finish_sometimes_live PROTO((struct sometimes *, int));
335 static rtx reemit_notes PROTO((rtx, rtx));
336 static void schedule_block PROTO((int, FILE *));
337 static rtx regno_use_in PROTO((int, rtx));
338 static void split_hard_reg_notes PROTO((rtx, rtx, rtx, rtx));
339 static void new_insn_dead_notes PROTO((rtx, rtx, rtx, rtx));
340 static void update_n_sets PROTO((rtx, int));
341 static void update_flow_info PROTO((rtx, rtx, rtx, rtx));
342
343 /* Main entry point of this file. */
344 void schedule_insns PROTO((FILE *));
345
346 #endif /* INSN_SCHEDULING */
347 \f
348 #define SIZE_FOR_MODE(X) (GET_MODE_SIZE (GET_MODE (X)))
349
350 /* Helper functions for instruction scheduling. */
351
352 /* Add ELEM wrapped in an INSN_LIST with reg note kind DEP_TYPE to the
353 LOG_LINKS of INSN, if not already there. DEP_TYPE indicates the type
354 of dependence that this link represents. */
355
356 static void
357 add_dependence (insn, elem, dep_type)
358 rtx insn;
359 rtx elem;
360 enum reg_note dep_type;
361 {
362 rtx link, next;
363
364 /* Don't depend an insn on itself. */
365 if (insn == elem)
366 return;
367
368 /* If elem is part of a sequence that must be scheduled together, then
369 make the dependence point to the last insn of the sequence.
370 When HAVE_cc0, it is possible for NOTEs to exist between users and
371 setters of the condition codes, so we must skip past notes here.
372 Otherwise, NOTEs are impossible here. */
373
374 next = NEXT_INSN (elem);
375
376 #ifdef HAVE_cc0
377 while (next && GET_CODE (next) == NOTE)
378 next = NEXT_INSN (next);
379 #endif
380
381 if (next && SCHED_GROUP_P (next)
382 && GET_CODE (next) != CODE_LABEL)
383 {
384 /* Notes will never intervene here though, so don't bother checking
385 for them. */
386 /* We must reject CODE_LABELs, so that we don't get confused by one
387 that has LABEL_PRESERVE_P set, which is represented by the same
388 bit in the rtl as SCHED_GROUP_P. A CODE_LABEL can never be
389 SCHED_GROUP_P. */
390 while (NEXT_INSN (next) && SCHED_GROUP_P (NEXT_INSN (next))
391 && GET_CODE (NEXT_INSN (next)) != CODE_LABEL)
392 next = NEXT_INSN (next);
393
394 /* Again, don't depend an insn on itself. */
395 if (insn == next)
396 return;
397
398 /* Make the dependence to NEXT, the last insn of the group, instead
399 of the original ELEM. */
400 elem = next;
401 }
402
403 /* Check that we don't already have this dependence. */
404 for (link = LOG_LINKS (insn); link; link = XEXP (link, 1))
405 if (XEXP (link, 0) == elem)
406 {
407 /* If this is a more restrictive type of dependence than the existing
408 one, then change the existing dependence to this type. */
409 if ((int) dep_type < (int) REG_NOTE_KIND (link))
410 PUT_REG_NOTE_KIND (link, dep_type);
411 return;
412 }
413 /* Might want to check one level of transitivity to save conses. */
414
415 link = rtx_alloc (INSN_LIST);
416 /* Insn dependency, not data dependency. */
417 PUT_REG_NOTE_KIND (link, dep_type);
418 XEXP (link, 0) = elem;
419 XEXP (link, 1) = LOG_LINKS (insn);
420 LOG_LINKS (insn) = link;
421 }
422
423 /* Remove ELEM wrapped in an INSN_LIST from the LOG_LINKS
424 of INSN. Abort if not found. */
425
426 static void
427 remove_dependence (insn, elem)
428 rtx insn;
429 rtx elem;
430 {
431 rtx prev, link;
432 int found = 0;
433
434 for (prev = 0, link = LOG_LINKS (insn); link; link = XEXP (link, 1))
435 {
436 if (XEXP (link, 0) == elem)
437 {
438 RTX_INTEGRATED_P (link) = 1;
439 if (prev)
440 XEXP (prev, 1) = XEXP (link, 1);
441 else
442 LOG_LINKS (insn) = XEXP (link, 1);
443 found = 1;
444 }
445 else
446 prev = link;
447 }
448
449 if (! found)
450 abort ();
451 return;
452 }
453 \f
454 #ifndef INSN_SCHEDULING
455 void
456 schedule_insns (dump_file)
457 FILE *dump_file;
458 {
459 }
460 #else
461 #ifndef __GNUC__
462 #define __inline
463 #endif
464
465 /* Computation of memory dependencies. */
466
467 /* The *_insns and *_mems are paired lists. Each pending memory operation
468 will have a pointer to the MEM rtx on one list and a pointer to the
469 containing insn on the other list in the same place in the list. */
470
471 /* We can't use add_dependence like the old code did, because a single insn
472 may have multiple memory accesses, and hence needs to be on the list
473 once for each memory access. Add_dependence won't let you add an insn
474 to a list more than once. */
475
476 /* An INSN_LIST containing all insns with pending read operations. */
477 static rtx pending_read_insns;
478
479 /* An EXPR_LIST containing all MEM rtx's which are pending reads. */
480 static rtx pending_read_mems;
481
482 /* An INSN_LIST containing all insns with pending write operations. */
483 static rtx pending_write_insns;
484
485 /* An EXPR_LIST containing all MEM rtx's which are pending writes. */
486 static rtx pending_write_mems;
487
488 /* Indicates the combined length of the two pending lists. We must prevent
489 these lists from ever growing too large since the number of dependencies
490 produced is at least O(N*N), and execution time is at least O(4*N*N), as
491 a function of the length of these pending lists. */
492
493 static int pending_lists_length;
494
495 /* An INSN_LIST containing all INSN_LISTs allocated but currently unused. */
496
497 static rtx unused_insn_list;
498
499 /* An EXPR_LIST containing all EXPR_LISTs allocated but currently unused. */
500
501 static rtx unused_expr_list;
502
503 /* The last insn upon which all memory references must depend.
504 This is an insn which flushed the pending lists, creating a dependency
505 between it and all previously pending memory references. This creates
506 a barrier (or a checkpoint) which no memory reference is allowed to cross.
507
508 This includes all non-constant CALL_INSNs. When we do interprocedural
509 alias analysis, this restriction can be relaxed.
510 This may also be an INSN that writes memory if the pending lists grow
511 too large. */
512
513 static rtx last_pending_memory_flush;
514
515 /* The last function call we have seen. All hard regs, and, of course,
516 the last function call, must depend on this. */
517
518 static rtx last_function_call;
519
520 /* The LOG_LINKS field of this is a list of insns which use a pseudo register
521 that does not already cross a call. We create dependencies between each
522 of those insns and the next call insn, to ensure that they won't cross a call
523 after scheduling is done. */
524
525 static rtx sched_before_next_call;
526
527 /* Pointer to the last instruction scheduled. Used by rank_for_schedule,
528 so that insns independent of the last scheduled insn will be preferred
529 over dependent instructions. */
530
531 static rtx last_scheduled_insn;
532
533 /* Process an insn's memory dependencies. There are four kinds of
534 dependencies:
535
536 (0) read dependence: read follows read
537 (1) true dependence: read follows write
538 (2) anti dependence: write follows read
539 (3) output dependence: write follows write
540
541 We are careful to build only dependencies which actually exist, and
542 use transitivity to avoid building too many links. */
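The four kinds in the comment above are determined purely by whether the earlier and later memory references are reads or writes. A hypothetical classifier making that explicit (the pass itself records these kinds as REG_NOTEs on LOG_LINKS rather than computing them this way):

```c
#include <assert.h>

enum dep_kind { READ_DEP, TRUE_DEP, ANTI_DEP, OUTPUT_DEP };

/* EARLIER_WRITES / LATER_WRITES are nonzero if the corresponding
   memory reference is a store.  */
static enum dep_kind
classify_mem_dep (int earlier_writes, int later_writes)
{
  if (!earlier_writes && !later_writes)
    return READ_DEP;     /* (0) read follows read */
  if (earlier_writes && !later_writes)
    return TRUE_DEP;     /* (1) read follows write */
  if (!earlier_writes && later_writes)
    return ANTI_DEP;     /* (2) write follows read */
  return OUTPUT_DEP;     /* (3) write follows write */
}
```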
543 \f
544 /* Return the INSN_LIST containing INSN in LIST, or NULL
545 if LIST does not contain INSN. */
546
547 __inline static rtx
548 find_insn_list (insn, list)
549 rtx insn;
550 rtx list;
551 {
552 while (list)
553 {
554 if (XEXP (list, 0) == insn)
555 return list;
556 list = XEXP (list, 1);
557 }
558 return 0;
559 }
560
561 /* Compute the function units used by INSN. This caches the value
562 returned by function_units_used. A function unit is encoded as the
563 unit number if the value is non-negative and the complement of a
564 mask if the value is negative. A function unit index is the
565 non-negative encoding. */
566
567 __inline static int
568 insn_unit (insn)
569 rtx insn;
570 {
571 register int unit = INSN_UNIT (insn);
572
573 if (unit == 0)
574 {
575 recog_memoized (insn);
576
577 /* A USE insn, or something else we don't need to understand.
578 We can't pass these directly to function_units_used because it will
579 trigger a fatal error for unrecognizable insns. */
580 if (INSN_CODE (insn) < 0)
581 unit = -1;
582 else
583 {
584 unit = function_units_used (insn);
585 /* Increment non-negative values so we can cache zero. */
586 if (unit >= 0) unit++;
587 }
588 /* We only cache 16 bits of the result, so if the value is out of
589 range, don't cache it. */
590 if (FUNCTION_UNITS_SIZE < HOST_BITS_PER_SHORT
591 || unit >= 0
592 || (~unit & ((1 << (HOST_BITS_PER_SHORT - 1)) - 1)) == 0)
593 INSN_UNIT (insn) = unit;
594 }
595 return (unit > 0 ? unit - 1 : unit);
596 }
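insn_unit reserves the cached value 0 to mean "not yet computed", so non-negative unit numbers are stored biased by one while negative mask complements are stored as-is. A sketch of that encode/decode pair in isolation (the helper names are invented; the real code inlines both sides):

```c
#include <assert.h>

/* Encode a function_units_used result for caching: 0 must remain
   available to mean "unknown", so unit numbers are biased by one.  */
static int
encode_unit (int unit)
{
  return unit >= 0 ? unit + 1 : unit;
}

/* Invert the bias, as the return statement of insn_unit does.  */
static int
decode_unit (int cached)
{
  return cached > 0 ? cached - 1 : cached;
}
```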
597
598 /* Compute the blockage range for executing INSN on UNIT. This caches
599 the value returned by the blockage_range_function for the unit.
600 These values are encoded in an int where the upper half gives the
601 minimum value and the lower half gives the maximum value. */
602
603 __inline static unsigned int
604 blockage_range (unit, insn)
605 int unit;
606 rtx insn;
607 {
608 unsigned int blockage = INSN_BLOCKAGE (insn);
609 unsigned int range;
610
611 if (UNIT_BLOCKED (blockage) != unit + 1)
612 {
613 range = function_units[unit].blockage_range_function (insn);
614 /* We only cache the blockage range for one unit and then only if
615 the values fit. */
616 if (HOST_BITS_PER_INT >= UNIT_BITS + 2 * BLOCKAGE_BITS)
617 INSN_BLOCKAGE (insn) = ENCODE_BLOCKAGE (unit + 1, range);
618 }
619 else
620 range = BLOCKAGE_RANGE (blockage);
621
622 return range;
623 }
624
625 /* A vector indexed by function unit instance giving the last insn to use
626 the unit. The value of the function unit instance index for unit U
627 instance I is (U + I * FUNCTION_UNITS_SIZE). */
628 static rtx unit_last_insn[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];
629
630 /* A vector indexed by function unit instance giving the minimum time when
631 the unit will unblock based on the maximum blockage cost. */
632 static int unit_tick[FUNCTION_UNITS_SIZE * MAX_MULTIPLICITY];
633
634 /* A vector indexed by function unit number giving the number of insns
635 that remain to use the unit. */
636 static int unit_n_insns[FUNCTION_UNITS_SIZE];
637
638 /* Reset the function unit state to the null state. */
639
640 static void
641 clear_units ()
642 {
643 bzero ((char *) unit_last_insn, sizeof (unit_last_insn));
644 bzero ((char *) unit_tick, sizeof (unit_tick));
645 bzero ((char *) unit_n_insns, sizeof (unit_n_insns));
646 }
647
648 /* Record an insn as one that will use the units encoded by UNIT. */
649
650 __inline static void
651 prepare_unit (unit)
652 int unit;
653 {
654 int i;
655
656 if (unit >= 0)
657 unit_n_insns[unit]++;
658 else
659 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
660 if ((unit & 1) != 0)
661 prepare_unit (i);
662 }
663
664 /* Return the actual hazard cost of executing INSN on the unit UNIT,
665 instance INSTANCE at time CLOCK if the previous actual hazard cost
666 was COST. */
667
668 __inline static int
669 actual_hazard_this_instance (unit, instance, insn, clock, cost)
670 int unit, instance, clock, cost;
671 rtx insn;
672 {
673 int tick = unit_tick[instance];
674
675 if (tick - clock > cost)
676 {
677 /* The scheduler is operating in reverse, so INSN is the executing
678 insn and the unit's last insn is the candidate insn. We want a
679 more exact measure of the blockage if we execute INSN at CLOCK
680 given when we committed the execution of the unit's last insn.
681
682 The blockage value is given by either the unit's max blockage
683 constant, blockage range function, or blockage function. Use
684 the most exact form for the given unit. */
685
686 if (function_units[unit].blockage_range_function)
687 {
688 if (function_units[unit].blockage_function)
689 tick += (function_units[unit].blockage_function
690 (insn, unit_last_insn[instance])
691 - function_units[unit].max_blockage);
692 else
693 tick += ((int) MAX_BLOCKAGE_COST (blockage_range (unit, insn))
694 - function_units[unit].max_blockage);
695 }
696 if (tick - clock > cost)
697 cost = tick - clock;
698 }
699 return cost;
700 }
701
702 /* Record INSN as having begun execution on the units encoded by UNIT at
703 time CLOCK. */
704
705 __inline static void
706 schedule_unit (unit, insn, clock)
707 int unit, clock;
708 rtx insn;
709 {
710 int i;
711
712 if (unit >= 0)
713 {
714 int instance = unit;
715 #if MAX_MULTIPLICITY > 1
716 /* Find the first free instance of the function unit and use that
717 one. We assume that one is free. */
718 for (i = function_units[unit].multiplicity - 1; i > 0; i--)
719 {
720 if (! actual_hazard_this_instance (unit, instance, insn, clock, 0))
721 break;
722 instance += FUNCTION_UNITS_SIZE;
723 }
724 #endif
725 unit_last_insn[instance] = insn;
726 unit_tick[instance] = (clock + function_units[unit].max_blockage);
727 }
728 else
729 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
730 if ((unit & 1) != 0)
731 schedule_unit (i, insn, clock);
732 }
733
734 /* Return the actual hazard cost of executing INSN on the units encoded by
735 UNIT at time CLOCK if the previous actual hazard cost was COST. */
736
737 __inline static int
738 actual_hazard (unit, insn, clock, cost)
739 int unit, clock, cost;
740 rtx insn;
741 {
742 int i;
743
744 if (unit >= 0)
745 {
746 /* Find the instance of the function unit with the minimum hazard. */
747 int instance = unit;
748 int best_cost = actual_hazard_this_instance (unit, instance, insn,
749 clock, cost);
750 #if MAX_MULTIPLICITY > 1
751 int this_cost;
752
753 if (best_cost > cost)
754 {
755 for (i = function_units[unit].multiplicity - 1; i > 0; i--)
756 {
757 instance += FUNCTION_UNITS_SIZE;
758 this_cost = actual_hazard_this_instance (unit, instance, insn,
759 clock, cost);
760 if (this_cost < best_cost)
761 {
762 best_cost = this_cost;
763 if (this_cost <= cost)
764 break;
765 }
766 }
767 }
768 #endif
769 cost = MAX (cost, best_cost);
770 }
771 else
772 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
773 if ((unit & 1) != 0)
774 cost = actual_hazard (i, insn, clock, cost);
775
776 return cost;
777 }
778
779 /* Return the potential hazard cost of executing an instruction on the
780 units encoded by UNIT if the previous potential hazard cost was COST.
781 An insn with a large blockage time is chosen in preference to one
782 with a smaller time; an insn that uses a unit that is more likely
783 to be used is chosen in preference to one with a unit that is less
784 used. We are trying to minimize a subsequent actual hazard. */
785
786 __inline static int
787 potential_hazard (unit, insn, cost)
788 int unit, cost;
789 rtx insn;
790 {
791 int i, ncost;
792 unsigned int minb, maxb;
793
794 if (unit >= 0)
795 {
796 minb = maxb = function_units[unit].max_blockage;
797 if (maxb > 1)
798 {
799 if (function_units[unit].blockage_range_function)
800 {
801 maxb = minb = blockage_range (unit, insn);
802 maxb = MAX_BLOCKAGE_COST (maxb);
803 minb = MIN_BLOCKAGE_COST (minb);
804 }
805
806 if (maxb > 1)
807 {
808 /* Make the number of instructions left dominate. Make the
809 minimum delay dominate the maximum delay. If all these
810 are the same, use the unit number to add an arbitrary
811 ordering. Other terms can be added. */
812 ncost = minb * 0x40 + maxb;
813 ncost *= (unit_n_insns[unit] - 1) * 0x1000 + unit;
814 if (ncost > cost)
815 cost = ncost;
816 }
817 }
818 }
819 else
820 for (i = 0, unit = ~unit; unit; i++, unit >>= 1)
821 if ((unit & 1) != 0)
822 cost = potential_hazard (i, insn, cost);
823
824 return cost;
825 }
826
827 /* Compute cost of executing INSN given the dependence LINK on the insn USED.
828 This is the number of virtual cycles taken between instruction issue and
829 instruction results. */
830
831 __inline static int
832 insn_cost (insn, link, used)
833 rtx insn, link, used;
834 {
835 register int cost = INSN_COST (insn);
836
837 if (cost == 0)
838 {
839 recog_memoized (insn);
840
841 /* A USE insn, or something else we don't need to understand.
842 We can't pass these directly to result_ready_cost because it will
843 trigger a fatal error for unrecognizable insns. */
844 if (INSN_CODE (insn) < 0)
845 {
846 INSN_COST (insn) = 1;
847 return 1;
848 }
849 else
850 {
851 cost = result_ready_cost (insn);
852
853 if (cost < 1)
854 cost = 1;
855
856 INSN_COST (insn) = cost;
857 }
858 }
859
860 /* A USE insn should never require the value used to be computed. This
861 allows the computation of a function's result and parameter values to
862 overlap the return and call. */
863 recog_memoized (used);
864 if (INSN_CODE (used) < 0)
865 LINK_COST_FREE (link) = 1;
866
867 /* If some dependencies vary the cost, compute the adjustment. Most
868 commonly, the adjustment is complete: either the cost is ignored
869 (in the case of an output- or anti-dependence), or the cost is
870 unchanged. These values are cached in the link as LINK_COST_FREE
871 and LINK_COST_ZERO. */
872
873 if (LINK_COST_FREE (link))
874 cost = 1;
875 #ifdef ADJUST_COST
876 else if (! LINK_COST_ZERO (link))
877 {
878 int ncost = cost;
879
880 ADJUST_COST (used, link, insn, ncost);
881 if (ncost <= 1)
882 LINK_COST_FREE (link) = ncost = 1;
883 if (cost == ncost)
884 LINK_COST_ZERO (link) = 1;
885 cost = ncost;
886 }
887 #endif
888 return cost;
889 }
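/* A sketch of how the link-cost caching above behaves, assuming a
   hypothetical target's ADJUST_COST: if the macro reduces the cost of an
   anti-dependence link to 0, the clamp sets LINK_COST_FREE, and every later
   insn_cost query of that link returns 1 without re-running ADJUST_COST;
   if the macro leaves the cost unchanged, LINK_COST_ZERO is set and later
   queries return the unadjusted cost directly.  */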
890
891 /* Compute the priority number for INSN. */
892
893 static int
894 priority (insn)
895 rtx insn;
896 {
897 if (insn && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
898 {
899 int prev_priority;
900 int max_priority;
901 int this_priority = INSN_PRIORITY (insn);
902 rtx prev;
903
904 if (this_priority > 0)
905 return this_priority;
906
907 max_priority = 1;
908
909 /* Nonzero if these insns must be scheduled together. */
910 if (SCHED_GROUP_P (insn))
911 {
912 prev = insn;
913 while (SCHED_GROUP_P (prev))
914 {
915 prev = PREV_INSN (prev);
916 INSN_REF_COUNT (prev) += 1;
917 }
918 }
919
920 for (prev = LOG_LINKS (insn); prev; prev = XEXP (prev, 1))
921 {
922 rtx x = XEXP (prev, 0);
923
924 /* If this was a duplicate of a dependence we already deleted,
925 ignore it. */
926 if (RTX_INTEGRATED_P (prev))
927 continue;
928
929 /* A dependence pointing to a note or deleted insn is always
930 obsolete, because sched_analyze_insn will have created any
931 necessary new dependences which replace it. Notes and deleted
932 insns can be created when instructions are deleted by insn
933 splitting, or by register allocation. */
934 if (GET_CODE (x) == NOTE || INSN_DELETED_P (x))
935 {
936 remove_dependence (insn, x);
937 continue;
938 }
939
940 /* Clear the link cost adjustment bits. */
941 LINK_COST_FREE (prev) = 0;
942 #ifdef ADJUST_COST
943 LINK_COST_ZERO (prev) = 0;
944 #endif
945
946 /* This priority calculation was chosen because it results in the
947 least instruction movement, and does not hurt the performance
948 of the resulting code compared to the old algorithm.
949 This makes the sched algorithm more stable, which results
950 in better code, because there is less register pressure,
951 cross jumping is more likely to work, and debugging is easier.
952
953 When all instructions have a latency of 1, there is no need to
954 move any instructions. Subtracting one here ensures that in such
955 cases all instructions will end up with a priority of one, and
956 hence no scheduling will be done.
957
958 The original code did not subtract the one, and added the
959 insn_cost of the current instruction to its priority (i.e.
960 move the insn_cost call down to the end). */
961
962 prev_priority = priority (x) + insn_cost (x, prev, insn) - 1;
963
964 if (prev_priority > max_priority)
965 max_priority = prev_priority;
966 INSN_REF_COUNT (x) += 1;
967 }
968
969 prepare_unit (insn_unit (insn));
970 INSN_PRIORITY (insn) = max_priority;
971 return INSN_PRIORITY (insn);
972 }
973 return 0;
974 }
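/* Worked example of the recurrence above (illustrative latencies): for a
   dependence chain in which A depends on B and B depends on C, with each
   result taking 2 cycles, C gets priority 1, B gets 1 + 2 - 1 = 2, and A
   gets 2 + 2 - 1 = 3.  If every latency is 1, all three priorities
   collapse to 1 and no reordering is done, which is the point of
   subtracting one.  */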
975 \f
976 /* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
977 them to the unused_*_list variables, so that they can be reused. */
978
979 static void
980 free_pending_lists ()
981 {
982 register rtx link, prev_link;
983
984 if (pending_read_insns)
985 {
986 prev_link = pending_read_insns;
987 link = XEXP (prev_link, 1);
988
989 while (link)
990 {
991 prev_link = link;
992 link = XEXP (link, 1);
993 }
994
995 XEXP (prev_link, 1) = unused_insn_list;
996 unused_insn_list = pending_read_insns;
997 pending_read_insns = 0;
998 }
999
1000 if (pending_write_insns)
1001 {
1002 prev_link = pending_write_insns;
1003 link = XEXP (prev_link, 1);
1004
1005 while (link)
1006 {
1007 prev_link = link;
1008 link = XEXP (link, 1);
1009 }
1010
1011 XEXP (prev_link, 1) = unused_insn_list;
1012 unused_insn_list = pending_write_insns;
1013 pending_write_insns = 0;
1014 }
1015
1016 if (pending_read_mems)
1017 {
1018 prev_link = pending_read_mems;
1019 link = XEXP (prev_link, 1);
1020
1021 while (link)
1022 {
1023 prev_link = link;
1024 link = XEXP (link, 1);
1025 }
1026
1027 XEXP (prev_link, 1) = unused_expr_list;
1028 unused_expr_list = pending_read_mems;
1029 pending_read_mems = 0;
1030 }
1031
1032 if (pending_write_mems)
1033 {
1034 prev_link = pending_write_mems;
1035 link = XEXP (prev_link, 1);
1036
1037 while (link)
1038 {
1039 prev_link = link;
1040 link = XEXP (link, 1);
1041 }
1042
1043 XEXP (prev_link, 1) = unused_expr_list;
1044 unused_expr_list = pending_write_mems;
1045 pending_write_mems = 0;
1046 }
1047 }
1048
1049 /* Add an INSN and MEM reference pair to a pending INSN_LIST and MEM_LIST.
1050 The MEM is a memory reference contained within INSN, which we are saving
1051 so that we can do memory aliasing on it. */
1052
1053 static void
1054 add_insn_mem_dependence (insn_list, mem_list, insn, mem)
1055 rtx *insn_list, *mem_list, insn, mem;
1056 {
1057 register rtx link;
1058
1059 if (unused_insn_list)
1060 {
1061 link = unused_insn_list;
1062 unused_insn_list = XEXP (link, 1);
1063 }
1064 else
1065 link = rtx_alloc (INSN_LIST);
1066 XEXP (link, 0) = insn;
1067 XEXP (link, 1) = *insn_list;
1068 *insn_list = link;
1069
1070 if (unused_expr_list)
1071 {
1072 link = unused_expr_list;
1073 unused_expr_list = XEXP (link, 1);
1074 }
1075 else
1076 link = rtx_alloc (EXPR_LIST);
1077 XEXP (link, 0) = mem;
1078 XEXP (link, 1) = *mem_list;
1079 *mem_list = link;
1080
1081 pending_lists_length++;
1082 }
1083 \f
1084 /* Make a dependency between every memory reference on the pending lists
1085 and INSN, thus flushing the pending lists. If ONLY_WRITE, don't flush
1086 the read list. */
1087
1088 static void
1089 flush_pending_lists (insn, only_write)
1090 rtx insn;
1091 int only_write;
1092 {
1093 rtx link;
1094
1095 while (pending_read_insns && ! only_write)
1096 {
1097 add_dependence (insn, XEXP (pending_read_insns, 0), REG_DEP_ANTI);
1098
1099 link = pending_read_insns;
1100 pending_read_insns = XEXP (pending_read_insns, 1);
1101 XEXP (link, 1) = unused_insn_list;
1102 unused_insn_list = link;
1103
1104 link = pending_read_mems;
1105 pending_read_mems = XEXP (pending_read_mems, 1);
1106 XEXP (link, 1) = unused_expr_list;
1107 unused_expr_list = link;
1108 }
1109 while (pending_write_insns)
1110 {
1111 add_dependence (insn, XEXP (pending_write_insns, 0), REG_DEP_ANTI);
1112
1113 link = pending_write_insns;
1114 pending_write_insns = XEXP (pending_write_insns, 1);
1115 XEXP (link, 1) = unused_insn_list;
1116 unused_insn_list = link;
1117
1118 link = pending_write_mems;
1119 pending_write_mems = XEXP (pending_write_mems, 1);
1120 XEXP (link, 1) = unused_expr_list;
1121 unused_expr_list = link;
1122 }
1123 pending_lists_length = 0;
1124
1125 if (last_pending_memory_flush)
1126 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1127
1128 last_pending_memory_flush = insn;
1129 }
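/* For example, sched_analyze below calls
   flush_pending_lists (insn, CONST_CALL_P (insn)) for every CALL_INSN:
   an ordinary call becomes anti-dependent on every pending memory
   reference and is then recorded as last_pending_memory_flush, while a
   const call flushes only the pending writes.  */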
1130
1131 /* Analyze a single SET or CLOBBER rtx, X, creating all dependencies generated
1132 by the write to the destination of X, and reads of everything mentioned. */
1133
1134 static void
1135 sched_analyze_1 (x, insn)
1136 rtx x;
1137 rtx insn;
1138 {
1139 register int regno;
1140 register rtx dest = SET_DEST (x);
1141
1142 if (dest == 0)
1143 return;
1144
1145 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
1146 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1147 {
1148 if (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
1149 {
1150 /* The second and third arguments are values read by this insn. */
1151 sched_analyze_2 (XEXP (dest, 1), insn);
1152 sched_analyze_2 (XEXP (dest, 2), insn);
1153 }
1154 dest = SUBREG_REG (dest);
1155 }
1156
1157 if (GET_CODE (dest) == REG)
1158 {
1159 register int i;
1160
1161 regno = REGNO (dest);
1162
1163 /* A hard reg in a wide mode may really be multiple registers.
1164 If so, mark all of them just like the first. */
1165 if (regno < FIRST_PSEUDO_REGISTER)
1166 {
1167 i = HARD_REGNO_NREGS (regno, GET_MODE (dest));
1168 while (--i >= 0)
1169 {
1170 rtx u;
1171
1172 for (u = reg_last_uses[regno+i]; u; u = XEXP (u, 1))
1173 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1174 reg_last_uses[regno + i] = 0;
1175 if (reg_last_sets[regno + i])
1176 add_dependence (insn, reg_last_sets[regno + i],
1177 REG_DEP_OUTPUT);
1178 SET_REGNO_REG_SET (reg_pending_sets, regno + i);
1179 if ((call_used_regs[regno + i] || global_regs[regno + i])
1180 && last_function_call)
1181 /* Function calls clobber all call_used regs. */
1182 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1183 }
1184 }
1185 else
1186 {
1187 rtx u;
1188
1189 for (u = reg_last_uses[regno]; u; u = XEXP (u, 1))
1190 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1191 reg_last_uses[regno] = 0;
1192 if (reg_last_sets[regno])
1193 add_dependence (insn, reg_last_sets[regno], REG_DEP_OUTPUT);
1194 SET_REGNO_REG_SET (reg_pending_sets, regno);
1195
1196 /* Pseudos that are REG_EQUIV to something may be replaced
1197 by that during reloading. We need only add dependencies for
1198 the address in the REG_EQUIV note. */
1199 if (! reload_completed
1200 && reg_known_equiv_p[regno]
1201 && GET_CODE (reg_known_value[regno]) == MEM)
1202 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1203
1204 /* Don't let it cross a call after scheduling if it doesn't
1205 already cross one. */
1206 if (REG_N_CALLS_CROSSED (regno) == 0 && last_function_call)
1207 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1208 }
1209 }
1210 else if (GET_CODE (dest) == MEM)
1211 {
1212 /* Writing memory. */
1213
1214 if (pending_lists_length > 32)
1215 {
1216 /* Flush all pending reads and writes to prevent the pending lists
1217 from getting any larger. Insn scheduling runs too slowly when
1218 these lists get long. The number 32 was chosen because it
1219 seems like a reasonable number. When compiling GCC with itself,
1220 this flush occurs 8 times for sparc, and 10 times for m88k using
1221 the number 32. */
1222 flush_pending_lists (insn, 0);
1223 }
1224 else
1225 {
1226 rtx pending, pending_mem;
1227
1228 pending = pending_read_insns;
1229 pending_mem = pending_read_mems;
1230 while (pending)
1231 {
1232 /* If a dependency already exists, don't create a new one. */
1233 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1234 if (anti_dependence (XEXP (pending_mem, 0), dest))
1235 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1236
1237 pending = XEXP (pending, 1);
1238 pending_mem = XEXP (pending_mem, 1);
1239 }
1240
1241 pending = pending_write_insns;
1242 pending_mem = pending_write_mems;
1243 while (pending)
1244 {
1245 /* If a dependency already exists, don't create a new one. */
1246 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1247 if (output_dependence (XEXP (pending_mem, 0), dest))
1248 add_dependence (insn, XEXP (pending, 0), REG_DEP_OUTPUT);
1249
1250 pending = XEXP (pending, 1);
1251 pending_mem = XEXP (pending_mem, 1);
1252 }
1253
1254 if (last_pending_memory_flush)
1255 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1256
1257 add_insn_mem_dependence (&pending_write_insns, &pending_write_mems,
1258 insn, dest);
1259 }
1260 sched_analyze_2 (XEXP (dest, 0), insn);
1261 }
1262
1263 /* Analyze reads. */
1264 if (GET_CODE (x) == SET)
1265 sched_analyze_2 (SET_SRC (x), insn);
1266 }
1267
1268 /* Analyze the uses of memory and registers in rtx X in INSN. */
1269
1270 static void
1271 sched_analyze_2 (x, insn)
1272 rtx x;
1273 rtx insn;
1274 {
1275 register int i;
1276 register int j;
1277 register enum rtx_code code;
1278 register char *fmt;
1279
1280 if (x == 0)
1281 return;
1282
1283 code = GET_CODE (x);
1284
1285 switch (code)
1286 {
1287 case CONST_INT:
1288 case CONST_DOUBLE:
1289 case SYMBOL_REF:
1290 case CONST:
1291 case LABEL_REF:
1292 /* Ignore constants. Note that we must handle CONST_DOUBLE here
1293 because it may have a cc0_rtx in its CONST_DOUBLE_CHAIN field, but
1294 this does not mean that this insn is using cc0. */
1295 return;
1296
1297 #ifdef HAVE_cc0
1298 case CC0:
1299 {
1300 rtx link, prev;
1301
1302 /* User of CC0 depends on immediately preceding insn. */
1303 SCHED_GROUP_P (insn) = 1;
1304
1305 /* There may be a note before this insn now, but all notes will
1306 be removed before we actually try to schedule the insns, so
1307 it won't cause a problem later. We must avoid it here though. */
1308 prev = prev_nonnote_insn (insn);
1309
1310 /* Make a copy of all dependencies on the immediately previous insn,
1311 and add to this insn. This is so that all the dependencies will
1312 apply to the group. Remove an explicit dependence on this insn
1313 as SCHED_GROUP_P now represents it. */
1314
1315 if (find_insn_list (prev, LOG_LINKS (insn)))
1316 remove_dependence (insn, prev);
1317
1318 for (link = LOG_LINKS (prev); link; link = XEXP (link, 1))
1319 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1320
1321 return;
1322 }
1323 #endif
1324
1325 case REG:
1326 {
1327 int regno = REGNO (x);
1328 if (regno < FIRST_PSEUDO_REGISTER)
1329 {
1330 int i;
1331
1332 i = HARD_REGNO_NREGS (regno, GET_MODE (x));
1333 while (--i >= 0)
1334 {
1335 reg_last_uses[regno + i]
1336 = gen_rtx_INSN_LIST (VOIDmode,
1337 insn, reg_last_uses[regno + i]);
1338 if (reg_last_sets[regno + i])
1339 add_dependence (insn, reg_last_sets[regno + i], 0);
1340 if ((call_used_regs[regno + i] || global_regs[regno + i])
1341 && last_function_call)
1342 /* Function calls clobber all call_used regs. */
1343 add_dependence (insn, last_function_call, REG_DEP_ANTI);
1344 }
1345 }
1346 else
1347 {
1348 reg_last_uses[regno]
1349 = gen_rtx_INSN_LIST (VOIDmode, insn, reg_last_uses[regno]);
1350 if (reg_last_sets[regno])
1351 add_dependence (insn, reg_last_sets[regno], 0);
1352
1353 /* Pseudos that are REG_EQUIV to something may be replaced
1354 by that during reloading. We need only add dependencies for
1355 the address in the REG_EQUIV note. */
1356 if (! reload_completed
1357 && reg_known_equiv_p[regno]
1358 && GET_CODE (reg_known_value[regno]) == MEM)
1359 sched_analyze_2 (XEXP (reg_known_value[regno], 0), insn);
1360
1361 /* If the register does not already cross any calls, then add this
1362 insn to the sched_before_next_call list so that it will still
1363 not cross calls after scheduling. */
1364 if (REG_N_CALLS_CROSSED (regno) == 0)
1365 add_dependence (sched_before_next_call, insn, REG_DEP_ANTI);
1366 }
1367 return;
1368 }
1369
1370 case MEM:
1371 {
1372 /* Reading memory. */
1373
1374 rtx pending, pending_mem;
1375
1376 pending = pending_read_insns;
1377 pending_mem = pending_read_mems;
1378 while (pending)
1379 {
1380 /* If a dependency already exists, don't create a new one. */
1381 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1382 if (read_dependence (XEXP (pending_mem, 0), x))
1383 add_dependence (insn, XEXP (pending, 0), REG_DEP_ANTI);
1384
1385 pending = XEXP (pending, 1);
1386 pending_mem = XEXP (pending_mem, 1);
1387 }
1388
1389 pending = pending_write_insns;
1390 pending_mem = pending_write_mems;
1391 while (pending)
1392 {
1393 /* If a dependency already exists, don't create a new one. */
1394 if (! find_insn_list (XEXP (pending, 0), LOG_LINKS (insn)))
1395 if (true_dependence (XEXP (pending_mem, 0), VOIDmode,
1396 x, rtx_varies_p))
1397 add_dependence (insn, XEXP (pending, 0), 0);
1398
1399 pending = XEXP (pending, 1);
1400 pending_mem = XEXP (pending_mem, 1);
1401 }
1402 if (last_pending_memory_flush)
1403 add_dependence (insn, last_pending_memory_flush, REG_DEP_ANTI);
1404
1405 /* Always add these dependencies to pending_reads, since
1406 this insn may be followed by a write. */
1407 add_insn_mem_dependence (&pending_read_insns, &pending_read_mems,
1408 insn, x);
1409
1410 /* Take advantage of tail recursion here. */
1411 sched_analyze_2 (XEXP (x, 0), insn);
1412 return;
1413 }
1414
1415 case ASM_OPERANDS:
1416 case ASM_INPUT:
1417 case UNSPEC_VOLATILE:
1418 case TRAP_IF:
1419 {
1420 rtx u;
1421
1422 /* Traditional and volatile asm instructions must be considered to use
1423 and clobber all hard registers, all pseudo-registers and all of
1424 memory. So must TRAP_IF and UNSPEC_VOLATILE operations.
1425
1426 Consider for instance a volatile asm that changes the fpu rounding
1427 mode. An insn should not be moved across this even if it only uses
1428 pseudo-regs because it might give an incorrectly rounded result. */
1429 if (code != ASM_OPERANDS || MEM_VOLATILE_P (x))
1430 {
1431 int max_reg = max_reg_num ();
1432 for (i = 0; i < max_reg; i++)
1433 {
1434 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1435 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1436 reg_last_uses[i] = 0;
1437 if (reg_last_sets[i])
1438 add_dependence (insn, reg_last_sets[i], 0);
1439 }
1440 reg_pending_sets_all = 1;
1441
1442 flush_pending_lists (insn, 0);
1443 }
1444
1445 /* For all ASM_OPERANDS, we must traverse the vector of input operands.
1446 We cannot just fall through here, since then we would be confused
1447 by the ASM_INPUT rtxs inside the ASM_OPERANDS, which do not indicate
1448 traditional asms the way a top-level ASM_INPUT would. */
1449
1450 if (code == ASM_OPERANDS)
1451 {
1452 for (j = 0; j < ASM_OPERANDS_INPUT_LENGTH (x); j++)
1453 sched_analyze_2 (ASM_OPERANDS_INPUT (x, j), insn);
1454 return;
1455 }
1456 break;
1457 }
1458
1459 case PRE_DEC:
1460 case POST_DEC:
1461 case PRE_INC:
1462 case POST_INC:
1463 /* These both read and modify the result. We must handle them as writes
1464 to get proper dependencies for following instructions. We must handle
1465 them as reads to get proper dependencies from this to previous
1466 instructions. Thus we need to pass them to both sched_analyze_1
1467 and sched_analyze_2. We must call sched_analyze_2 first in order
1468 to get the proper antecedent for the read. */
1469 sched_analyze_2 (XEXP (x, 0), insn);
1470 sched_analyze_1 (x, insn);
1471 return;
1472
1473 default:
1474 break;
1475 }
1476
1477 /* Other cases: walk the insn. */
1478 fmt = GET_RTX_FORMAT (code);
1479 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
1480 {
1481 if (fmt[i] == 'e')
1482 sched_analyze_2 (XEXP (x, i), insn);
1483 else if (fmt[i] == 'E')
1484 for (j = 0; j < XVECLEN (x, i); j++)
1485 sched_analyze_2 (XVECEXP (x, i, j), insn);
1486 }
1487 }
1488
1489 /* Analyze an INSN with pattern X to find all dependencies. */
1490
1491 static void
1492 sched_analyze_insn (x, insn, loop_notes)
1493 rtx x, insn;
1494 rtx loop_notes;
1495 {
1496 register RTX_CODE code = GET_CODE (x);
1497 rtx link;
1498 int maxreg = max_reg_num ();
1499 int i;
1500
1501 if (code == SET || code == CLOBBER)
1502 sched_analyze_1 (x, insn);
1503 else if (code == PARALLEL)
1504 {
1505 register int i;
1506 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
1507 {
1508 code = GET_CODE (XVECEXP (x, 0, i));
1509 if (code == SET || code == CLOBBER)
1510 sched_analyze_1 (XVECEXP (x, 0, i), insn);
1511 else
1512 sched_analyze_2 (XVECEXP (x, 0, i), insn);
1513 }
1514 }
1515 else
1516 sched_analyze_2 (x, insn);
1517
1518 /* Mark registers CLOBBERED or used by called function. */
1519 if (GET_CODE (insn) == CALL_INSN)
1520 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
1521 {
1522 if (GET_CODE (XEXP (link, 0)) == CLOBBER)
1523 sched_analyze_1 (XEXP (link, 0), insn);
1524 else
1525 sched_analyze_2 (XEXP (link, 0), insn);
1526 }
1527
1528 /* If there is a {LOOP,EHREGION}_{BEG,END} note in the middle of a basic
1529 block, then we must be sure that no instructions are scheduled across it.
1530 Otherwise, the reg_n_refs info (which depends on loop_depth) would
1531 become incorrect. */
1532
1533 if (loop_notes)
1534 {
1535 int max_reg = max_reg_num ();
1536 rtx link;
1537
1538 for (i = 0; i < max_reg; i++)
1539 {
1540 rtx u;
1541 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1542 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1543 reg_last_uses[i] = 0;
1544 if (reg_last_sets[i])
1545 add_dependence (insn, reg_last_sets[i], 0);
1546 }
1547 reg_pending_sets_all = 1;
1548
1549 flush_pending_lists (insn, 0);
1550
1551 link = loop_notes;
1552 while (XEXP (link, 1))
1553 link = XEXP (link, 1);
1554 XEXP (link, 1) = REG_NOTES (insn);
1555 REG_NOTES (insn) = loop_notes;
1556 }
1557
1558 /* After reload, it is possible for an instruction to have a REG_DEAD note
1559 for a register that actually dies a few instructions earlier. For
1560 example, this can happen with SECONDARY_MEMORY_NEEDED reloads.
1561 In this case, we must consider the insn to use the register mentioned
1562 in the REG_DEAD note. Otherwise, we may accidentally move this insn
1563 after another insn that sets the register, thus getting obviously invalid
1564 rtl. This confuses reorg which believes that REG_DEAD notes are still
1565 meaningful.
1566
1567 ??? We would get better code if we fixed reload to put the REG_DEAD
1568 notes in the right places, but that may not be worth the effort. */
1569
1570 if (reload_completed)
1571 {
1572 rtx note;
1573
1574 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
1575 if (REG_NOTE_KIND (note) == REG_DEAD)
1576 sched_analyze_2 (XEXP (note, 0), insn);
1577 }
1578
1579 EXECUTE_IF_SET_IN_REG_SET (reg_pending_sets, 0, i,
1580 {
1581 reg_last_sets[i] = insn;
1582 });
1583 CLEAR_REG_SET (reg_pending_sets);
1584
1585 if (reg_pending_sets_all)
1586 {
1587 for (i = 0; i < maxreg; i++)
1588 reg_last_sets[i] = insn;
1589 reg_pending_sets_all = 0;
1590 }
1591
1592 /* Handle function calls and function returns created by the epilogue
1593 threading code. */
1594 if (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN)
1595 {
1596 rtx dep_insn;
1597 rtx prev_dep_insn;
1598
1599 /* When scheduling instructions, we make sure calls don't lose their
1600 accompanying USE insns by chaining dependencies between them in order.
1601
1602 Also, we must do the same thing for returns created by the epilogue
1603 threading code. Note this code works only in this special case,
1604 because other passes make no guarantee that they will never emit
1605 an instruction between a USE and a RETURN. There is such a guarantee
1606 for USE instructions immediately before a call. */
1607
1608 prev_dep_insn = insn;
1609 dep_insn = PREV_INSN (insn);
1610 while (GET_CODE (dep_insn) == INSN
1611 && GET_CODE (PATTERN (dep_insn)) == USE
1612 && GET_CODE (XEXP (PATTERN (dep_insn), 0)) == REG)
1613 {
1614 SCHED_GROUP_P (prev_dep_insn) = 1;
1615
1616 /* Make a copy of all dependencies on dep_insn, and add to insn.
1617 This is so that all of the dependencies will apply to the
1618 group. */
1619
1620 for (link = LOG_LINKS (dep_insn); link; link = XEXP (link, 1))
1621 add_dependence (insn, XEXP (link, 0), REG_NOTE_KIND (link));
1622
1623 prev_dep_insn = dep_insn;
1624 dep_insn = PREV_INSN (dep_insn);
1625 }
1626 }
1627 }
1628
1629 /* Analyze every insn between HEAD and TAIL inclusive, creating LOG_LINKS
1630 for every dependency. */
1631
1632 static int
1633 sched_analyze (head, tail)
1634 rtx head, tail;
1635 {
1636 register rtx insn;
1637 register int n_insns = 0;
1638 register rtx u;
1639 register int luid = 0;
1640 rtx loop_notes = 0;
1641
1642 for (insn = head; ; insn = NEXT_INSN (insn))
1643 {
1644 INSN_LUID (insn) = luid++;
1645
1646 if (GET_CODE (insn) == INSN || GET_CODE (insn) == JUMP_INSN)
1647 {
1648 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
1649 loop_notes = 0;
1650 n_insns += 1;
1651 }
1652 else if (GET_CODE (insn) == CALL_INSN)
1653 {
1654 rtx x;
1655 register int i;
1656
1657 /* Any instruction using a hard register which may get clobbered
1658 by a call needs to be marked as dependent on this call.
1659 This prevents a use of a hard return reg from being moved
1660 past a void call (i.e. it does not explicitly set the hard
1661 return reg). */
1662
1663 /* If this call is followed by a NOTE_INSN_SETJMP, then assume that
1664 all registers, not just hard registers, may be clobbered by this
1665 call. */
1666
1667 /* Insn, being a CALL_INSN, magically depends on
1668 `last_function_call' already. */
1669
1670 if (NEXT_INSN (insn) && GET_CODE (NEXT_INSN (insn)) == NOTE
1671 && NOTE_LINE_NUMBER (NEXT_INSN (insn)) == NOTE_INSN_SETJMP)
1672 {
1673 int max_reg = max_reg_num ();
1674 for (i = 0; i < max_reg; i++)
1675 {
1676 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1677 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1678 reg_last_uses[i] = 0;
1679 if (reg_last_sets[i])
1680 add_dependence (insn, reg_last_sets[i], 0);
1681 }
1682 reg_pending_sets_all = 1;
1683
1684 /* Add a pair of fake REG_NOTEs which we will later
1685 convert back into a NOTE_INSN_SETJMP note. See
1686 reemit_notes for why we use a pair of NOTEs. */
1687
1688 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
1689 GEN_INT (0),
1690 REG_NOTES (insn));
1691 REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_DEAD,
1692 GEN_INT (NOTE_INSN_SETJMP),
1693 REG_NOTES (insn));
1694 }
1695 else
1696 {
1697 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
1698 if (call_used_regs[i] || global_regs[i])
1699 {
1700 for (u = reg_last_uses[i]; u; u = XEXP (u, 1))
1701 add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
1702 reg_last_uses[i] = 0;
1703 if (reg_last_sets[i])
1704 add_dependence (insn, reg_last_sets[i], REG_DEP_ANTI);
1705 SET_REGNO_REG_SET (reg_pending_sets, i);
1706 }
1707 }
1708
1709 /* For each insn which shouldn't cross a call, add a dependence
1710 between that insn and this call insn. */
1711 x = LOG_LINKS (sched_before_next_call);
1712 while (x)
1713 {
1714 add_dependence (insn, XEXP (x, 0), REG_DEP_ANTI);
1715 x = XEXP (x, 1);
1716 }
1717 LOG_LINKS (sched_before_next_call) = 0;
1718
1719 sched_analyze_insn (PATTERN (insn), insn, loop_notes);
1720 loop_notes = 0;
1721
1722 /* In the absence of interprocedural alias analysis, we must flush
1723 all pending reads and writes, and start new dependencies starting
1724 from here. But only flush writes for constant calls (which may
1725 be passed a pointer to something we haven't written yet). */
1726 flush_pending_lists (insn, CONST_CALL_P (insn));
1727
1728 /* Depend this function call (actually, the user of this
1729 function call) on all hard register clobberage. */
1730 last_function_call = insn;
1731 n_insns += 1;
1732 }
1733
1734 /* See comments on reemit_notes as to why we do this. */
1735 else if (GET_CODE (insn) == NOTE
1736 && (NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_BEG
1737 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_LOOP_END
1738 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_BEG
1739 || NOTE_LINE_NUMBER (insn) == NOTE_INSN_EH_REGION_END
1740 || (NOTE_LINE_NUMBER (insn) == NOTE_INSN_SETJMP
1741 && GET_CODE (PREV_INSN (insn)) != CALL_INSN)))
1742 {
1743 loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
1744 GEN_INT (NOTE_BLOCK_NUMBER (insn)),
1745 loop_notes);
1746 loop_notes = gen_rtx_EXPR_LIST (REG_DEAD,
1747 GEN_INT (NOTE_LINE_NUMBER (insn)),
1748 loop_notes);
1749 CONST_CALL_P (loop_notes) = CONST_CALL_P (insn);
1750 }
1751
1752 if (insn == tail)
1753 return n_insns;
1754 }
1755
1756 abort ();
1757 }
1758 \f
1759 /* Called when we see a set of a register. If death is true, then we are
1760 scanning backwards. Mark that register as unborn. If nobody says
1761 otherwise, that is how things will remain. If death is false, then we
1762 are scanning forwards. Mark that register as being born. */
1763
1764 static void
1765 sched_note_set (b, x, death)
1766 int b;
1767 rtx x;
1768 int death;
1769 {
1770 register int regno;
1771 register rtx reg = SET_DEST (x);
1772 int subreg_p = 0;
1773
1774 if (reg == 0)
1775 return;
1776
1777 while (GET_CODE (reg) == SUBREG || GET_CODE (reg) == STRICT_LOW_PART
1778 || GET_CODE (reg) == SIGN_EXTRACT || GET_CODE (reg) == ZERO_EXTRACT)
1779 {
1780 /* Must treat modification of just one hardware register of a multi-reg
1781 value or just a byte field of a register exactly the same way that
1782 mark_set_1 in flow.c does, i.e. anything except a paradoxical subreg
1783 does not kill the entire register. */
1784 if (GET_CODE (reg) != SUBREG
1785 || REG_SIZE (SUBREG_REG (reg)) > REG_SIZE (reg))
1786 subreg_p = 1;
1787
1788 reg = SUBREG_REG (reg);
1789 }
1790
1791 if (GET_CODE (reg) != REG)
1792 return;
1793
1794 /* Global registers are always live, so the code below does not apply
1795 to them. */
1796
1797 regno = REGNO (reg);
1798 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
1799 {
1800 if (death)
1801 {
1802 /* If we only set part of the register, then this set does not
1803 kill it. */
1804 if (subreg_p)
1805 return;
1806
1807 /* Try killing this register. */
1808 if (regno < FIRST_PSEUDO_REGISTER)
1809 {
1810 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
1811 while (--j >= 0)
1812 {
1813 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
1814 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
1815 }
1816 }
1817 else
1818 {
1819 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
1820 SET_REGNO_REG_SET (bb_dead_regs, regno);
1821 }
1822 }
1823 else
1824 {
1825 /* Make the register live again. */
1826 if (regno < FIRST_PSEUDO_REGISTER)
1827 {
1828 int j = HARD_REGNO_NREGS (regno, GET_MODE (reg));
1829 while (--j >= 0)
1830 {
1831 SET_REGNO_REG_SET (bb_live_regs, regno + j);
1832 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
1833 }
1834 }
1835 else
1836 {
1837 SET_REGNO_REG_SET (bb_live_regs, regno);
1838 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
1839 }
1840 }
1841 }
1842 }
1843 \f
1844 /* Macros and functions for keeping the priority queue sorted, and
1845 dealing with queueing and dequeueing of instructions. */
1846
1847 #define SCHED_SORT(READY, NEW_READY, OLD_READY) \
1848 do { if ((NEW_READY) - (OLD_READY) == 1) \
1849 swap_sort (READY, NEW_READY); \
1850 else if ((NEW_READY) - (OLD_READY) > 1) \
1851 qsort (READY, NEW_READY, sizeof (rtx), rank_for_schedule); } \
1852 while (0)
1853
1854 /* Returns a positive value if y is preferred; returns a negative value if
1855 x is preferred. Should never return 0, since that will make the sort
1856 unstable. */
1857
1858 static int
1859 rank_for_schedule (x, y)
1860 const GENERIC_PTR x;
1861 const GENERIC_PTR y;
1862 {
1863 rtx tmp = *(rtx *)y;
1864 rtx tmp2 = *(rtx *)x;
1865 rtx link;
1866 int tmp_class, tmp2_class;
1867 int value;
1868
1869 /* Choose the instruction with the highest priority, if different. */
1870 if ((value = INSN_PRIORITY (tmp) - INSN_PRIORITY (tmp2)))
1871 return value;
1872
1873 if (last_scheduled_insn)
1874 {
1875 /* Classify the instructions into three classes:
1876 1) Data dependent on last scheduled insn.
1877 2) Anti/Output dependent on last scheduled insn.
1878 3) Independent of last scheduled insn, or has latency of one.
1879 Choose the insn from the highest numbered class if different. */
1880 link = find_insn_list (tmp, LOG_LINKS (last_scheduled_insn));
1881 if (link == 0 || insn_cost (tmp, link, last_scheduled_insn) == 1)
1882 tmp_class = 3;
1883 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
1884 tmp_class = 1;
1885 else
1886 tmp_class = 2;
1887
1888 link = find_insn_list (tmp2, LOG_LINKS (last_scheduled_insn));
1889 if (link == 0 || insn_cost (tmp2, link, last_scheduled_insn) == 1)
1890 tmp2_class = 3;
1891 else if (REG_NOTE_KIND (link) == 0) /* Data dependence. */
1892 tmp2_class = 1;
1893 else
1894 tmp2_class = 2;
1895
1896 if ((value = tmp_class - tmp2_class))
1897 return value;
1898 }
1899
1900 /* If insns are equally good, sort by INSN_LUID (original insn order),
1901 so that we make the sort stable. This minimizes instruction movement,
1902 thus minimizing sched's effect on debugging and cross-jumping. */
1903 return INSN_LUID (tmp) - INSN_LUID (tmp2);
1904 }
1905
1906 /* Resort the array A in which only element at index N may be out of order. */
1907
1908 __inline static void
1909 swap_sort (a, n)
1910 rtx *a;
1911 int n;
1912 {
1913 rtx insn = a[n-1];
1914 int i = n-2;
1915
1916 while (i >= 0 && rank_for_schedule (a+i, &insn) >= 0)
1917 {
1918 a[i+1] = a[i];
1919 i -= 1;
1920 }
1921 a[i+1] = insn;
1922 }
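swap_sort is one pass of insertion sort: only the element at index n-1 may be out of place, so a single leftward sift restores order in O(n). The same idea modeled on plain ints, ordered descending as the priority sort is (`mock_swap_sort` is illustrative, not the real routine):

```c
/* Array A is sorted descending except possibly at index N-1;
   sift that element left to its proper place.  */
static void
mock_swap_sort (int *a, int n)
{
  int v = a[n - 1];
  int i = n - 2;

  while (i >= 0 && a[i] < v)	/* keep larger values first */
    {
      a[i + 1] = a[i];
      i--;
    }
  a[i + 1] = v;
}
```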
1923
1924 static int max_priority;
1925
1926 /* Add INSN to the insn queue so that it fires at least N_CYCLES
1927 after the currently executing insn. */
1928
1929 __inline static void
1930 queue_insn (insn, n_cycles)
1931 rtx insn;
1932 int n_cycles;
1933 {
1934 int next_q = NEXT_Q_AFTER (q_ptr, n_cycles);
1935 NEXT_INSN (insn) = insn_queue[next_q];
1936 insn_queue[next_q] = insn;
1937 q_size += 1;
1938 }
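The insn queue is a calendar queue: each slot holds the insns that become ready a fixed number of cycles from now, and NEXT_Q_AFTER wraps the slot index around the ring. A toy model, assuming (as the real queue does) a power-of-two ring size so the wrap is a mask; the names here are illustrative:

```c
#define Q_RING 8	/* assumed power-of-two queue length */

static int q_ptr;		/* slot for the current cycle */
static int bucket[Q_RING];	/* here: count of items queued per slot */

/* An item queued with delay N_CYCLES lands N_CYCLES slots ahead of
   the current one, modulo the ring size.  */
static void
mock_queue (int n_cycles)
{
  int next_q = (q_ptr + n_cycles) & (Q_RING - 1);
  bucket[next_q]++;
}
```

Advancing the clock then just means bumping q_ptr and draining the slot it lands on.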
1939
1940 /* Return nonzero if PAT is the pattern of an insn which makes a
1941 register live. */
1942
1943 __inline static int
1944 birthing_insn_p (pat)
1945 rtx pat;
1946 {
1947 int j;
1948
1949 if (reload_completed == 1)
1950 return 0;
1951
1952 if (GET_CODE (pat) == SET
1953 && GET_CODE (SET_DEST (pat)) == REG)
1954 {
1955 rtx dest = SET_DEST (pat);
1956 int i = REGNO (dest);
1957
1958 /* It would be more accurate to use refers_to_regno_p or
1959 reg_mentioned_p to determine when the dest is not live before this
1960 insn. */
1961
1962 if (REGNO_REG_SET_P (bb_live_regs, i))
1963 return (REG_N_SETS (i) == 1);
1964
1965 return 0;
1966 }
1967 if (GET_CODE (pat) == PARALLEL)
1968 {
1969 for (j = 0; j < XVECLEN (pat, 0); j++)
1970 if (birthing_insn_p (XVECEXP (pat, 0, j)))
1971 return 1;
1972 }
1973 return 0;
1974 }
1975
1976 /* PREV is an insn that is ready to execute. Adjust its priority if that
1977 will help shorten register lifetimes. */
1978
1979 __inline static void
1980 adjust_priority (prev)
1981 rtx prev;
1982 {
1983 /* Trying to shorten register lives after reload has completed
1984 is useless and wrong. It gives inaccurate schedules. */
1985 if (reload_completed == 0)
1986 {
1987 rtx note;
1988 int n_deaths = 0;
1989
1990 /* ??? This code has no effect, because REG_DEAD notes are removed
1991 before we ever get here. */
1992 for (note = REG_NOTES (prev); note; note = XEXP (note, 1))
1993 if (REG_NOTE_KIND (note) == REG_DEAD)
1994 n_deaths += 1;
1995
1996 /* Defer scheduling insns which kill registers, since that
1997 shortens register lives. Prefer scheduling insns which
1998 make registers live for the same reason. */
1999 switch (n_deaths)
2000 {
2001 default:
2002 INSN_PRIORITY (prev) >>= 3;
2003 break;
2004 case 3:
2005 INSN_PRIORITY (prev) >>= 2;
2006 break;
2007 case 2:
2008 case 1:
2009 INSN_PRIORITY (prev) >>= 1;
2010 break;
2011 case 0:
2012 if (birthing_insn_p (PATTERN (prev)))
2013 {
2014 int max = max_priority;
2015
2016 if (max > INSN_PRIORITY (prev))
2017 INSN_PRIORITY (prev) = max;
2018 }
2019 break;
2020 }
2021 #ifdef ADJUST_PRIORITY
2022 ADJUST_PRIORITY (prev);
2023 #endif
2024 }
2025 }
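The switch above is a shift-based penalty: the more registers an insn kills, the further its priority is right-shifted, deferring it. Reduced to a pure function for illustration (the birthing case, which raises priority rather than lowering it, is left out of this sketch):

```c
/* Hypothetical distillation of the death-count heuristic: return the
   adjusted priority for an insn that kills N_DEATHS registers.  */
static int
mock_adjust (int priority, int n_deaths)
{
  switch (n_deaths)
    {
    case 0:
      return priority;		/* birthing-insn boost omitted */
    case 1:
    case 2:
      return priority >> 1;
    case 3:
      return priority >> 2;
    default:
      return priority >> 3;
    }
}
```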
2026
2027 /* INSN is the "currently executing insn". Launch each insn which was
2028 waiting on INSN (in the backwards dataflow sense). READY is a
2029 vector of insns which are ready to fire. N_READY is the number of
2030 elements in READY. CLOCK is the current virtual cycle. */
2031
2032 static int
2033 schedule_insn (insn, ready, n_ready, clock)
2034 rtx insn;
2035 rtx *ready;
2036 int n_ready;
2037 int clock;
2038 {
2039 rtx link;
2040 int new_ready = n_ready;
2041
2042 if (MAX_BLOCKAGE > 1)
2043 schedule_unit (insn_unit (insn), insn, clock);
2044
2045 if (LOG_LINKS (insn) == 0)
2046 return n_ready;
2047
2048 /* This is used by the function adjust_priority above. */
2049 if (n_ready > 0)
2050 max_priority = MAX (INSN_PRIORITY (ready[0]), INSN_PRIORITY (insn));
2051 else
2052 max_priority = INSN_PRIORITY (insn);
2053
2054 for (link = LOG_LINKS (insn); link != 0; link = XEXP (link, 1))
2055 {
2056 rtx prev = XEXP (link, 0);
2057 int cost = insn_cost (prev, link, insn);
2058
2059 if ((INSN_REF_COUNT (prev) -= 1) != 0)
2060 {
2061 /* We satisfied one requirement to fire PREV. Record the earliest
2062 time when PREV can fire. No need to do this if the cost is 1,
2063 because PREV can fire no sooner than the next cycle. */
2064 if (cost > 1)
2065 INSN_TICK (prev) = MAX (INSN_TICK (prev), clock + cost);
2066 }
2067 else
2068 {
2069 /* We satisfied the last requirement to fire PREV. Ensure that all
2070 timing requirements are satisfied. */
2071 if (INSN_TICK (prev) - clock > cost)
2072 cost = INSN_TICK (prev) - clock;
2073
2074 /* Adjust the priority of PREV and either put it on the ready
2075 list or queue it. */
2076 adjust_priority (prev);
2077 if (cost <= 1)
2078 ready[new_ready++] = prev;
2079 else
2080 queue_insn (prev, cost);
2081 }
2082 }
2083
2084 return new_ready;
2085 }
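When the last dependence on PREV is satisfied, the loop above clamps the dependence cost against the ticks still to elapse before INSN_TICK: PREV may not fire sooner than the latest constraint already recorded for it. That clamp, isolated as a hypothetical helper:

```c
/* Effective delay before an insn may fire: the larger of the
   dependence cost and the distance to its recorded earliest tick.
   Illustrative restatement of the logic in schedule_insn.  */
static int
effective_cost (int tick, int clock, int cost)
{
  if (tick - clock > cost)
    cost = tick - clock;
  return cost;
}
```

A result of 1 or less puts the insn on the ready list; anything larger queues it for that many cycles.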
2086
2087 /* Given N_READY insns in the ready list READY at time CLOCK, queue
2088 those that are blocked due to function unit hazards and rearrange
2089 the remaining ones to minimize subsequent function unit hazards. */
2090
2091 static int
2092 schedule_select (ready, n_ready, clock, file)
2093 rtx *ready;
2094 int n_ready, clock;
2095 FILE *file;
2096 {
2097 int pri = INSN_PRIORITY (ready[0]);
2098 int i, j, k, q, cost, best_cost, best_insn = 0, new_ready = n_ready;
2099 rtx insn;
2100
2101 /* Work down the ready list in groups of instructions with the same
2102 priority value. Queue insns in the group that are blocked and
2103 select among those that remain for the one with the largest
2104 potential hazard. */
2105 for (i = 0; i < n_ready; i = j)
2106 {
2107 int opri = pri;
2108 for (j = i + 1; j < n_ready; j++)
2109 if ((pri = INSN_PRIORITY (ready[j])) != opri)
2110 break;
2111
2112 /* Queue insns in the group that are blocked. */
2113 for (k = i, q = 0; k < j; k++)
2114 {
2115 insn = ready[k];
2116 if ((cost = actual_hazard (insn_unit (insn), insn, clock, 0)) != 0)
2117 {
2118 q++;
2119 ready[k] = 0;
2120 queue_insn (insn, cost);
2121 if (file)
2122 fprintf (file, "\n;; blocking insn %d for %d cycles",
2123 INSN_UID (insn), cost);
2124 }
2125 }
2126 new_ready -= q;
2127
2128 /* Check the next group if all insns were queued. */
2129 if (j - i - q == 0)
2130 continue;
2131
2132 /* If more than one remains, select the first one with the largest
2133 potential hazard. */
2134 else if (j - i - q > 1)
2135 {
2136 best_cost = -1;
2137 for (k = i; k < j; k++)
2138 {
2139 if ((insn = ready[k]) == 0)
2140 continue;
2141 if ((cost = potential_hazard (insn_unit (insn), insn, 0))
2142 > best_cost)
2143 {
2144 best_cost = cost;
2145 best_insn = k;
2146 }
2147 }
2148 }
2149 /* We have found a suitable insn to schedule. */
2150 break;
2151 }
2152
2153 /* Move the best insn to the front of the ready list. */
2154 if (best_insn != 0)
2155 {
2156 if (file)
2157 {
2158 fprintf (file, ", now");
2159 for (i = 0; i < n_ready; i++)
2160 if (ready[i])
2161 fprintf (file, " %d", INSN_UID (ready[i]));
2162 fprintf (file, "\n;; insn %d has a greater potential hazard",
2163 INSN_UID (ready[best_insn]));
2164 }
2165 for (i = best_insn; i > 0; i--)
2166 {
2167 insn = ready[i-1];
2168 ready[i-1] = ready[i];
2169 ready[i] = insn;
2170 }
2171 }
2172
2173 /* Compact the ready list. */
2174 if (new_ready < n_ready)
2175 for (i = j = 0; i < n_ready; i++)
2176 if (ready[i])
2177 ready[j++] = ready[i];
2178
2179 return new_ready;
2180 }
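The compaction at the end of schedule_select (squeezing out the entries zeroed when their insns were queued) is the standard two-index in-place filter. A sketch on ints, with 0 playing the role of a cleared entry:

```c
/* Remove zero entries from A in place; return the new length.
   Mirrors the READY-list compaction, on plain ints.  */
static int
mock_compact (int *a, int n)
{
  int i, j = 0;

  for (i = 0; i < n; i++)
    if (a[i] != 0)
      a[j++] = a[i];
  return j;
}
```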
2181
2182 /* Add a REG_DEAD note for REG to INSN, reusing a REG_DEAD note from the
2183 dead_notes list. */
2184
2185 static void
2186 create_reg_dead_note (reg, insn)
2187 rtx reg, insn;
2188 {
2189 rtx link;
2190
2191 /* The number of registers killed after scheduling must be the same as the
2192 number of registers killed before scheduling. The number of REG_DEAD
2193 notes may not be conserved, i.e. two SImode hard register REG_DEAD notes
2194 might become one DImode hard register REG_DEAD note, but the number of
2195 registers killed will be conserved.
2196
2197 We carefully remove REG_DEAD notes from the dead_notes list, so that
2198 there will be none left at the end. If we run out early, then there
2199 is a bug somewhere in flow, combine and/or sched. */
2200
2201 if (dead_notes == 0)
2202 {
2203 #if 1
2204 abort ();
2205 #else
2206 link = rtx_alloc (EXPR_LIST);
2207 PUT_REG_NOTE_KIND (link, REG_DEAD);
2208 #endif
2209 }
2210 else
2211 {
2212 /* Number of regs killed by REG. */
2213 int regs_killed = (REGNO (reg) >= FIRST_PSEUDO_REGISTER ? 1
2214 : HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg)));
2215 /* Number of regs killed by REG_DEAD notes taken off the list. */
2216 int reg_note_regs;
2217
2218 link = dead_notes;
2219 reg_note_regs = (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2220 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2221 GET_MODE (XEXP (link, 0))));
2222 while (reg_note_regs < regs_killed)
2223 {
2224 /* LINK might be zero if we killed more registers after scheduling
2225 than before, and the last hard register we kill is actually
2226 multiple hard regs. */
2227 if (link == NULL_RTX)
2228 abort ();
2229
2230 link = XEXP (link, 1);
2231 reg_note_regs += (REGNO (XEXP (link, 0)) >= FIRST_PSEUDO_REGISTER ? 1
2232 : HARD_REGNO_NREGS (REGNO (XEXP (link, 0)),
2233 GET_MODE (XEXP (link, 0))));
2234 }
2235 dead_notes = XEXP (link, 1);
2236
2237 /* If we took off too many register kills, put the extra ones back. */
2238 while (reg_note_regs > regs_killed)
2239 {
2240 rtx temp_reg, temp_link;
2241
2242 temp_reg = gen_rtx_REG (word_mode, 0);
2243 temp_link = rtx_alloc (EXPR_LIST);
2244 PUT_REG_NOTE_KIND (temp_link, REG_DEAD);
2245 XEXP (temp_link, 0) = temp_reg;
2246 XEXP (temp_link, 1) = dead_notes;
2247 dead_notes = temp_link;
2248 reg_note_regs--;
2249 }
2250 }
2251
2252 XEXP (link, 0) = reg;
2253 XEXP (link, 1) = REG_NOTES (insn);
2254 REG_NOTES (insn) = link;
2255 }
2256
2257 /* Subroutine of attach_deaths_insn--handles the recursive search
2258 through INSN. If SET_P is true, then X is being modified by the insn. */
2259
2260 static void
2261 attach_deaths (x, insn, set_p)
2262 rtx x;
2263 rtx insn;
2264 int set_p;
2265 {
2266 register int i;
2267 register int j;
2268 register enum rtx_code code;
2269 register char *fmt;
2270
2271 if (x == 0)
2272 return;
2273
2274 code = GET_CODE (x);
2275
2276 switch (code)
2277 {
2278 case CONST_INT:
2279 case CONST_DOUBLE:
2280 case LABEL_REF:
2281 case SYMBOL_REF:
2282 case CONST:
2283 case CODE_LABEL:
2284 case PC:
2285 case CC0:
2286 /* Get rid of the easy cases first. */
2287 return;
2288
2289 case REG:
2290 {
2291 /* If the register dies in this insn, queue that note, and mark
2292 this register as needing to die. */
2293 /* This code is very similar to mark_used_1 (if set_p is false)
2294 and mark_set_1 (if set_p is true) in flow.c. */
2295
2296 register int regno;
2297 int some_needed;
2298 int all_needed;
2299
2300 if (set_p)
2301 return;
2302
2303 regno = REGNO (x);
2304 all_needed = some_needed = REGNO_REG_SET_P (old_live_regs, regno);
2305 if (regno < FIRST_PSEUDO_REGISTER)
2306 {
2307 int n;
2308
2309 n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2310 while (--n > 0)
2311 {
2312 int needed = (REGNO_REG_SET_P (old_live_regs, regno + n));
2313 some_needed |= needed;
2314 all_needed &= needed;
2315 }
2316 }
2317
2318 /* If it wasn't live before we started, then add a REG_DEAD note.
2319 We must check the previous lifetime info not the current info,
2320 because we may have to execute this code several times, e.g.
2321 once for a clobber (which doesn't add a note) and later
2322 for a use (which does add a note).
2323
2324 Always make the register live. We must do this even if it was
2325 live before, because this may be an insn which sets and uses
2326 the same register, in which case the register has already been
2327 killed, so we must make it live again.
2328
2329 Global registers are always live, and should never have a REG_DEAD
2330 note added for them, so none of the code below applies to them. */
2331
2332 if (regno >= FIRST_PSEUDO_REGISTER || ! global_regs[regno])
2333 {
2334 /* Never add REG_DEAD notes for the FRAME_POINTER_REGNUM or the
2335 STACK_POINTER_REGNUM, since these are always considered to be
2336 live. Similarly for ARG_POINTER_REGNUM if it is fixed. */
2337 if (regno != FRAME_POINTER_REGNUM
2338 #if HARD_FRAME_POINTER_REGNUM != FRAME_POINTER_REGNUM
2339 && ! (regno == HARD_FRAME_POINTER_REGNUM)
2340 #endif
2341 #if ARG_POINTER_REGNUM != FRAME_POINTER_REGNUM
2342 && ! (regno == ARG_POINTER_REGNUM && fixed_regs[regno])
2343 #endif
2344 && regno != STACK_POINTER_REGNUM)
2345 {
2346 /* ??? It is perhaps a dead_or_set_p bug that it does
2347 not check for REG_UNUSED notes itself. This is necessary
2348 for the case where the SET_DEST is a subreg of regno, as
2349 dead_or_set_p handles subregs specially. */
2350 if (! all_needed && ! dead_or_set_p (insn, x)
2351 && ! find_reg_note (insn, REG_UNUSED, x))
2352 {
2353 /* Check for the case where the register dying partially
2354 overlaps the register set by this insn. */
2355 if (regno < FIRST_PSEUDO_REGISTER
2356 && HARD_REGNO_NREGS (regno, GET_MODE (x)) > 1)
2357 {
2358 int n = HARD_REGNO_NREGS (regno, GET_MODE (x));
2359 while (--n >= 0)
2360 some_needed |= dead_or_set_regno_p (insn, regno + n);
2361 }
2362
2363 /* If none of the words in X is needed, make a REG_DEAD
2364 note. Otherwise, we must make partial REG_DEAD
2365 notes. */
2366 if (! some_needed)
2367 create_reg_dead_note (x, insn);
2368 else
2369 {
2370 int i;
2371
2372 /* Don't make a REG_DEAD note for a part of a
2373 register that is set in the insn. */
2374 for (i = HARD_REGNO_NREGS (regno, GET_MODE (x)) - 1;
2375 i >= 0; i--)
2376 if (! REGNO_REG_SET_P (old_live_regs, regno + i)
2377 && ! dead_or_set_regno_p (insn, regno + i))
2378 create_reg_dead_note (gen_rtx_REG (reg_raw_mode[regno + i],
2379 regno + i),
2380 insn);
2381 }
2382 }
2383 }
2384
2385 if (regno < FIRST_PSEUDO_REGISTER)
2386 {
2387 int j = HARD_REGNO_NREGS (regno, GET_MODE (x));
2388 while (--j >= 0)
2389 {
2390 CLEAR_REGNO_REG_SET (bb_dead_regs, regno + j);
2391 SET_REGNO_REG_SET (bb_live_regs, regno + j);
2392 }
2393 }
2394 else
2395 {
2396 CLEAR_REGNO_REG_SET (bb_dead_regs, regno);
2397 SET_REGNO_REG_SET (bb_live_regs, regno);
2398 }
2399 }
2400 return;
2401 }
2402
2403 case MEM:
2404 /* Handle tail-recursive case. */
2405 attach_deaths (XEXP (x, 0), insn, 0);
2406 return;
2407
2408 case SUBREG:
2409 case STRICT_LOW_PART:
2410 /* These two cases preserve the value of SET_P, so handle them
2411 separately. */
2412 attach_deaths (XEXP (x, 0), insn, set_p);
2413 return;
2414
2415 case ZERO_EXTRACT:
2416 case SIGN_EXTRACT:
2417 /* This case preserves the value of SET_P for the first operand, but
2418 clears it for the other two. */
2419 attach_deaths (XEXP (x, 0), insn, set_p);
2420 attach_deaths (XEXP (x, 1), insn, 0);
2421 attach_deaths (XEXP (x, 2), insn, 0);
2422 return;
2423
2424 default:
2425 /* Other cases: walk the insn. */
2426 fmt = GET_RTX_FORMAT (code);
2427 for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
2428 {
2429 if (fmt[i] == 'e')
2430 attach_deaths (XEXP (x, i), insn, 0);
2431 else if (fmt[i] == 'E')
2432 for (j = 0; j < XVECLEN (x, i); j++)
2433 attach_deaths (XVECEXP (x, i, j), insn, 0);
2434 }
2435 }
2436 }
2437
2438 /* After INSN has executed, add register death notes for each register
2439 that is dead after INSN. */
2440
2441 static void
2442 attach_deaths_insn (insn)
2443 rtx insn;
2444 {
2445 rtx x = PATTERN (insn);
2446 register RTX_CODE code = GET_CODE (x);
2447 rtx link;
2448
2449 if (code == SET)
2450 {
2451 attach_deaths (SET_SRC (x), insn, 0);
2452
2453 /* A register might die here even if it is the destination, e.g.
2454 it is the target of a volatile read and is otherwise unused.
2455 Hence we must always call attach_deaths for the SET_DEST. */
2456 attach_deaths (SET_DEST (x), insn, 1);
2457 }
2458 else if (code == PARALLEL)
2459 {
2460 register int i;
2461 for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
2462 {
2463 code = GET_CODE (XVECEXP (x, 0, i));
2464 if (code == SET)
2465 {
2466 attach_deaths (SET_SRC (XVECEXP (x, 0, i)), insn, 0);
2467
2468 attach_deaths (SET_DEST (XVECEXP (x, 0, i)), insn, 1);
2469 }
2470 /* Flow does not add REG_DEAD notes to registers that die in
2471 clobbers, so we can't either. */
2472 else if (code != CLOBBER)
2473 attach_deaths (XVECEXP (x, 0, i), insn, 0);
2474 }
2475 }
2476 /* If this is a CLOBBER, only add REG_DEAD notes to registers inside a
2477 MEM being clobbered, just like flow. */
2478 else if (code == CLOBBER && GET_CODE (XEXP (x, 0)) == MEM)
2479 attach_deaths (XEXP (XEXP (x, 0), 0), insn, 0);
2480 /* Otherwise don't add a death note to things being clobbered. */
2481 else if (code != CLOBBER)
2482 attach_deaths (x, insn, 0);
2483
2484 /* Make death notes for things used in the called function. */
2485 if (GET_CODE (insn) == CALL_INSN)
2486 for (link = CALL_INSN_FUNCTION_USAGE (insn); link; link = XEXP (link, 1))
2487 attach_deaths (XEXP (XEXP (link, 0), 0), insn,
2488 GET_CODE (XEXP (link, 0)) == CLOBBER);
2489 }
2490
2491 /* Delete notes beginning with INSN and maybe put them in the chain
2492 of notes ended by NOTE_LIST.
2493 Returns the insn following the notes. */
2494
2495 static rtx
2496 unlink_notes (insn, tail)
2497 rtx insn, tail;
2498 {
2499 rtx prev = PREV_INSN (insn);
2500
2501 while (insn != tail && GET_CODE (insn) == NOTE)
2502 {
2503 rtx next = NEXT_INSN (insn);
2504 /* Delete the note from its current position. */
2505 if (prev)
2506 NEXT_INSN (prev) = next;
2507 if (next)
2508 PREV_INSN (next) = prev;
2509
2510 if (write_symbols != NO_DEBUG && NOTE_LINE_NUMBER (insn) > 0)
2511 /* Record line-number notes so they can be reused. */
2512 LINE_NOTE (insn) = insn;
2513
2514 /* Don't save away NOTE_INSN_SETJMPs, because they must remain
2515 immediately after the call they follow. We use a fake
2516 (REG_DEAD (const_int -1)) note to remember them.
2517 Likewise with NOTE_INSN_{LOOP,EHREGION}_{BEG,END}. */
2518 else if (NOTE_LINE_NUMBER (insn) != NOTE_INSN_SETJMP
2519 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_BEG
2520 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_LOOP_END
2521 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_BEG
2522 && NOTE_LINE_NUMBER (insn) != NOTE_INSN_EH_REGION_END)
2523 {
2524 /* Insert the note at the end of the notes list. */
2525 PREV_INSN (insn) = note_list;
2526 if (note_list)
2527 NEXT_INSN (note_list) = insn;
2528 note_list = insn;
2529 }
2530
2531 insn = next;
2532 }
2533 return insn;
2534 }
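The splice at the top of that loop is ordinary doubly-linked-list removal via the NEXT_INSN/PREV_INSN pointers. The same two pointer updates on a hypothetical node type, for reference:

```c
/* Unlink N from a doubly linked chain, guarding both ends as
   unlink_notes does for the insn chain.  Illustrative type.  */
struct node
{
  struct node *prev, *next;
};

static void
unlink_node (struct node *n)
{
  if (n->prev)
    n->prev->next = n->next;
  if (n->next)
    n->next->prev = n->prev;
}
```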
2535
2536 /* Constructor for `sometimes' data structure. */
2537
2538 static int
2539 new_sometimes_live (regs_sometimes_live, regno, sometimes_max)
2540 struct sometimes *regs_sometimes_live;
2541 int regno;
2542 int sometimes_max;
2543 {
2544 register struct sometimes *p;
2545
2546 /* There should never be a register greater than max_regno here. If there
2547 is, it means that a define_split has created a new pseudo reg. This
2548 is not allowed, since there will not be flow info available for any
2549 new register, so catch the error here. */
2550 if (regno >= max_regno)
2551 abort ();
2552
2553 p = &regs_sometimes_live[sometimes_max];
2554 p->regno = regno;
2555 p->live_length = 0;
2556 p->calls_crossed = 0;
2557 sometimes_max++;
2558 return sometimes_max;
2559 }
2560
2561 /* Count lengths of all regs we are currently tracking,
2562 and find new registers no longer live. */
2563
2564 static void
2565 finish_sometimes_live (regs_sometimes_live, sometimes_max)
2566 struct sometimes *regs_sometimes_live;
2567 int sometimes_max;
2568 {
2569 int i;
2570
2571 for (i = 0; i < sometimes_max; i++)
2572 {
2573 register struct sometimes *p = &regs_sometimes_live[i];
2574 int regno = p->regno;
2575
2576 sched_reg_live_length[regno] += p->live_length;
2577 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
2578 }
2579 }
2580
2581 /* Search INSN for fake REG_DEAD note pairs for NOTE_INSN_SETJMP,
2582 NOTE_INSN_{LOOP,EHREGION}_{BEG,END}; and convert them back into
2583 NOTEs. The REG_DEAD note following the first one contains the saved
2584 value of NOTE_BLOCK_NUMBER, which is needed for
2585 NOTE_INSN_EH_REGION_{BEG,END} NOTEs. LAST is the last instruction
2586 output by the instruction scheduler. Return the new value of LAST. */
2587
2588 static rtx
2589 reemit_notes (insn, last)
2590 rtx insn;
2591 rtx last;
2592 {
2593 rtx note;
2594
2595 for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
2596 {
2597 if (REG_NOTE_KIND (note) == REG_DEAD
2598 && GET_CODE (XEXP (note, 0)) == CONST_INT)
2599 {
2600 if (INTVAL (XEXP (note, 0)) == NOTE_INSN_SETJMP)
2601 {
2602 CONST_CALL_P (emit_note_after (INTVAL (XEXP (note, 0)), insn))
2603 = CONST_CALL_P (note);
2604 remove_note (insn, note);
2605 note = XEXP (note, 1);
2606 }
2607 else
2608 {
2609 last = emit_note_before (INTVAL (XEXP (note, 0)), last);
2610 remove_note (insn, note);
2611 note = XEXP (note, 1);
2612 NOTE_BLOCK_NUMBER (last) = INTVAL (XEXP (note, 0));
2613 }
2614 remove_note (insn, note);
2615 }
2616 }
2617 return last;
2618 }
2619
2620 /* Use modified list scheduling to rearrange insns in basic block
2621 B. FILE, if nonzero, is where we dump interesting output about
2622 this pass. */
2623
2624 static void
2625 schedule_block (b, file)
2626 int b;
2627 FILE *file;
2628 {
2629 rtx insn, last;
2630 rtx *ready, link;
2631 int i, j, n_ready = 0, new_ready, n_insns;
2632 int sched_n_insns = 0;
2633 int clock;
2634 #define NEED_NOTHING 0
2635 #define NEED_HEAD 1
2636 #define NEED_TAIL 2
2637 int new_needs;
2638
2639 /* HEAD and TAIL delimit the region being scheduled. */
2640 rtx head = basic_block_head[b];
2641 rtx tail = basic_block_end[b];
2642 /* PREV_HEAD and NEXT_TAIL are the boundaries of the insns
2643 being scheduled. When the insns have been ordered,
2644 these insns delimit where the new insns are to be
2645 spliced back into the insn chain. */
2646 rtx next_tail;
2647 rtx prev_head;
2648
2649 /* Keep life information accurate. */
2650 register struct sometimes *regs_sometimes_live;
2651 int sometimes_max;
2652
2653 if (file)
2654 fprintf (file, ";;\t -- basic block number %d from %d to %d --\n",
2655 b, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
2656
2657 i = max_reg_num ();
2658 reg_last_uses = (rtx *) alloca (i * sizeof (rtx));
2659 bzero ((char *) reg_last_uses, i * sizeof (rtx));
2660 reg_last_sets = (rtx *) alloca (i * sizeof (rtx));
2661 bzero ((char *) reg_last_sets, i * sizeof (rtx));
2662 reg_pending_sets = ALLOCA_REG_SET ();
2663 CLEAR_REG_SET (reg_pending_sets);
2664 reg_pending_sets_all = 0;
2665 clear_units ();
2666
2667 #if 0
2668 /* We used to have code to avoid getting parameters moved from hard
2669 argument registers into pseudos.
2670
2671 However, it was removed when it proved to be of marginal benefit and
2672 caused problems because of different notions of what the "head" insn
2673 was. */
2674
2675 /* Remove certain insns at the beginning from scheduling,
2676 by advancing HEAD. */
2677
2678 /* At the start of a function, before reload has run, don't delay getting
2679 parameters from hard registers into pseudo registers. */
2680 if (reload_completed == 0 && b == 0)
2681 {
2682 while (head != tail
2683 && GET_CODE (head) == NOTE
2684 && NOTE_LINE_NUMBER (head) != NOTE_INSN_FUNCTION_BEG)
2685 head = NEXT_INSN (head);
2686 while (head != tail
2687 && GET_CODE (head) == INSN
2688 && GET_CODE (PATTERN (head)) == SET)
2689 {
2690 rtx src = SET_SRC (PATTERN (head));
2691 while (GET_CODE (src) == SUBREG
2692 || GET_CODE (src) == SIGN_EXTEND
2693 || GET_CODE (src) == ZERO_EXTEND
2694 || GET_CODE (src) == SIGN_EXTRACT
2695 || GET_CODE (src) == ZERO_EXTRACT)
2696 src = XEXP (src, 0);
2697 if (GET_CODE (src) != REG
2698 || REGNO (src) >= FIRST_PSEUDO_REGISTER)
2699 break;
2700 /* Keep this insn from ever being scheduled. */
2701 INSN_REF_COUNT (head) = 1;
2702 head = NEXT_INSN (head);
2703 }
2704 }
2705 #endif
2706
2707 /* Don't include any notes or labels at the beginning of the
2708 basic block, or notes at the ends of basic blocks. */
2709 while (head != tail)
2710 {
2711 if (GET_CODE (head) == NOTE)
2712 head = NEXT_INSN (head);
2713 else if (GET_CODE (tail) == NOTE)
2714 tail = PREV_INSN (tail);
2715 else if (GET_CODE (head) == CODE_LABEL)
2716 head = NEXT_INSN (head);
2717 else break;
2718 }
2719 /* If the only insn left is a NOTE or a CODE_LABEL, then there is no need
2720 to schedule this block. */
2721 if (head == tail
2722 && (GET_CODE (head) == NOTE || GET_CODE (head) == CODE_LABEL))
2723 goto ret;
2724
2725 #if 0
2726 /* This short-cut doesn't work. It does not count call insns crossed by
2727 registers in reg_sometimes_live. It does not mark these registers as
2728 dead if they die in this block. It does not mark these registers live
2729 (or create new reg_sometimes_live entries if necessary) if they are born
2730 in this block.
2731
2732 The easy solution is to just always schedule a block. This block only
2733 has one insn, so this won't slow down this pass by much. */
2734
2735 if (head == tail)
2736 goto ret;
2737 #endif
2738
2739 /* Now HEAD through TAIL are the insns actually to be rearranged;
2740 Let PREV_HEAD and NEXT_TAIL enclose them. */
2741 prev_head = PREV_INSN (head);
2742 next_tail = NEXT_INSN (tail);
2743
2744 /* Initialize basic block data structures. */
2745 dead_notes = 0;
2746 pending_read_insns = 0;
2747 pending_read_mems = 0;
2748 pending_write_insns = 0;
2749 pending_write_mems = 0;
2750 pending_lists_length = 0;
2751 last_pending_memory_flush = 0;
2752 last_function_call = 0;
2753 last_scheduled_insn = 0;
2754
2755 LOG_LINKS (sched_before_next_call) = 0;
2756
2757 n_insns = sched_analyze (head, tail);
2758 if (n_insns == 0)
2759 {
2760 free_pending_lists ();
2761 goto ret;
2762 }
2763
2764 /* Allocate vector to hold insns to be rearranged (except those
2765 insns which are controlled by an insn with SCHED_GROUP_P set).
2766 All these insns are included between ORIG_HEAD and ORIG_TAIL,
2767 as those variables ultimately are set up. */
2768 ready = (rtx *) alloca ((n_insns+1) * sizeof (rtx));
2769
2770 /* TAIL is now the last of the insns to be rearranged.
2771 Put those insns into the READY vector. */
2772 insn = tail;
2773
2774 /* For all branches, calls, uses, and cc0 setters, force them to remain
2775 in order at the end of the block by adding dependencies and giving
2776 the last a high priority. There may be notes present, and prev_head
2777 may also be a note.
2778
2779 Branches must obviously remain at the end. Calls should remain at the
2780 end since moving them results in worse register allocation. Uses remain
2781 at the end to ensure proper register allocation. cc0 setters remain
2782 at the end because they can't be moved away from their cc0 user. */
2783 last = 0;
2784 while (GET_CODE (insn) == CALL_INSN || GET_CODE (insn) == JUMP_INSN
2785 || (GET_CODE (insn) == INSN
2786 && (GET_CODE (PATTERN (insn)) == USE
2787 #ifdef HAVE_cc0
2788 || sets_cc0_p (PATTERN (insn))
2789 #endif
2790 ))
2791 || GET_CODE (insn) == NOTE)
2792 {
2793 if (GET_CODE (insn) != NOTE)
2794 {
2795 priority (insn);
2796 if (last == 0)
2797 {
2798 ready[n_ready++] = insn;
2799 INSN_PRIORITY (insn) = TAIL_PRIORITY - i;
2800 INSN_REF_COUNT (insn) = 0;
2801 }
2802 else if (! find_insn_list (insn, LOG_LINKS (last)))
2803 {
2804 add_dependence (last, insn, REG_DEP_ANTI);
2805 INSN_REF_COUNT (insn)++;
2806 }
2807 last = insn;
2808
2809 /* Skip over insns that are part of a group. */
2810 while (SCHED_GROUP_P (insn))
2811 {
2812 insn = prev_nonnote_insn (insn);
2813 priority (insn);
2814 }
2815 }
2816
2817 insn = PREV_INSN (insn);
2818 /* Don't overrun the bounds of the basic block. */
2819 if (insn == prev_head)
2820 break;
2821 }
2822
2823 /* Assign priorities to instructions. Also check whether they
2824 are in priority order already. If so then I will be nonnegative.
2825 We use this shortcut only before reloading. */
2826 #if 0
2827 i = reload_completed ? DONE_PRIORITY : MAX_PRIORITY;
2828 #endif
2829
2830 for (; insn != prev_head; insn = PREV_INSN (insn))
2831 {
2832 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2833 {
2834 priority (insn);
2835 if (INSN_REF_COUNT (insn) == 0)
2836 {
2837 if (last == 0)
2838 ready[n_ready++] = insn;
2839 else
2840 {
2841 /* Make this dependent on the last of the instructions
2842 that must remain in order at the end of the block. */
2843 add_dependence (last, insn, REG_DEP_ANTI);
2844 INSN_REF_COUNT (insn) = 1;
2845 }
2846 }
2847 if (SCHED_GROUP_P (insn))
2848 {
2849 while (SCHED_GROUP_P (insn))
2850 {
2851 insn = prev_nonnote_insn (insn);
2852 priority (insn);
2853 }
2854 continue;
2855 }
2856 #if 0
2857 if (i < 0)
2858 continue;
2859 if (INSN_PRIORITY (insn) < i)
2860 i = INSN_PRIORITY (insn);
2861 else if (INSN_PRIORITY (insn) > i)
2862 i = DONE_PRIORITY;
2863 #endif
2864 }
2865 }
2866
2867 #if 0
2868 /* This short-cut doesn't work. It does not count call insns crossed by
2869 registers in reg_sometimes_live. It does not mark these registers as
2870 dead if they die in this block. It does not mark these registers live
2871 (or create new reg_sometimes_live entries if necessary) if they are born
2872 in this block.
2873
2874 The easy solution is to just always schedule a block. These blocks tend
2875 to be very short, so this doesn't slow down this pass by much. */
2876
2877 /* If existing order is good, don't bother to reorder. */
2878 if (i != DONE_PRIORITY)
2879 {
2880 if (file)
2881 fprintf (file, ";; already scheduled\n");
2882
2883 if (reload_completed == 0)
2884 {
2885 for (i = 0; i < sometimes_max; i++)
2886 regs_sometimes_live[i].live_length += n_insns;
2887
2888 finish_sometimes_live (regs_sometimes_live, sometimes_max);
2889 }
2890 free_pending_lists ();
2891 goto ret;
2892 }
2893 #endif
2894
2895 /* Scan all the insns to be scheduled, removing NOTE insns
2896 and register death notes.
2897 Line number NOTE insns end up in NOTE_LIST.
2898 Register death notes end up in DEAD_NOTES.
2899
2900 Recreate the register life information for the end of this basic
2901 block. */
2902
2903 if (reload_completed == 0)
2904 {
2905 COPY_REG_SET (bb_live_regs, basic_block_live_at_start[b]);
2906 CLEAR_REG_SET (bb_dead_regs);
2907
2908 if (b == 0)
2909 {
2910 /* This is the first block in the function. There may be insns
2911 before head that we can't schedule. We still need to examine
2912 them though for accurate register lifetime analysis. */
2913
2914 /* We don't want to remove any REG_DEAD notes as the code below
2915 does. */
2916
2917 for (insn = basic_block_head[b]; insn != head;
2918 insn = NEXT_INSN (insn))
2919 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i')
2920 {
2921 /* See if the register gets born here. */
2922 /* We must check for registers being born before we check for
2923 registers dying. It is possible for a register to be born
2924 and die in the same insn, e.g. reading from a volatile
2925 memory location into an otherwise unused register. Such
2926 a register must be marked as dead after this insn. */
2927 if (GET_CODE (PATTERN (insn)) == SET
2928 || GET_CODE (PATTERN (insn)) == CLOBBER)
2929 sched_note_set (b, PATTERN (insn), 0);
2930 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
2931 {
2932 int j;
2933 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2934 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
2935 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
2936 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2937
2938 /* ??? This code is obsolete and should be deleted. It
2939 is harmless though, so we will leave it in for now. */
2940 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
2941 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
2942 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
2943 }
2944
2945 /* Each call clobbers (makes live) all call-clobbered regs
2946 that are not global or fixed. Note that the function-value
2947 reg is a call_clobbered reg. */
2948
2949 if (GET_CODE (insn) == CALL_INSN)
2950 {
2951 int j;
2952 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
2953 if (call_used_regs[j] && ! global_regs[j]
2954 && ! fixed_regs[j])
2955 {
2956 SET_REGNO_REG_SET (bb_live_regs, j);
2957 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
2958 }
2959 }
2960
2961 for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
2962 {
2963 if ((REG_NOTE_KIND (link) == REG_DEAD
2964 || REG_NOTE_KIND (link) == REG_UNUSED)
2965 /* Verify that the REG_NOTE has a valid value. */
2966 && GET_CODE (XEXP (link, 0)) == REG)
2967 {
2968 register int regno = REGNO (XEXP (link, 0));
2969
2970 if (regno < FIRST_PSEUDO_REGISTER)
2971 {
2972 int j = HARD_REGNO_NREGS (regno,
2973 GET_MODE (XEXP (link, 0)));
2974 while (--j >= 0)
2975 {
2976 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
2977 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
2978 }
2979 }
2980 else
2981 {
2982 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
2983 SET_REGNO_REG_SET (bb_dead_regs, regno);
2984 }
2985 }
2986 }
2987 }
2988 }
2989 }
2990
2991 /* If debugging information is being produced, keep track of the line
2992 number notes for each insn. */
2993 if (write_symbols != NO_DEBUG)
2994 {
2995 /* We must use the true line number for the first insn in the block
2996 that was computed and saved at the start of this pass. We can't
2997 use the current line number, because scheduling of the previous
2998 block may have changed the current line number. */
2999 rtx line = line_note_head[b];
3000
3001 for (insn = basic_block_head[b];
3002 insn != next_tail;
3003 insn = NEXT_INSN (insn))
3004 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3005 line = insn;
3006 else
3007 LINE_NOTE (insn) = line;
3008 }
3009
3010 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3011 {
3012 rtx prev, next, link;
3013
3014 /* Farm out notes. This is needed to keep the debugger from
3015 getting completely deranged. */
3016 if (GET_CODE (insn) == NOTE)
3017 {
3018 prev = insn;
3019 insn = unlink_notes (insn, next_tail);
3020 if (prev == tail)
3021 abort ();
3022 if (prev == head)
3023 abort ();
3024 if (insn == next_tail)
3025 abort ();
3026 }
3027
3028 if (reload_completed == 0
3029 && GET_RTX_CLASS (GET_CODE (insn)) == 'i')
3030 {
3031 /* See if the register gets born here. */
3032 /* We must check for registers being born before we check for
3033 registers dying. It is possible for a register to be born and
3034 die in the same insn, e.g. reading from a volatile memory
3035 location into an otherwise unused register. Such a register
3036 must be marked as dead after this insn. */
3037 if (GET_CODE (PATTERN (insn)) == SET
3038 || GET_CODE (PATTERN (insn)) == CLOBBER)
3039 sched_note_set (b, PATTERN (insn), 0);
3040 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3041 {
3042 int j;
3043 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3044 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3045 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3046 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3047
3048 /* ??? This code is obsolete and should be deleted. It
3049 is harmless though, so we will leave it in for now. */
3050 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3051 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == USE)
3052 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 0);
3053 }
3054
3055 /* Each call clobbers (makes live) all call-clobbered regs that are
3056 not global or fixed. Note that the function-value reg is a
3057 call_clobbered reg. */
3058
3059 if (GET_CODE (insn) == CALL_INSN)
3060 {
3061 int j;
3062 for (j = 0; j < FIRST_PSEUDO_REGISTER; j++)
3063 if (call_used_regs[j] && ! global_regs[j]
3064 && ! fixed_regs[j])
3065 {
3066 SET_REGNO_REG_SET (bb_live_regs, j);
3067 CLEAR_REGNO_REG_SET (bb_dead_regs, j);
3068 }
3069 }
3070
3071 /* Need to know what registers this insn kills. */
3072 for (prev = 0, link = REG_NOTES (insn); link; link = next)
3073 {
3074 next = XEXP (link, 1);
3075 if ((REG_NOTE_KIND (link) == REG_DEAD
3076 || REG_NOTE_KIND (link) == REG_UNUSED)
3077 /* Verify that the REG_NOTE has a valid value. */
3078 && GET_CODE (XEXP (link, 0)) == REG)
3079 {
3080 register int regno = REGNO (XEXP (link, 0));
3081
3082 /* Only unlink REG_DEAD notes; leave REG_UNUSED notes
3083 alone. */
3084 if (REG_NOTE_KIND (link) == REG_DEAD)
3085 {
3086 if (prev)
3087 XEXP (prev, 1) = next;
3088 else
3089 REG_NOTES (insn) = next;
3090 XEXP (link, 1) = dead_notes;
3091 dead_notes = link;
3092 }
3093 else
3094 prev = link;
3095
3096 if (regno < FIRST_PSEUDO_REGISTER)
3097 {
3098 int j = HARD_REGNO_NREGS (regno,
3099 GET_MODE (XEXP (link, 0)));
3100 while (--j >= 0)
3101 {
3102 CLEAR_REGNO_REG_SET (bb_live_regs, regno + j);
3103 SET_REGNO_REG_SET (bb_dead_regs, regno + j);
3104 }
3105 }
3106 else
3107 {
3108 CLEAR_REGNO_REG_SET (bb_live_regs, regno);
3109 SET_REGNO_REG_SET (bb_dead_regs, regno);
3110 }
3111 }
3112 else
3113 prev = link;
3114 }
3115 }
3116 }
3117
3118 if (reload_completed == 0)
3119 {
3120 /* Keep track of register lives. */
3121 old_live_regs = ALLOCA_REG_SET ();
3122 regs_sometimes_live
3123 = (struct sometimes *) alloca (max_regno * sizeof (struct sometimes));
3124 sometimes_max = 0;
3125
3126 /* Start with registers live at end. */
3127 COPY_REG_SET (old_live_regs, bb_live_regs);
3128 EXECUTE_IF_SET_IN_REG_SET (bb_live_regs, 0, j,
3129 {
3130 sometimes_max
3131 = new_sometimes_live (regs_sometimes_live,
3132 j, sometimes_max);
3133 });
3134 }
3135
3136 SCHED_SORT (ready, n_ready, 1);
3137
3138 if (file)
3139 {
3140 fprintf (file, ";; ready list initially:\n;; ");
3141 for (i = 0; i < n_ready; i++)
3142 fprintf (file, "%d ", INSN_UID (ready[i]));
3143 fprintf (file, "\n\n");
3144
3145 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3146 if (INSN_PRIORITY (insn) > 0)
3147 fprintf (file, ";; insn[%4d]: priority = %4d, ref_count = %4d\n",
3148 INSN_UID (insn), INSN_PRIORITY (insn),
3149 INSN_REF_COUNT (insn));
3150 }
3151
3152 /* Now HEAD and TAIL are going to become disconnected
3153 entirely from the insn chain. */
3154 tail = 0;
3155
3156 /* Q_SIZE will always be zero here. */
3157 q_ptr = 0; clock = 0;
3158 bzero ((char *) insn_queue, sizeof (insn_queue));
3159
3160 /* Now, perform list scheduling. */
3161
3162 /* Where we start inserting insns is after TAIL. */
3163 last = next_tail;
3164
3165 new_needs = (NEXT_INSN (prev_head) == basic_block_head[b]
3166 ? NEED_HEAD : NEED_NOTHING);
3167 if (PREV_INSN (next_tail) == basic_block_end[b])
3168 new_needs |= NEED_TAIL;
3169
3170 new_ready = n_ready;
3171 while (sched_n_insns < n_insns)
3172 {
3173 q_ptr = NEXT_Q (q_ptr); clock++;
3174
3175 /* Add all pending insns that can be scheduled without stalls to the
3176 ready list. */
3177 for (insn = insn_queue[q_ptr]; insn; insn = NEXT_INSN (insn))
3178 {
3179 if (file)
3180 fprintf (file, ";; launching %d before %d with no stalls at T-%d\n",
3181 INSN_UID (insn), INSN_UID (last), clock);
3182 ready[new_ready++] = insn;
3183 q_size -= 1;
3184 }
3185 insn_queue[q_ptr] = 0;
3186
3187 /* If there are no ready insns, stall until one is ready and add all
3188 of the pending insns at that point to the ready list. */
3189 if (new_ready == 0)
3190 {
3191 register int stalls;
3192
3193 for (stalls = 1; stalls < INSN_QUEUE_SIZE; stalls++)
3194 if ((insn = insn_queue[NEXT_Q_AFTER (q_ptr, stalls)]))
3195 {
3196 for (; insn; insn = NEXT_INSN (insn))
3197 {
3198 if (file)
3199 fprintf (file, ";; launching %d before %d with %d stalls at T-%d\n",
3200 INSN_UID (insn), INSN_UID (last), stalls, clock);
3201 ready[new_ready++] = insn;
3202 q_size -= 1;
3203 }
3204 insn_queue[NEXT_Q_AFTER (q_ptr, stalls)] = 0;
3205 break;
3206 }
3207
3208 q_ptr = NEXT_Q_AFTER (q_ptr, stalls); clock += stalls;
3209 }
3210
3211 /* There should be some instructions waiting to fire. */
3212 if (new_ready == 0)
3213 abort ();
3214
3215 if (file)
3216 {
3217 fprintf (file, ";; ready list at T-%d:", clock);
3218 for (i = 0; i < new_ready; i++)
3219 fprintf (file, " %d (%x)",
3220 INSN_UID (ready[i]), INSN_PRIORITY (ready[i]));
3221 }
3222
3223 /* Sort the ready list and choose the best insn to schedule. Select
3224 which insn should issue in this cycle and queue those that are
3225 blocked by function unit hazards.
3226
3227 N_READY holds the number of items that were scheduled the last time,
3228 minus the one instruction scheduled on the last loop iteration; it
3229 is not modified for any other reason in this loop. */
3230
3231 SCHED_SORT (ready, new_ready, n_ready);
3232 if (MAX_BLOCKAGE > 1)
3233 {
3234 new_ready = schedule_select (ready, new_ready, clock, file);
3235 if (new_ready == 0)
3236 {
3237 if (file)
3238 fprintf (file, "\n");
3239 /* We must set n_ready here, to ensure that sorting always
3240 occurs when we come back to the SCHED_SORT line above. */
3241 n_ready = 0;
3242 continue;
3243 }
3244 }
3245 n_ready = new_ready;
3246 last_scheduled_insn = insn = ready[0];
3247
3248 /* The first insn scheduled becomes the new tail. */
3249 if (tail == 0)
3250 tail = insn;
3251
3252 if (file)
3253 {
3254 fprintf (file, ", now");
3255 for (i = 0; i < n_ready; i++)
3256 fprintf (file, " %d", INSN_UID (ready[i]));
3257 fprintf (file, "\n");
3258 }
3259
3260 if (DONE_PRIORITY_P (insn))
3261 abort ();
3262
3263 if (reload_completed == 0)
3264 {
3265 /* Process this insn, and each insn linked to this one which must
3266 be immediately output after this insn. */
3267 do
3268 {
3269 /* First we kill registers set by this insn, and then we
3270 make registers used by this insn live. This is the opposite
3271 order used above because we are traversing the instructions
3272 backwards. */
3273
3274                   /* Strictly speaking, we should scan REG_UNUSED notes and make
3275                      every register mentioned there live; however, we will just
3276                      kill them again immediately below, so there doesn't seem to
3277                      be any reason to bother.  */
3278
3279 /* See if this is the last notice we must take of a register. */
3280 if (GET_CODE (PATTERN (insn)) == SET
3281 || GET_CODE (PATTERN (insn)) == CLOBBER)
3282 sched_note_set (b, PATTERN (insn), 1);
3283 else if (GET_CODE (PATTERN (insn)) == PARALLEL)
3284 {
3285 int j;
3286 for (j = XVECLEN (PATTERN (insn), 0) - 1; j >= 0; j--)
3287 if (GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == SET
3288 || GET_CODE (XVECEXP (PATTERN (insn), 0, j)) == CLOBBER)
3289 sched_note_set (b, XVECEXP (PATTERN (insn), 0, j), 1);
3290 }
3291
3292 /* This code keeps life analysis information up to date. */
3293 if (GET_CODE (insn) == CALL_INSN)
3294 {
3295 register struct sometimes *p;
3296
3297 /* A call kills all call used registers that are not
3298 global or fixed, except for those mentioned in the call
3299 pattern which will be made live again later. */
3300 for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
3301 if (call_used_regs[i] && ! global_regs[i]
3302 && ! fixed_regs[i])
3303 {
3304 CLEAR_REGNO_REG_SET (bb_live_regs, i);
3305 SET_REGNO_REG_SET (bb_dead_regs, i);
3306 }
3307
3308 /* Regs live at the time of a call instruction must not
3309 go in a register clobbered by calls. Record this for
3310                         all regs now live.  Note that regs which are born or
3311 die in a call do not cross a call, so this must be done
3312 after the killings (above) and before the births
3313 (below). */
3314 p = regs_sometimes_live;
3315 for (i = 0; i < sometimes_max; i++, p++)
3316 if (REGNO_REG_SET_P (bb_live_regs, p->regno))
3317 p->calls_crossed += 1;
3318 }
3319
3320 /* Make every register used live, and add REG_DEAD notes for
3321 registers which were not live before we started. */
3322 attach_deaths_insn (insn);
3323
3324 /* Find registers now made live by that instruction. */
3325 EXECUTE_IF_AND_COMPL_IN_REG_SET (bb_live_regs, old_live_regs, 0, i,
3326 {
3327 sometimes_max
3328 = new_sometimes_live (regs_sometimes_live,
3329 i, sometimes_max);
3330 });
3331 IOR_REG_SET (old_live_regs, bb_live_regs);
3332
3333 /* Count lengths of all regs we are worrying about now,
3334 and handle registers no longer live. */
3335
3336 for (i = 0; i < sometimes_max; i++)
3337 {
3338 register struct sometimes *p = &regs_sometimes_live[i];
3339 int regno = p->regno;
3340
3341 p->live_length += 1;
3342
3343 if (!REGNO_REG_SET_P (bb_live_regs, p->regno))
3344 {
3345 /* This is the end of one of this register's lifetime
3346 segments. Save the lifetime info collected so far,
3347 and clear its bit in the old_live_regs entry. */
3348 sched_reg_live_length[regno] += p->live_length;
3349 sched_reg_n_calls_crossed[regno] += p->calls_crossed;
3350 CLEAR_REGNO_REG_SET (old_live_regs, p->regno);
3351
3352 /* Delete the reg_sometimes_live entry for this reg by
3353 copying the last entry over top of it. */
3354 *p = regs_sometimes_live[--sometimes_max];
3355 /* ...and decrement i so that this newly copied entry
3356 will be processed. */
3357 i--;
3358 }
3359 }
3360
3361 link = insn;
3362 insn = PREV_INSN (insn);
3363 }
3364 while (SCHED_GROUP_P (link));
3365
3366 /* Set INSN back to the insn we are scheduling now. */
3367 insn = ready[0];
3368 }
3369
3370 /* Schedule INSN. Remove it from the ready list. */
3371 ready += 1;
3372 n_ready -= 1;
3373
3374 sched_n_insns += 1;
3375 NEXT_INSN (insn) = last;
3376 PREV_INSN (last) = insn;
3377
3378 /* Everything that precedes INSN now either becomes "ready", if
3379 it can execute immediately before INSN, or "pending", if
3380 there must be a delay. Give INSN high enough priority that
3381 at least one (maybe more) reg-killing insns can be launched
3382 ahead of all others. Mark INSN as scheduled by changing its
3383 priority to -1. */
3384 INSN_PRIORITY (insn) = LAUNCH_PRIORITY;
3385 new_ready = schedule_insn (insn, ready, n_ready, clock);
3386 INSN_PRIORITY (insn) = DONE_PRIORITY;
3387
3388 /* Schedule all prior insns that must not be moved. */
3389 if (SCHED_GROUP_P (insn))
3390 {
3391 /* Disable these insns from being launched, in case one of the
3392 insns in the group has a dependency on an earlier one. */
3393 link = insn;
3394 while (SCHED_GROUP_P (link))
3395 {
3396 /* Disable these insns from being launched by anybody. */
3397 link = PREV_INSN (link);
3398 INSN_REF_COUNT (link) = 0;
3399 }
3400
3401 /* Now handle each group insn like the main insn was handled
3402 above. */
3403 link = insn;
3404 while (SCHED_GROUP_P (link))
3405 {
3406 link = PREV_INSN (link);
3407
3408 sched_n_insns += 1;
3409
3410 /* ??? Why don't we set LAUNCH_PRIORITY here? */
3411 new_ready = schedule_insn (link, ready, new_ready, clock);
3412 INSN_PRIORITY (link) = DONE_PRIORITY;
3413 }
3414 }
3415
3416 /* Put back NOTE_INSN_SETJMP,
3417 NOTE_INSN_{LOOP,EHREGION}_{BEGIN,END} notes. */
3418
3419       /* To prime the loop, we need to handle INSN and all the insns in the
3420 sched group. */
3421 last = NEXT_INSN (insn);
3422 do
3423 {
3424 insn = PREV_INSN (last);
3425
3426 /* Maintain a valid chain so emit_note_before works.
3427 This is necessary because PREV_INSN (insn) isn't valid
3428 (if ! SCHED_GROUP_P) and if it points to an insn already
3429 scheduled, a circularity will result. */
3430 if (! SCHED_GROUP_P (insn))
3431 {
3432 NEXT_INSN (prev_head) = insn;
3433 PREV_INSN (insn) = prev_head;
3434 }
3435
3436 last = reemit_notes (insn, insn);
3437 }
3438 while (SCHED_GROUP_P (insn));
3439 }
3440 if (q_size != 0)
3441 abort ();
3442
3443 if (reload_completed == 0)
3444 finish_sometimes_live (regs_sometimes_live, sometimes_max);
3445
3446   /* HEAD is now the first insn in the chain of insns that
3447      have been scheduled by the loop above.
3448 TAIL is the last of those insns. */
3449 head = last;
3450
3451 /* NOTE_LIST is the end of a chain of notes previously found
3452 among the insns. Insert them at the beginning of the insns. */
3453 if (note_list != 0)
3454 {
3455 rtx note_head = note_list;
3456 while (PREV_INSN (note_head))
3457 note_head = PREV_INSN (note_head);
3458
3459 PREV_INSN (head) = note_list;
3460 NEXT_INSN (note_list) = head;
3461 head = note_head;
3462 }
3463
3464 /* There should be no REG_DEAD notes leftover at the end.
3465 In practice, this can occur as the result of bugs in flow, combine.c,
3466 and/or sched.c. The values of the REG_DEAD notes remaining are
3467 meaningless, because dead_notes is just used as a free list. */
3468 #if 1
3469 if (dead_notes != 0)
3470 abort ();
3471 #endif
3472
3473 if (new_needs & NEED_HEAD)
3474 basic_block_head[b] = head;
3475 PREV_INSN (head) = prev_head;
3476 NEXT_INSN (prev_head) = head;
3477
3478 if (new_needs & NEED_TAIL)
3479 basic_block_end[b] = tail;
3480 NEXT_INSN (tail) = next_tail;
3481 PREV_INSN (next_tail) = tail;
3482
3483 /* Restore the line-number notes of each insn. */
3484 if (write_symbols != NO_DEBUG)
3485 {
3486 rtx line, note, prev, new;
3487 int notes = 0;
3488
3489 head = basic_block_head[b];
3490 next_tail = NEXT_INSN (basic_block_end[b]);
3491
3492 /* Determine the current line-number. We want to know the current
3493 line number of the first insn of the block here, in case it is
3494 different from the true line number that was saved earlier. If
3495 different, then we need a line number note before the first insn
3496 of this block. If it happens to be the same, then we don't want to
3497 emit another line number note here. */
3498 for (line = head; line; line = PREV_INSN (line))
3499 if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
3500 break;
3501
3502 /* Walk the insns keeping track of the current line-number and inserting
3503 the line-number notes as needed. */
3504 for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
3505 if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
3506 line = insn;
3507 /* This used to emit line number notes before every non-deleted note.
3508 However, this confuses a debugger, because line notes not separated
3509 by real instructions all end up at the same address. I can find no
3510 use for line number notes before other notes, so none are emitted. */
3511 else if (GET_CODE (insn) != NOTE
3512 && (note = LINE_NOTE (insn)) != 0
3513 && note != line
3514 && (line == 0
3515 || NOTE_LINE_NUMBER (note) != NOTE_LINE_NUMBER (line)
3516 || NOTE_SOURCE_FILE (note) != NOTE_SOURCE_FILE (line)))
3517 {
3518 line = note;
3519 prev = PREV_INSN (insn);
3520 if (LINE_NOTE (note))
3521 {
3522 /* Re-use the original line-number note. */
3523 LINE_NOTE (note) = 0;
3524 PREV_INSN (note) = prev;
3525 NEXT_INSN (prev) = note;
3526 PREV_INSN (insn) = note;
3527 NEXT_INSN (note) = insn;
3528 }
3529 else
3530 {
3531 notes++;
3532 new = emit_note_after (NOTE_LINE_NUMBER (note), prev);
3533 NOTE_SOURCE_FILE (new) = NOTE_SOURCE_FILE (note);
3534 RTX_INTEGRATED_P (new) = RTX_INTEGRATED_P (note);
3535 }
3536 }
3537 if (file && notes)
3538 fprintf (file, ";; added %d line-number notes\n", notes);
3539 }
3540
3541 if (file)
3542 {
3543 fprintf (file, ";; total time = %d\n;; new basic block head = %d\n;; new basic block end = %d\n\n",
3544 clock, INSN_UID (basic_block_head[b]), INSN_UID (basic_block_end[b]));
3545 }
3546
3547 /* Yow! We're done! */
3548 free_pending_lists ();
3549
3550 ret:
3551 FREE_REG_SET (reg_pending_sets);
3552 FREE_REG_SET (old_live_regs);
3553
3554 return;
3555 }
3556 \f
3557 /* Subroutine of split_hard_reg_notes. Searches X for any reference to
3558 REGNO, returning the rtx of the reference found if any. Otherwise,
3559 returns 0. */
3560
3561 static rtx
3562 regno_use_in (regno, x)
3563 int regno;
3564 rtx x;
3565 {
3566 register char *fmt;
3567 int i, j;
3568 rtx tem;
3569
3570 if (GET_CODE (x) == REG && REGNO (x) == regno)
3571 return x;
3572
3573 fmt = GET_RTX_FORMAT (GET_CODE (x));
3574 for (i = GET_RTX_LENGTH (GET_CODE (x)) - 1; i >= 0; i--)
3575 {
3576 if (fmt[i] == 'e')
3577 {
3578 if ((tem = regno_use_in (regno, XEXP (x, i))))
3579 return tem;
3580 }
3581 else if (fmt[i] == 'E')
3582 for (j = XVECLEN (x, i) - 1; j >= 0; j--)
3583 if ((tem = regno_use_in (regno , XVECEXP (x, i, j))))
3584 return tem;
3585 }
3586
3587 return 0;
3588 }
3589
3590 /* Subroutine of update_flow_info. Determines whether any new REG_NOTEs are
3591 needed for the hard register mentioned in the note. This can happen
3592 if the reference to the hard register in the original insn was split into
3593 several smaller hard register references in the split insns. */
3594
3595 static void
3596 split_hard_reg_notes (note, first, last, orig_insn)
3597 rtx note, first, last, orig_insn;
3598 {
3599 rtx reg, temp, link;
3600 int n_regs, i, new_reg;
3601 rtx insn;
3602
3603 /* Assume that this is a REG_DEAD note. */
3604 if (REG_NOTE_KIND (note) != REG_DEAD)
3605 abort ();
3606
3607 reg = XEXP (note, 0);
3608
3609 n_regs = HARD_REGNO_NREGS (REGNO (reg), GET_MODE (reg));
3610
3611 for (i = 0; i < n_regs; i++)
3612 {
3613 new_reg = REGNO (reg) + i;
3614
3615 /* Check for references to new_reg in the split insns. */
3616 for (insn = last; ; insn = PREV_INSN (insn))
3617 {
3618 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3619 && (temp = regno_use_in (new_reg, PATTERN (insn))))
3620 {
3621 /* Create a new reg dead note here. */
3622 link = rtx_alloc (EXPR_LIST);
3623 PUT_REG_NOTE_KIND (link, REG_DEAD);
3624 XEXP (link, 0) = temp;
3625 XEXP (link, 1) = REG_NOTES (insn);
3626 REG_NOTES (insn) = link;
3627
3628               /* If this killed multiple hard registers, skip over the extras.  */
3629 i += HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) - 1;
3630
3631 break;
3632 }
3633 /* It isn't mentioned anywhere, so no new reg note is needed for
3634 this register. */
3635 if (insn == first)
3636 break;
3637 }
3638 }
3639 }
3640
3641 /* Subroutine of update_flow_info. Determines whether a SET or CLOBBER in an
3642 insn created by splitting needs a REG_DEAD or REG_UNUSED note added. */
3643
3644 static void
3645 new_insn_dead_notes (pat, insn, last, orig_insn)
3646 rtx pat, insn, last, orig_insn;
3647 {
3648 rtx dest, tem, set;
3649
3650 /* PAT is either a CLOBBER or a SET here. */
3651 dest = XEXP (pat, 0);
3652
3653 while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
3654 || GET_CODE (dest) == STRICT_LOW_PART
3655 || GET_CODE (dest) == SIGN_EXTRACT)
3656 dest = XEXP (dest, 0);
3657
3658 if (GET_CODE (dest) == REG)
3659 {
3660 for (tem = last; tem != insn; tem = PREV_INSN (tem))
3661 {
3662 if (GET_RTX_CLASS (GET_CODE (tem)) == 'i'
3663 && reg_overlap_mentioned_p (dest, PATTERN (tem))
3664 && (set = single_set (tem)))
3665 {
3666 rtx tem_dest = SET_DEST (set);
3667
3668 while (GET_CODE (tem_dest) == ZERO_EXTRACT
3669 || GET_CODE (tem_dest) == SUBREG
3670 || GET_CODE (tem_dest) == STRICT_LOW_PART
3671 || GET_CODE (tem_dest) == SIGN_EXTRACT)
3672 tem_dest = XEXP (tem_dest, 0);
3673
3674 if (! rtx_equal_p (tem_dest, dest))
3675 {
3676 /* Use the same scheme as combine.c, don't put both REG_DEAD
3677 and REG_UNUSED notes on the same insn. */
3678 if (! find_regno_note (tem, REG_UNUSED, REGNO (dest))
3679 && ! find_regno_note (tem, REG_DEAD, REGNO (dest)))
3680 {
3681 rtx note = rtx_alloc (EXPR_LIST);
3682 PUT_REG_NOTE_KIND (note, REG_DEAD);
3683 XEXP (note, 0) = dest;
3684 XEXP (note, 1) = REG_NOTES (tem);
3685 REG_NOTES (tem) = note;
3686 }
3687 /* The reg only dies in one insn, the last one that uses
3688 it. */
3689 break;
3690 }
3691 else if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
3692 /* We found an instruction that both uses the register,
3693 and sets it, so no new REG_NOTE is needed for this set. */
3694 break;
3695 }
3696 }
3697 /* If this is a set, it must die somewhere, unless it is the dest of
3698 the original insn, and hence is live after the original insn. Abort
3699 if it isn't supposed to be live after the original insn.
3700
3701 If this is a clobber, then just add a REG_UNUSED note. */
3702 if (tem == insn)
3703 {
3704 int live_after_orig_insn = 0;
3705 rtx pattern = PATTERN (orig_insn);
3706 int i;
3707
3708 if (GET_CODE (pat) == CLOBBER)
3709 {
3710 rtx note = rtx_alloc (EXPR_LIST);
3711 PUT_REG_NOTE_KIND (note, REG_UNUSED);
3712 XEXP (note, 0) = dest;
3713 XEXP (note, 1) = REG_NOTES (insn);
3714 REG_NOTES (insn) = note;
3715 return;
3716 }
3717
3718 /* The original insn could have multiple sets, so search the
3719 insn for all sets. */
3720 if (GET_CODE (pattern) == SET)
3721 {
3722 if (reg_overlap_mentioned_p (dest, SET_DEST (pattern)))
3723 live_after_orig_insn = 1;
3724 }
3725 else if (GET_CODE (pattern) == PARALLEL)
3726 {
3727 for (i = 0; i < XVECLEN (pattern, 0); i++)
3728 if (GET_CODE (XVECEXP (pattern, 0, i)) == SET
3729 && reg_overlap_mentioned_p (dest,
3730 SET_DEST (XVECEXP (pattern,
3731 0, i))))
3732 live_after_orig_insn = 1;
3733 }
3734
3735 if (! live_after_orig_insn)
3736 abort ();
3737 }
3738 }
3739 }
3740
3741 /* Subroutine of update_flow_info. Update the value of reg_n_sets for all
3742 registers modified by X. INC is -1 if the containing insn is being deleted,
3743 and is 1 if the containing insn is a newly generated insn. */
3744
3745 static void
3746 update_n_sets (x, inc)
3747 rtx x;
3748 int inc;
3749 {
3750 rtx dest = SET_DEST (x);
3751
3752 while (GET_CODE (dest) == STRICT_LOW_PART || GET_CODE (dest) == SUBREG
3753 || GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SIGN_EXTRACT)
3754 dest = SUBREG_REG (dest);
3755
3756 if (GET_CODE (dest) == REG)
3757 {
3758 int regno = REGNO (dest);
3759
3760 if (regno < FIRST_PSEUDO_REGISTER)
3761 {
3762 register int i;
3763 int endregno = regno + HARD_REGNO_NREGS (regno, GET_MODE (dest));
3764
3765 for (i = regno; i < endregno; i++)
3766 REG_N_SETS (i) += inc;
3767 }
3768 else
3769 REG_N_SETS (regno) += inc;
3770 }
3771 }
3772
3773 /* Updates all flow-analysis related quantities (including REG_NOTES) for
3774 the insns from FIRST to LAST inclusive that were created by splitting
3775 ORIG_INSN. NOTES are the original REG_NOTES. */
3776
3777 static void
3778 update_flow_info (notes, first, last, orig_insn)
3779 rtx notes;
3780 rtx first, last;
3781 rtx orig_insn;
3782 {
3783 rtx insn, note;
3784 rtx next;
3785 rtx orig_dest, temp;
3786 rtx set;
3787
3788 /* Get and save the destination set by the original insn. */
3789
3790 orig_dest = single_set (orig_insn);
3791 if (orig_dest)
3792 orig_dest = SET_DEST (orig_dest);
3793
3794 /* Move REG_NOTES from the original insn to where they now belong. */
3795
3796 for (note = notes; note; note = next)
3797 {
3798 next = XEXP (note, 1);
3799 switch (REG_NOTE_KIND (note))
3800 {
3801 case REG_DEAD:
3802 case REG_UNUSED:
3803 /* Move these notes from the original insn to the last new insn where
3804 the register is now set. */
3805
3806 for (insn = last; ; insn = PREV_INSN (insn))
3807 {
3808 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3809 && reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
3810 {
3811 /* If this note refers to a multiple word hard register, it
3812 may have been split into several smaller hard register
3813 references, so handle it specially. */
3814 temp = XEXP (note, 0);
3815 if (REG_NOTE_KIND (note) == REG_DEAD
3816 && GET_CODE (temp) == REG
3817 && REGNO (temp) < FIRST_PSEUDO_REGISTER
3818 && HARD_REGNO_NREGS (REGNO (temp), GET_MODE (temp)) > 1)
3819 split_hard_reg_notes (note, first, last, orig_insn);
3820 else
3821 {
3822 XEXP (note, 1) = REG_NOTES (insn);
3823 REG_NOTES (insn) = note;
3824 }
3825
3826 /* Sometimes need to convert REG_UNUSED notes to REG_DEAD
3827 notes. */
3828 /* ??? This won't handle multiple word registers correctly,
3829 but should be good enough for now. */
3830 if (REG_NOTE_KIND (note) == REG_UNUSED
3831 && GET_CODE (XEXP (note, 0)) != SCRATCH
3832 && ! dead_or_set_p (insn, XEXP (note, 0)))
3833 PUT_REG_NOTE_KIND (note, REG_DEAD);
3834
3835 /* The reg only dies in one insn, the last one that uses
3836 it. */
3837 break;
3838 }
3839           /* It must die somewhere; fail if we couldn't find where it died.
3840
3841 If this is a REG_UNUSED note, then it must be a temporary
3842 register that was not needed by this instantiation of the
3843 pattern, so we can safely ignore it. */
3844 if (insn == first)
3845 {
3846               /* After reload, REG_DEAD notes sometimes come an
3847 instruction after the register actually dies. */
3848 if (reload_completed && REG_NOTE_KIND (note) == REG_DEAD)
3849 {
3850 XEXP (note, 1) = REG_NOTES (insn);
3851 REG_NOTES (insn) = note;
3852 break;
3853 }
3854
3855 if (REG_NOTE_KIND (note) != REG_UNUSED)
3856 abort ();
3857
3858 break;
3859 }
3860 }
3861 break;
3862
3863 case REG_WAS_0:
3864 /* If the insn that set the register to 0 was deleted, this
3865 note cannot be relied on any longer. The destination might
3866 even have been moved to memory.
3867 This was observed for SH4 with execute/920501-6.c compilation,
3868 -O2 -fomit-frame-pointer -finline-functions . */
3869 if (GET_CODE (XEXP (note, 0)) == NOTE
3870 || INSN_DELETED_P (XEXP (note, 0)))
3871 break;
3872 /* This note applies to the dest of the original insn. Find the
3873 first new insn that now has the same dest, and move the note
3874 there. */
3875
3876 if (! orig_dest)
3877 abort ();
3878
3879 for (insn = first; ; insn = NEXT_INSN (insn))
3880 {
3881 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3882 && (temp = single_set (insn))
3883 && rtx_equal_p (SET_DEST (temp), orig_dest))
3884 {
3885 XEXP (note, 1) = REG_NOTES (insn);
3886 REG_NOTES (insn) = note;
3887 /* The reg is only zero before one insn, the first that
3888 uses it. */
3889 break;
3890 }
3891 /* If this note refers to a multiple word hard
3892 register, it may have been split into several smaller
3893 hard register references. We could split the notes,
3894 but simply dropping them is good enough. */
3895 if (GET_CODE (orig_dest) == REG
3896 && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
3897 && HARD_REGNO_NREGS (REGNO (orig_dest),
3898 GET_MODE (orig_dest)) > 1)
3899 break;
3900 /* It must be set somewhere, fail if we couldn't find where it
3901 was set. */
3902 if (insn == last)
3903 abort ();
3904 }
3905 break;
3906
3907 case REG_EQUAL:
3908 case REG_EQUIV:
3909 /* A REG_EQUIV or REG_EQUAL note on an insn with more than one
3910 set is meaningless. Just drop the note. */
3911 if (! orig_dest)
3912 break;
3913
3914 case REG_NO_CONFLICT:
3915 /* These notes apply to the dest of the original insn. Find the last
3916 new insn that now has the same dest, and move the note there. */
3917
3918 if (! orig_dest)
3919 abort ();
3920
3921 for (insn = last; ; insn = PREV_INSN (insn))
3922 {
3923 if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
3924 && (temp = single_set (insn))
3925 && rtx_equal_p (SET_DEST (temp), orig_dest))
3926 {
3927 XEXP (note, 1) = REG_NOTES (insn);
		  REG_NOTES (insn) = note;
		  /* Only put this note on one of the new insns.  */
		  break;
		}

	      /* The original dest must still be set someplace.  Abort if we
		 couldn't find it.  */
	      if (insn == first)
		{
		  /* However, if this note refers to a multiple word hard
		     register, it may have been split into several smaller
		     hard register references.  We could split the notes,
		     but simply dropping them is good enough.  */
		  if (GET_CODE (orig_dest) == REG
		      && REGNO (orig_dest) < FIRST_PSEUDO_REGISTER
		      && HARD_REGNO_NREGS (REGNO (orig_dest),
					   GET_MODE (orig_dest)) > 1)
		    break;
		  /* Likewise for multi-word memory references.  */
		  if (GET_CODE (orig_dest) == MEM
		      && SIZE_FOR_MODE (orig_dest) > MOVE_MAX)
		    break;
		  abort ();
		}
	    }
	  break;

	case REG_LIBCALL:
	  /* Move a REG_LIBCALL note to the first insn created, and update
	     the corresponding REG_RETVAL note.  */
	  XEXP (note, 1) = REG_NOTES (first);
	  REG_NOTES (first) = note;

	  insn = XEXP (note, 0);
	  note = find_reg_note (insn, REG_RETVAL, NULL_RTX);
	  if (note)
	    XEXP (note, 0) = first;
	  break;

	case REG_EXEC_COUNT:
	  /* Move a REG_EXEC_COUNT note to the first insn created.  */
	  XEXP (note, 1) = REG_NOTES (first);
	  REG_NOTES (first) = note;
	  break;

	case REG_RETVAL:
	  /* Move a REG_RETVAL note to the last insn created, and update
	     the corresponding REG_LIBCALL note.  */
	  XEXP (note, 1) = REG_NOTES (last);
	  REG_NOTES (last) = note;

	  insn = XEXP (note, 0);
	  note = find_reg_note (insn, REG_LIBCALL, NULL_RTX);
	  if (note)
	    XEXP (note, 0) = last;
	  break;

	case REG_NONNEG:
	case REG_BR_PROB:
	  /* This should be moved to whichever instruction is a JUMP_INSN.  */

	  for (insn = last; ; insn = PREV_INSN (insn))
	    {
	      if (GET_CODE (insn) == JUMP_INSN)
		{
		  XEXP (note, 1) = REG_NOTES (insn);
		  REG_NOTES (insn) = note;
		  /* Only put this note on one of the new insns.  */
		  break;
		}
	      /* Fail if we couldn't find a JUMP_INSN.  */
	      if (insn == first)
		abort ();
	    }
	  break;

	case REG_INC:
	  /* reload sometimes leaves obsolete REG_INC notes around.  */
	  if (reload_completed)
	    break;
	  /* This should be moved to whichever instruction now has the
	     increment operation.  */
	  abort ();

	case REG_LABEL:
	  /* Should be moved to the new insn(s) which use the label.  */
	  for (insn = first; insn != NEXT_INSN (last); insn = NEXT_INSN (insn))
	    if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		&& reg_mentioned_p (XEXP (note, 0), PATTERN (insn)))
	      REG_NOTES (insn) = gen_rtx_EXPR_LIST (REG_LABEL,
						    XEXP (note, 0),
						    REG_NOTES (insn));
	  break;

	case REG_CC_SETTER:
	case REG_CC_USER:
	  /* These two notes will never appear until after reorg, so we don't
	     have to handle them here.  */
	default:
	  abort ();
	}
    }

  /* Each new insn created, except the last, has a new set.  If the destination
     is a register, then this reg is now live across several insns, whereas
     previously the dest reg was born and died within the same insn.  To
     reflect this, we now need a REG_DEAD note on the insn where this
     dest reg dies.

     Similarly, the new insns may have clobbers that need REG_UNUSED notes.  */

  for (insn = first; insn != last; insn = NEXT_INSN (insn))
    {
      rtx pat;
      int i;

      pat = PATTERN (insn);
      if (GET_CODE (pat) == SET || GET_CODE (pat) == CLOBBER)
	new_insn_dead_notes (pat, insn, last, orig_insn);
      else if (GET_CODE (pat) == PARALLEL)
	{
	  for (i = 0; i < XVECLEN (pat, 0); i++)
	    if (GET_CODE (XVECEXP (pat, 0, i)) == SET
		|| GET_CODE (XVECEXP (pat, 0, i)) == CLOBBER)
	      new_insn_dead_notes (XVECEXP (pat, 0, i), insn, last, orig_insn);
	}
    }

  /* If any insn, except the last, uses the register set by the last insn,
     then we need a new REG_DEAD note on that insn.  In this case, there
     would not have been a REG_DEAD note for this register in the original
     insn because it was used and set within one insn.  */

  set = single_set (last);
  if (set)
    {
      rtx dest = SET_DEST (set);

      while (GET_CODE (dest) == ZERO_EXTRACT || GET_CODE (dest) == SUBREG
	     || GET_CODE (dest) == STRICT_LOW_PART
	     || GET_CODE (dest) == SIGN_EXTRACT)
	dest = XEXP (dest, 0);

      if (GET_CODE (dest) == REG
	  /* Global registers are always live, so the code below does not
	     apply to them.  */
	  && (REGNO (dest) >= FIRST_PSEUDO_REGISTER
	      || ! global_regs[REGNO (dest)]))
	{
	  rtx stop_insn = PREV_INSN (first);

	  /* If the last insn uses the register that it is setting, then
	     we don't want to put a REG_DEAD note there.  Search backwards
	     to find the first insn that sets but does not use DEST.  */

	  insn = last;
	  if (reg_overlap_mentioned_p (dest, SET_SRC (set)))
	    {
	      for (insn = PREV_INSN (insn); insn != first;
		   insn = PREV_INSN (insn))
		{
		  if ((set = single_set (insn))
		      && reg_mentioned_p (dest, SET_DEST (set))
		      && ! reg_overlap_mentioned_p (dest, SET_SRC (set)))
		    break;
		}
	    }

	  /* Now find the first insn that uses but does not set DEST.  */

	  for (insn = PREV_INSN (insn); insn != stop_insn;
	       insn = PREV_INSN (insn))
	    {
	      if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		  && reg_mentioned_p (dest, PATTERN (insn))
		  && (set = single_set (insn)))
		{
		  rtx insn_dest = SET_DEST (set);

		  while (GET_CODE (insn_dest) == ZERO_EXTRACT
			 || GET_CODE (insn_dest) == SUBREG
			 || GET_CODE (insn_dest) == STRICT_LOW_PART
			 || GET_CODE (insn_dest) == SIGN_EXTRACT)
		    insn_dest = XEXP (insn_dest, 0);

		  if (insn_dest != dest)
		    {
		      note = rtx_alloc (EXPR_LIST);
		      PUT_REG_NOTE_KIND (note, REG_DEAD);
		      XEXP (note, 0) = dest;
		      XEXP (note, 1) = REG_NOTES (insn);
		      REG_NOTES (insn) = note;
		      /* The reg only dies in one insn, the last one
			 that uses it.  */
		      break;
		    }
		}
	    }
	}
    }

  /* If the original dest is modifying a multiple register target, and the
     original instruction was split such that the original dest is now set
     by two or more SUBREG sets, then the split insns no longer kill the
     destination of the original insn.

     In this case, if there exists an instruction in the same basic block,
     before the split insn, which uses the original dest, and this use is
     killed by the original insn, then we must remove the REG_DEAD note on
     this insn, because it is now superfluous.

     This does not apply when a hard register gets split, because the code
     knows how to handle overlapping hard registers properly.  */
  if (orig_dest && GET_CODE (orig_dest) == REG)
    {
      int found_orig_dest = 0;
      int found_split_dest = 0;

      for (insn = first; ; insn = NEXT_INSN (insn))
	{
	  rtx pat = PATTERN (insn);
	  int i = GET_CODE (pat) == PARALLEL ? XVECLEN (pat, 0) : 0;
	  set = pat;
	  for (;;)
	    {
	      if (GET_CODE (set) == SET)
		{
		  if (GET_CODE (SET_DEST (set)) == REG
		      && REGNO (SET_DEST (set)) == REGNO (orig_dest))
		    {
		      found_orig_dest = 1;
		      break;
		    }
		  else if (GET_CODE (SET_DEST (set)) == SUBREG
			   && SUBREG_REG (SET_DEST (set)) == orig_dest)
		    {
		      found_split_dest = 1;
		      break;
		    }
		}
	      if (--i < 0)
		break;
	      set = XVECEXP (pat, 0, i);
	    }

	  if (insn == last)
	    break;
	}

      if (found_split_dest)
	{
	  /* Search backwards from FIRST, looking for the first insn that uses
	     the original dest.  Stop if we pass a CODE_LABEL or a JUMP_INSN.
	     If we find an insn, and it has a REG_DEAD note, then delete the
	     note.  */

	  for (insn = first; insn; insn = PREV_INSN (insn))
	    {
	      if (GET_CODE (insn) == CODE_LABEL
		  || GET_CODE (insn) == JUMP_INSN)
		break;
	      else if (GET_RTX_CLASS (GET_CODE (insn)) == 'i'
		       && reg_mentioned_p (orig_dest, insn))
		{
		  note = find_regno_note (insn, REG_DEAD, REGNO (orig_dest));
		  if (note)
		    remove_note (insn, note);
		}
	    }
	}
      else if (! found_orig_dest)
	{
	  /* This should never happen.  */
	  abort ();
	}
    }

  /* Update reg_n_sets.  This is necessary to prevent local alloc from
     converting REG_EQUAL notes to REG_EQUIV when splitting has modified
     a reg from set once to set multiple times.  */

  {
    rtx x = PATTERN (orig_insn);
    RTX_CODE code = GET_CODE (x);

    if (code == SET || code == CLOBBER)
      update_n_sets (x, -1);
    else if (code == PARALLEL)
      {
	int i;
	for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	  {
	    code = GET_CODE (XVECEXP (x, 0, i));
	    if (code == SET || code == CLOBBER)
	      update_n_sets (XVECEXP (x, 0, i), -1);
	  }
      }

    for (insn = first; ; insn = NEXT_INSN (insn))
      {
	x = PATTERN (insn);
	code = GET_CODE (x);

	if (code == SET || code == CLOBBER)
	  update_n_sets (x, 1);
	else if (code == PARALLEL)
	  {
	    int i;
	    for (i = XVECLEN (x, 0) - 1; i >= 0; i--)
	      {
		code = GET_CODE (XVECEXP (x, 0, i));
		if (code == SET || code == CLOBBER)
		  update_n_sets (XVECEXP (x, 0, i), 1);
	      }
	  }

	if (insn == last)
	  break;
      }
  }
}

/* The one entry point in this file.  DUMP_FILE is the dump file for
   this pass.  */

void
schedule_insns (dump_file)
     FILE *dump_file;
{
  int max_uid = MAX_INSNS_PER_SPLIT * (get_max_uid () + 1);
  int b;
  rtx insn;

  /* Taking care of this degenerate case makes the rest of
     this code simpler.  */
  if (n_basic_blocks == 0)
    return;

  /* Create an insn here so that we can hang dependencies off of it later.  */
  sched_before_next_call
    = gen_rtx_INSN (VOIDmode, 0, NULL_RTX, NULL_RTX,
		    NULL_RTX, 0, NULL_RTX, NULL_RTX);

  /* Initialize the unused_*_lists.  We can't use the ones left over from
     the previous function, because gcc has freed that memory.  We can use
     the ones left over from the first sched pass in the second pass however,
     so only clear them on the first sched pass.  The first pass is before
     reload if flag_schedule_insns is set, otherwise it is afterwards.  */

  if (reload_completed == 0 || ! flag_schedule_insns)
    {
      unused_insn_list = 0;
      unused_expr_list = 0;
    }

  /* We create no insns here, only reorder them, so we
     remember how far we can cut back the stack on exit.  */

  /* Allocate data for this pass.  See comments, above,
     for what these vectors do.  */
  insn_luid = (int *) alloca (max_uid * sizeof (int));
  insn_priority = (int *) alloca (max_uid * sizeof (int));
  insn_tick = (int *) alloca (max_uid * sizeof (int));
  insn_costs = (short *) alloca (max_uid * sizeof (short));
  insn_units = (short *) alloca (max_uid * sizeof (short));
  insn_blockage = (unsigned int *) alloca (max_uid * sizeof (unsigned int));
  insn_ref_count = (int *) alloca (max_uid * sizeof (int));

  if (reload_completed == 0)
    {
      sched_reg_n_calls_crossed = (int *) alloca (max_regno * sizeof (int));
      sched_reg_live_length = (int *) alloca (max_regno * sizeof (int));
      bb_dead_regs = ALLOCA_REG_SET ();
      bb_live_regs = ALLOCA_REG_SET ();
      bzero ((char *) sched_reg_n_calls_crossed, max_regno * sizeof (int));
      bzero ((char *) sched_reg_live_length, max_regno * sizeof (int));
    }
  else
    {
      sched_reg_n_calls_crossed = 0;
      sched_reg_live_length = 0;
      bb_dead_regs = 0;
      bb_live_regs = 0;
    }
  init_alias_analysis ();

  if (write_symbols != NO_DEBUG)
    {
      rtx line;

      line_note = (rtx *) alloca (max_uid * sizeof (rtx));
      bzero ((char *) line_note, max_uid * sizeof (rtx));
      line_note_head = (rtx *) alloca (n_basic_blocks * sizeof (rtx));
      bzero ((char *) line_note_head, n_basic_blocks * sizeof (rtx));

      /* Determine the line-number at the start of each basic block.
	 This must be computed and saved now, because after a basic block's
	 predecessor has been scheduled, it is impossible to accurately
	 determine the correct line number for the first insn of the block.  */

      for (b = 0; b < n_basic_blocks; b++)
	for (line = basic_block_head[b]; line; line = PREV_INSN (line))
	  if (GET_CODE (line) == NOTE && NOTE_LINE_NUMBER (line) > 0)
	    {
	      line_note_head[b] = line;
	      break;
	    }
    }

  bzero ((char *) insn_luid, max_uid * sizeof (int));
  bzero ((char *) insn_priority, max_uid * sizeof (int));
  bzero ((char *) insn_tick, max_uid * sizeof (int));
  bzero ((char *) insn_costs, max_uid * sizeof (short));
  bzero ((char *) insn_units, max_uid * sizeof (short));
  bzero ((char *) insn_blockage, max_uid * sizeof (unsigned int));
  bzero ((char *) insn_ref_count, max_uid * sizeof (int));

  /* Schedule each basic block, block by block.  */

  /* ??? Add a NOTE after the last insn of the last basic block.  It is not
     known why this is done.  */
  /* ??? Perhaps it's done to ensure NEXT_TAIL in schedule_block is a
     valid insn.  */

  insn = basic_block_end[n_basic_blocks-1];
  if (NEXT_INSN (insn) == 0
      || (GET_CODE (insn) != NOTE
	  && GET_CODE (insn) != CODE_LABEL
	  /* Don't emit a NOTE if it would end up between an unconditional
	     jump and a BARRIER.  */
	  && ! (GET_CODE (insn) == JUMP_INSN
		&& GET_CODE (NEXT_INSN (insn)) == BARRIER)))
    emit_note_after (NOTE_INSN_DELETED, basic_block_end[n_basic_blocks-1]);

  for (b = 0; b < n_basic_blocks; b++)
    {
      rtx insn, next;

      note_list = 0;

      for (insn = basic_block_head[b]; ; insn = next)
	{
	  rtx prev;
	  rtx set;

	  /* Can't use `next_real_insn' because that
	     might go across CODE_LABELS and short-out basic blocks.  */
	  next = NEXT_INSN (insn);
	  if (GET_CODE (insn) != INSN)
	    {
	      if (insn == basic_block_end[b])
		break;

	      continue;
	    }

	  /* Don't split no-op move insns.  These should silently disappear
	     later in final.  Splitting such insns would break the code
	     that handles REG_NO_CONFLICT blocks.  */
	  set = single_set (insn);
	  if (set && rtx_equal_p (SET_SRC (set), SET_DEST (set)))
	    {
	      if (insn == basic_block_end[b])
		break;

	      /* Nops get in the way while scheduling, so delete them now if
		 register allocation has already been done.  It is too risky
		 to try to do this before register allocation, and there are
		 unlikely to be very many nops then anyways.  */
	      if (reload_completed)
		{
		  PUT_CODE (insn, NOTE);
		  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
		  NOTE_SOURCE_FILE (insn) = 0;
		}

	      continue;
	    }

	  /* Split insns here to get max fine-grain parallelism.  */
	  prev = PREV_INSN (insn);
	  /* It is probably not worthwhile to try to split again in the
	     second pass.  However, if flag_schedule_insns is not set,
	     the first and only (if any) scheduling pass is after reload.  */
	  if (reload_completed == 0 || ! flag_schedule_insns)
	    {
	      rtx last, first = PREV_INSN (insn);
	      rtx notes = REG_NOTES (insn);

	      last = try_split (PATTERN (insn), insn, 1);
	      if (last != insn)
		{
		  /* try_split returns the NOTE that INSN became.  */
		  first = NEXT_INSN (first);
		  update_flow_info (notes, first, last, insn);

		  PUT_CODE (insn, NOTE);
		  NOTE_SOURCE_FILE (insn) = 0;
		  NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
		  if (insn == basic_block_head[b])
		    basic_block_head[b] = first;
		  if (insn == basic_block_end[b])
		    {
		      basic_block_end[b] = last;
		      break;
		    }
		}
	    }

	  if (insn == basic_block_end[b])
	    break;
	}

      schedule_block (b, dump_file);

#ifdef USE_C_ALLOCA
      alloca (0);
#endif
    }

  /* Reposition the prologue and epilogue notes in case we moved the
     prologue/epilogue insns.  */
  if (reload_completed)
    reposition_prologue_and_epilogue_notes (get_insns ());

  if (write_symbols != NO_DEBUG)
    {
      rtx line = 0;
      rtx insn = get_insns ();
      int active_insn = 0;
      int notes = 0;

      /* Walk the insns deleting redundant line-number notes.  Many of these
	 are already present.  The remainder tend to occur at basic
	 block boundaries.  */
      for (insn = get_last_insn (); insn; insn = PREV_INSN (insn))
	if (GET_CODE (insn) == NOTE && NOTE_LINE_NUMBER (insn) > 0)
	  {
	    /* If there are no active insns following, INSN is redundant.  */
	    if (active_insn == 0)
	      {
		notes++;
		NOTE_SOURCE_FILE (insn) = 0;
		NOTE_LINE_NUMBER (insn) = NOTE_INSN_DELETED;
	      }
	    /* If the line number is unchanged, LINE is redundant.  */
	    else if (line
		     && NOTE_LINE_NUMBER (line) == NOTE_LINE_NUMBER (insn)
		     && NOTE_SOURCE_FILE (line) == NOTE_SOURCE_FILE (insn))
	      {
		notes++;
		NOTE_SOURCE_FILE (line) = 0;
		NOTE_LINE_NUMBER (line) = NOTE_INSN_DELETED;
		line = insn;
	      }
	    else
	      line = insn;
	    active_insn = 0;
	  }
	else if (! ((GET_CODE (insn) == NOTE
		     && NOTE_LINE_NUMBER (insn) == NOTE_INSN_DELETED)
		    || (GET_CODE (insn) == INSN
			&& (GET_CODE (PATTERN (insn)) == USE
			    || GET_CODE (PATTERN (insn)) == CLOBBER))))
	  active_insn++;

      if (dump_file && notes)
	fprintf (dump_file, ";; deleted %d line-number notes\n", notes);
    }

  if (reload_completed == 0)
    {
      int regno;
      for (regno = 0; regno < max_regno; regno++)
	if (sched_reg_live_length[regno])
	  {
	    if (dump_file)
	      {
		if (REG_LIVE_LENGTH (regno) > sched_reg_live_length[regno])
		  fprintf (dump_file,
			   ";; register %d life shortened from %d to %d\n",
			   regno, REG_LIVE_LENGTH (regno),
			   sched_reg_live_length[regno]);
		/* Negative values are special; don't overwrite the current
		   reg_live_length value if it is negative.  */
		else if (REG_LIVE_LENGTH (regno) < sched_reg_live_length[regno]
			 && REG_LIVE_LENGTH (regno) >= 0)
		  fprintf (dump_file,
			   ";; register %d life extended from %d to %d\n",
			   regno, REG_LIVE_LENGTH (regno),
			   sched_reg_live_length[regno]);

		if (! REG_N_CALLS_CROSSED (regno)
		    && sched_reg_n_calls_crossed[regno])
		  fprintf (dump_file,
			   ";; register %d now crosses calls\n", regno);
		else if (REG_N_CALLS_CROSSED (regno)
			 && ! sched_reg_n_calls_crossed[regno]
			 && REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
		  fprintf (dump_file,
			   ";; register %d no longer crosses calls\n", regno);

	      }
	    /* Negative values are special; don't overwrite the current
	       reg_live_length value if it is negative.  */
	    if (REG_LIVE_LENGTH (regno) >= 0)
	      REG_LIVE_LENGTH (regno) = sched_reg_live_length[regno];

	    /* We can't change the value of reg_n_calls_crossed to zero for
	       pseudos which are live in more than one block.

	       This is because combine might have made an optimization which
	       invalidated basic_block_live_at_start and reg_n_calls_crossed,
	       but it does not update them.  If we update reg_n_calls_crossed
	       here, the two variables are now inconsistent, and this might
	       confuse the caller-save code into saving a register that doesn't
	       need to be saved.  This is only a problem when we zero calls
	       crossed for a pseudo live in multiple basic blocks.

	       Alternatively, we could try to correctly update basic block live
	       at start here in sched, but that seems complicated.  */
	    if (sched_reg_n_calls_crossed[regno]
		|| REG_BASIC_BLOCK (regno) != REG_BLOCK_GLOBAL)
	      REG_N_CALLS_CROSSED (regno) = sched_reg_n_calls_crossed[regno];
	  }
    }

  if (reload_completed == 0)
    {
      FREE_REG_SET (bb_dead_regs);
      FREE_REG_SET (bb_live_regs);
    }

}
#endif /* INSN_SCHEDULING */