change = line and re.match(r'^[-+!][^-]', line)
- # Top-level comment can not belong to function
+ # Top-level comment cannot belong to function
if re.match(r'^[-+! ]\/\*', line):
fn = None
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * Makefile.in: Mechanically replace "can not" with "cannot".
+ * alias.c: Likewise.
+ * builtins.c: Likewise.
+ * calls.c: Likewise.
+ * cgraph.c: Likewise.
+ * cgraph.h: Likewise.
+ * cgraphclones.c: Likewise.
+ * cgraphunit.c: Likewise.
+ * combine-stack-adj.c: Likewise.
+ * combine.c: Likewise.
+ * common/config/i386/i386-common.c: Likewise.
+ * config/aarch64/aarch64.c: Likewise.
+ * config/alpha/sync.md: Likewise.
+ * config/arc/arc.c: Likewise.
+ * config/arc/predicates.md: Likewise.
+ * config/arm/arm-c.c: Likewise.
+ * config/arm/arm.c: Likewise.
+ * config/arm/arm.h: Likewise.
+ * config/arm/arm.md: Likewise.
+ * config/arm/cortex-r4f.md: Likewise.
+ * config/csky/csky.c: Likewise.
+ * config/csky/csky.h: Likewise.
+ * config/darwin-f.c: Likewise.
+ * config/epiphany/epiphany.md: Likewise.
+ * config/i386/i386.c: Likewise.
+ * config/i386/sol2.h: Likewise.
+ * config/m68k/m68k.c: Likewise.
+ * config/mcore/mcore.h: Likewise.
+ * config/microblaze/microblaze.md: Likewise.
+ * config/mips/20kc.md: Likewise.
+ * config/mips/sb1.md: Likewise.
+ * config/nds32/nds32.c: Likewise.
+ * config/nds32/predicates.md: Likewise.
+ * config/pa/pa.c: Likewise.
+ * config/rs6000/e300c2c3.md: Likewise.
+ * config/rs6000/rs6000.c: Likewise.
+ * config/s390/s390.h: Likewise.
+ * config/sh/sh.c: Likewise.
+ * config/sh/sh.md: Likewise.
+ * config/spu/vmx2spu.h: Likewise.
+ * cprop.c: Likewise.
+ * dbxout.c: Likewise.
+ * df-scan.c: Likewise.
+ * doc/cfg.texi: Likewise.
+ * doc/extend.texi: Likewise.
+ * doc/fragments.texi: Likewise.
+ * doc/gty.texi: Likewise.
+ * doc/invoke.texi: Likewise.
+ * doc/lto.texi: Likewise.
+ * doc/md.texi: Likewise.
+ * doc/objc.texi: Likewise.
+ * doc/rtl.texi: Likewise.
+ * doc/tm.texi: Likewise.
+ * dse.c: Likewise.
+ * emit-rtl.c: Likewise.
+ * emit-rtl.h: Likewise.
+ * except.c: Likewise.
+ * expmed.c: Likewise.
+ * expr.c: Likewise.
+ * fold-const.c: Likewise.
+ * genautomata.c: Likewise.
+ * gimple-fold.c: Likewise.
+ * hard-reg-set.h: Likewise.
+ * ifcvt.c: Likewise.
+ * ipa-comdats.c: Likewise.
+ * ipa-cp.c: Likewise.
+ * ipa-devirt.c: Likewise.
+ * ipa-fnsummary.c: Likewise.
+ * ipa-icf.c: Likewise.
+ * ipa-inline-transform.c: Likewise.
+ * ipa-inline.c: Likewise.
+ * ipa-polymorphic-call.c: Likewise.
+ * ipa-profile.c: Likewise.
+ * ipa-prop.c: Likewise.
+ * ipa-pure-const.c: Likewise.
+ * ipa-reference.c: Likewise.
+ * ipa-split.c: Likewise.
+ * ipa-visibility.c: Likewise.
+ * ipa.c: Likewise.
+ * ira-build.c: Likewise.
+ * ira-color.c: Likewise.
+ * ira-conflicts.c: Likewise.
+ * ira-costs.c: Likewise.
+ * ira-int.h: Likewise.
+ * ira-lives.c: Likewise.
+ * ira.c: Likewise.
+ * ira.h: Likewise.
+ * loop-invariant.c: Likewise.
+ * loop-unroll.c: Likewise.
+ * lower-subreg.c: Likewise.
+ * lra-assigns.c: Likewise.
+ * lra-constraints.c: Likewise.
+ * lra-eliminations.c: Likewise.
+ * lra-lives.c: Likewise.
+ * lra-remat.c: Likewise.
+ * lra-spills.c: Likewise.
+ * lra.c: Likewise.
+ * lto-cgraph.c: Likewise.
+ * lto-streamer-out.c: Likewise.
+ * postreload-gcse.c: Likewise.
+ * predict.c: Likewise.
+ * profile-count.h: Likewise.
+ * profile.c: Likewise.
+ * recog.c: Likewise.
+ * ree.c: Likewise.
+ * reload.c: Likewise.
+ * reload1.c: Likewise.
+ * reorg.c: Likewise.
+ * resource.c: Likewise.
+ * rtl.def: Likewise.
+ * rtl.h: Likewise.
+ * rtlanal.c: Likewise.
+ * sched-deps.c: Likewise.
+ * sched-ebb.c: Likewise.
+ * sched-rgn.c: Likewise.
+ * sel-sched-ir.c: Likewise.
+ * sel-sched.c: Likewise.
+ * shrink-wrap.c: Likewise.
+ * simplify-rtx.c: Likewise.
+ * symtab.c: Likewise.
+ * target.def: Likewise.
+ * toplev.c: Likewise.
+ * tree-call-cdce.c: Likewise.
+ * tree-cfg.c: Likewise.
+ * tree-complex.c: Likewise.
+ * tree-core.h: Likewise.
+ * tree-eh.c: Likewise.
+ * tree-inline.c: Likewise.
+ * tree-loop-distribution.c: Likewise.
+ * tree-nrv.c: Likewise.
+ * tree-profile.c: Likewise.
+ * tree-sra.c: Likewise.
+ * tree-ssa-alias.c: Likewise.
+ * tree-ssa-dce.c: Likewise.
+ * tree-ssa-dom.c: Likewise.
+ * tree-ssa-forwprop.c: Likewise.
+ * tree-ssa-loop-im.c: Likewise.
+ * tree-ssa-loop-ivcanon.c: Likewise.
+ * tree-ssa-loop-ivopts.c: Likewise.
+ * tree-ssa-loop-niter.c: Likewise.
+ * tree-ssa-phionlycprop.c: Likewise.
+ * tree-ssa-phiopt.c: Likewise.
+ * tree-ssa-propagate.c: Likewise.
+ * tree-ssa-threadedge.c: Likewise.
+ * tree-ssa-threadupdate.c: Likewise.
+ * tree-ssa-uninit.c: Likewise.
+ * tree-ssanames.c: Likewise.
+ * tree-streamer-out.c: Likewise.
+ * tree.c: Likewise.
+ * tree.h: Likewise.
+ * vr-values.c: Likewise.
+
2019-01-09 Uroš Bizjak <ubizjak@gmail.com>
* config/i386/i386-protos.h (ix86_expand_xorsign): New prototype.
USE_GCC_STDINT = @use_gcc_stdint@
# The configure script will set this to collect2$(exeext), except on a
-# (non-Unix) host which can not build collect2, for which it will be
+# (non-Unix) host which cannot build collect2, for which it will be
# set to empty.
COLLECT2 = @collect2@
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * exp_ch9.adb: Mechanically replace "can not" with "cannot".
+ * libgnat/s-regpat.ads: Likewise.
+ * par-ch4.adb: Likewise.
+ * set_targ.adb: Likewise.
+ * types.ads: Likewise.
+
2019-01-08 Justin Squirek <squirek@adacore.com>
Revert:
raise Program_Error;
end case;
- -- When exceptions can not be propagated, we never need to call
+ -- When exceptions cannot be propagated, we never need to call
-- Exception_Complete_Entry_Body.
if No_Exception_Handlers_Set then
-- N'th parenthesized subexpressions; Matches (0) is for the whole
-- expression.
--
- -- Non-capturing parenthesis (introduced with (?:...)) can not be
+ -- Non-capturing parentheses (introduced with (?:...)) cannot be
-- retrieved and do not count in the match array index.
--
-- For instance, if your regular expression is: "a((b*)c+)(d+)", then
-- called in all contexts where a right parenthesis cannot legitimately
-- follow an expression.
- -- Error recovery: can not raise Error_Resync
+ -- Error recovery: cannot raise Error_Resync
function P_Expression_No_Right_Paren return Node_Id is
Expr : constant Node_Id := P_Expression;
begin
-- First step: see if the -gnateT switch is present. As we have noted,
- -- this has to be done very early, so can not depend on the normal circuit
+ -- this has to be done very early, so cannot depend on the normal circuit
-- for reading switches and setting switches in Opt. The following code
-- will set Opt.Target_Dependent_Info_Read_Name if the switch -gnateT=name
-- is present in the options string.
-- Types Used for Text Buffer Handling --
-----------------------------------------
- -- We can not use type String for text buffers, since we must use the
+ -- We cannot use type String for text buffers, since we must use the
-- standard 32-bit integer as an index value, since we count on all index
-- values being the same size.
{
alias_set_type set;
- /* We can not give up with -fno-strict-aliasing because we need to build
+ /* We cannot give up with -fno-strict-aliasing because we need to build
proper type representation for possible functions which are build with
-fstrict-aliasing. */
/* Handle structure type equality for pointer types, arrays and vectors.
This is easy to do, because the code bellow ignore canonical types on
these anyway. This is important for LTO, where TYPE_CANONICAL for
- pointers can not be meaningfuly computed by the frotnend. */
+ pointers cannot be meaningfully computed by the frontend. */
if (canonical_type_used_p (t))
{
/* In LTO we set canonical types for all types where it makes
}
/* Assign the alias set to both p and t.
- We can not call get_alias_set (p) here as that would trigger
+ We cannot call get_alias_set (p) here as that would trigger
infinite recursion when p == t. In other cases it would just
trigger unnecesary legwork of rebuilding the pointer again. */
gcc_checking_assert (p == TYPE_MAIN_VARIANT (p));
}
/* BASE1 and BASE2 are decls. Return 1 if they refer to same object, 0
- if they refer to different objects and -1 if we can not decide. */
+ if they refer to different objects and -1 if we cannot decide. */
int
compare_base_decls (tree base1, tree base2)
symtab_node *x_node = symtab_node::get_create (x_decl)
->ultimate_alias_target ();
- /* External variable can not be in section anchor. */
+ /* External variable cannot be in section anchor. */
if (!x_node->definition)
return 0;
x_base = XEXP (DECL_RTL (x_node->decl), 0);
ways.
If both memory references are volatile, then there must always be a
- dependence between the two references, since their order can not be
+ dependence between the two references, since their order cannot be
changed. A volatile and non-volatile reference can be interchanged
though.
gcc_assert (TREE_OPERAND (addr, 0) == fndecl);
TREE_OPERAND (addr, 0) = builtin_decl_explicit (ext_call);
- /* If we will emit code after the call, the call can not be a tail call.
+ /* If we will emit code after the call, the call cannot be a tail call.
If it is emitted as a tail call, a barrier is emitted after it, and
then all trailing code is removed. */
if (!ignore)
}
#ifdef REG_PARM_STACK_SPACE
- /* If outgoing reg parm stack space changes, we can not do sibcall. */
+ /* If outgoing reg parm stack space changes, we cannot do sibcall. */
if (OUTGOING_REG_PARM_STACK_SPACE (funtype)
!= OUTGOING_REG_PARM_STACK_SPACE (TREE_TYPE (current_function_decl))
|| (reg_parm_stack_space != REG_PARM_STACK_SPACE (current_function_decl)))
emit_move_insn (temp, valreg);
- /* The return value from a malloc-like function can not alias
+ /* The return value from a malloc-like function cannot alias
anything else. */
last = get_last_insn ();
add_reg_note (last, REG_NOALIAS, temp);
return info.changed;
}
-/* Return true when cgraph_node can not return or throw and thus
+/* Return true when cgraph_node cannot return or throw and thus
it is safe to ignore its side effects for IPA analysis. */
bool
== (ECF_NORETURN | ECF_NOTHROW));
}
-/* Return true when call of edge can not lead to return from caller
+/* Return true when call of edge cannot lead to return from caller
and thus it is safe to ignore its side effects for IPA analysis
when computing side effects of the caller.
FIXME: We could actually mark all edges that have no reaching
struct GTY((for_user)) section_hash_entry
{
int ref_count;
- char *name; /* As long as this datastructure stays in GGC, we can not put
+ char *name; /* As long as this datastructure stays in GGC, we cannot put
string at the tail of structure of GGC dies in horrible
way */
};
void *data,
bool include_overwrite);
- /* If node can not be interposable by static or dynamic linker to point to
+ /* If node cannot be interposable by static or dynamic linker to point to
different definition, return this symbol. Otherwise look for alias with
such property and if none exists, introduce new one. */
symtab_node *noninterposable_alias (void);
/* C++ frontend produce same body aliases and extra name aliases for
virtual functions and vtables that are obviously equivalent.
Those aliases are bit special, especially because C++ frontend
- visibility code is so ugly it can not get them right at first time
+ visibility code is so ugly it cannot get them right at first time
and their visibility needs to be copied from their "masters" at
the end of parsing. */
unsigned cpp_implicit_alias : 1;
/* False when there is something makes versioning impossible. */
unsigned versionable : 1;
- /* False when function calling convention and signature can not be changed.
+ /* False when function calling convention and signature cannot be changed.
This is the case when __builtin_apply_args is used. */
unsigned can_change_signature : 1;
compilation units. */
bool can_be_local_p (void);
- /* Return true when cgraph_node can not return or throw and thus
+ /* Return true when cgraph_node cannot return or throw and thus
it is safe to ignore its side effects for IPA analysis. */
bool cannot_return_p (void);
/* Adjust all offsets in contexts by given number of bits. */
void offset_by (HOST_WIDE_INT);
- /* Use when we can not track dynamic type change. This speculatively assume
+ /* Use when we cannot track dynamic type change. This speculatively assumes
type change is not happening. */
void possible_dynamic_type_change (bool, tree otr_type = NULL);
/* Assume that both THIS and a given context is valid and strenghten THIS
/* Verify edge count and frequency. */
bool verify_count ();
- /* Return true when call of edge can not lead to return from caller
+ /* Return true when call of edge cannot lead to return from caller
and thus it is safe to ignore its side effects for IPA analysis
when computing side effects of the caller. */
bool cannot_lead_to_return_p (void);
/* Extern inlines can always go, we will use the external definition. */
if (DECL_EXTERNAL (decl))
return true;
- /* When function is needed, we can not remove it. */
+ /* When function is needed, we cannot remove it. */
if (force_output || used_from_other_partition)
return false;
if (DECL_STATIC_CONSTRUCTOR (decl)
DECL_ARGUMENTS (new_decl) = NULL;
DECL_INITIAL (new_decl) = NULL;
DECL_RESULT (new_decl) = NULL;
- /* We can not do DECL_RESULT (new_decl) = NULL; here because of LTO partitioning
+ /* We cannot do DECL_RESULT (new_decl) = NULL; here because of LTO partitioning
sometimes storing only clone decl instead of original. */
/* Generate a new name for the new version. */
symtab->state = CONSTRUCTION;
input_location = UNKNOWN_LOCATION;
- /* Ugly, but the fixup can not happen at a time same body alias is created;
+ /* Ugly, but the fixup cannot happen at a time same body alias is created;
C++ FE is confused about the COMDAT groups being right. */
if (symtab->cpp_implicit_aliases_done)
FOR_EACH_SYMBOL (node)
Also we need to be careful to not move stack pointer
such that we create stack accesses outside the allocated
area. We can combine an allocation into the first insn,
- or a deallocation into the second insn. We can not
+ or a deallocation into the second insn. We cannot
combine an allocation followed by a deallocation.
The only somewhat frequent occurrence of the later is when
}
/* Try to split PATTERN found in INSN. This returns NULL_RTX if
- PATTERN can not be split. Otherwise, it returns an insn sequence.
+ PATTERN cannot be split. Otherwise, it returns an insn sequence.
This is a wrapper around split_insns which ensures that the
reg_stat vector is made larger if the splitter creates a new
register. */
}
/* On the x86 -fsplit-stack and -fstack-protector both use the same
- field in the TCB, so they can not be used together. */
+ field in the TCB, so they cannot be used together. */
static bool
ix86_supports_split_stack (bool report ATTRIBUTE_UNUSED,
/* Generate code to load VALS, which is a PARALLEL containing only
constants (for vec_init) or CONST_VECTOR, efficiently into a
register. Returns an RTX to copy into the register, or NULL_RTX
- for a PARALLEL that can not be converted into a CONST_VECTOR. */
+ for a PARALLEL that cannot be converted into a CONST_VECTOR. */
static rtx
aarch64_simd_make_constant (rtx vals)
{
/* Loaded using DUP. */
return const_dup;
else if (const_vec != NULL_RTX)
- /* Load from constant pool. We can not take advantage of single-cycle
+ /* Load from constant pool. We cannot take advantage of single-cycle
LD1 because we need a PC-relative addressing mode. */
return const_vec;
else
/* A PARALLEL containing something not valid inside CONST_VECTOR.
- We can not construct an initializer. */
+ We cannot construct an initializer. */
return NULL_RTX;
}
;; the lock is cleared by a normal load or store. This means we cannot
;; expand a ll/sc sequence before reload, lest a register spill is
;; inserted inside the sequence. It is also UNPREDICTABLE whether the
-;; lock is cleared by a TAKEN branch. This means that we can not expand
+;; lock is cleared by a TAKEN branch. This means that we cannot expand
;; a ll/sc sequence containing a branch (i.e. compare-and-swap) until after
;; the final basic-block reordering pass.
Brcc.d b, c, s9
Brcc.d b, u6, s9
- For cc={GT, LE, GTU, LEU}, u6=63 can not be allowed,
+ For cc={GT, LE, GTU, LEU}, u6=63 cannot be allowed,
since they are encoded by the assembler as {GE, LT, HS, LS} 64, which
does not have a delay slot
(match_test "REGNO (op) == (TARGET_BIG_ENDIAN ? 59 : 58)")
(match_test "TARGET_V2")))
-; Unfortunately, we can not allow a const_int_operand before reload, because
+; Unfortunately, we cannot allow a const_int_operand before reload, because
; reload needs a non-void mode to guide it how to reload the inside of a
; {sign_}extend.
(define_predicate "extend_operand"
#include "c-family/c-pragma.h"
#include "stringpool.h"
-/* Output C specific EABI object attributes. These can not be done in
+/* Output C specific EABI object attributes. These cannot be done in
arm.c because they require information from the C frontend. */
static void
error ("iWMMXt unsupported under Thumb mode");
if (TARGET_HARD_TP && TARGET_THUMB1_P (flags))
- error ("can not use -mtp=cp15 with 16-bit Thumb");
+ error ("cannot use -mtp=cp15 with 16-bit Thumb");
if (TARGET_THUMB_P (flags) && TARGET_VXWORKS_RTP && flag_pic)
{
/* Generate code to load VALS, which is a PARALLEL containing only
constants (for vec_init) or CONST_VECTOR, efficiently into a
register. Returns an RTX to copy into the register, or NULL_RTX
- for a PARALLEL that can not be converted into a CONST_VECTOR. */
+ for a PARALLEL that cannot be converted into a CONST_VECTOR. */
rtx
neon_make_constant (rtx vals)
return target;
else if (const_vec != NULL_RTX)
/* Load from constant pool. On Cortex-A8 this takes two cycles
- (for either double or quad vectors). We can not take advantage
+ (for either double or quad vectors). We cannot take advantage
of single-cycle VLD1 because we need a PC-relative addressing
mode. */
return const_vec;
else
/* A PARALLEL containing something not valid inside CONST_VECTOR.
- We can not construct an initializer. */
+ We cannot construct an initializer. */
return NULL_RTX;
}
Mnode * mp;
/* If the minipool starts before the end of FIX->INSN then this FIX
- can not be placed into the current pool. Furthermore, adding the
+ cannot be placed into the current pool. Furthermore, adding the
new constant pool entry may cause the pool to start FIX_SIZE bytes
earlier. */
if (minipool_vector_head &&
: GET_MODE_SIZE (MODE) >= 4 ? BASE_REGS \
: LO_REGS)
-/* For Thumb we can not support SP+reg addressing, so we return LO_REGS
+/* For Thumb we cannot support SP+reg addressing, so we return LO_REGS
instead of BASE_REGS. */
#define MODE_BASE_REG_REG_CLASS(MODE) BASE_REG_CLASS
: ARM_REGNO_OK_FOR_BASE_P (REGNO))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
- For Thumb, we can not use SP + reg, so reject SP. */
+ For Thumb, we cannot use SP + reg, so reject SP. */
#define REGNO_MODE_OK_FOR_REG_BASE_P(X, MODE) \
REGNO_MODE_OK_FOR_BASE_P (X, QImode)
: ARM_REG_OK_FOR_INDEX_P (X))
/* Nonzero if X can be the base register in a reg+reg addressing mode.
- For Thumb, we can not use SP + reg, so reject SP. */
+ For Thumb, we cannot use SP + reg, so reject SP. */
#define REG_MODE_OK_FOR_REG_BASE_P(X, MODE) \
REG_OK_FOR_INDEX_P (X)
\f
; CLOB means that the condition codes are altered in an undefined manner, if
; they are altered at all
;
-; UNCONDITIONAL means the instruction can not be conditionally executed and
+; UNCONDITIONAL means the instruction cannot be conditionally executed and
; that the instruction does not use or alter the condition codes.
;
; NOCOND means that the instruction does not use or alter the condition
)
;; DImode comparisons. The generic code generates branches that
-;; if-conversion can not reduce to a conditional compare, so we do
+;; if-conversion cannot reduce to a conditional compare, so we do
;; that directly.
(define_insn_and_split "*arm_cmpdi_insn"
"cortex_r4_fmacs"
"arm_no_early_mul_dep")
-;; Double precision operations. These can not dual issue.
+;; Double precision operations. These cannot dual issue.
(define_insn_reservation "cortex_r4_fmacd" 20
(and (eq_attr "tune_cortexr4" "yes")
Mnode *mp;
/* If the minipool starts before the end of FIX->INSN then this FIX
- can not be placed into the current pool. Furthermore, adding the
+ cannot be placed into the current pool. Furthermore, adding the
new constant pool entry may cause the pool to start FIX_SIZE bytes
earlier. */
if (minipool_vector_head
#define MOVE_MAX 4
/* Shift counts are truncated to 6-bits (0 to 63) instead of the expected
- 5-bits, so we can not define SHIFT_COUNT_TRUNCATED to true for this
+ 5-bits, so we cannot define SHIFT_COUNT_TRUNCATED to true for this
target. */
#define SHIFT_COUNT_TRUNCATED 0
/* Provide stubs for the hooks defined by darwin.h
TARGET_EXTRA_PRE_INCLUDES, TARGET_EXTRA_INCLUDES
- As both, gcc and gfortran link in incpath.o, we can not
+ As both, gcc and gfortran link in incpath.o, we cannot
conditionally undefine said hooks if fortran is build.
However, we can define do-nothing stubs of said hooks as
we are not interested in objc include files in Fortran.
; use next_active_insn to look at the 'following' insn. That should
; exist, because peephole2 runs after reload, and there has to be
; a return after an fp_int insn.
-; ??? However, we can not even ordinarily match the preceding insn;
+; ??? However, we cannot even ordinarily match the preceding insn;
; there is some bug in the generators such that then it leaves out
; the check for PARALLEL before the length check for the then-second
; main insn. Observed when compiling compatibility-atomic-c++0x.cc
a C99 variable-length array variable always has alignment of at least 16 bytes.
This was added to allow use of aligned SSE instructions at arrays. This
- rule is meant for static storage (where compiler can not do the analysis
+ rule is meant for static storage (where compiler cannot do the analysis
by itself). We follow it for automatic variables only when convenient.
We fully control everything in the function compiled and functions from
- other unit can not rely on the alignment.
+ other unit cannot rely on the alignment.
Exclude va_list type. It is the common case of local array where
- we can not benefit from the alignment.
+ we cannot benefit from the alignment.
TODO: Probably one should optimize for size only when var is not escaping. */
if (TARGET_64BIT && optimize_function_for_speed_p (cfun)
#undef STACK_REALIGN_DEFAULT
#define STACK_REALIGN_DEFAULT (TARGET_64BIT ? 0 : 1)
-/* Old versions of the Solaris assembler can not handle the difference of
+/* Old versions of the Solaris assembler cannot handle the difference of
labels in different sections, so force DW_EH_PE_datarel if so. */
#ifndef HAVE_AS_IX86_DIFF_SECT_DELTA
#undef ASM_PREFERRED_EH_DATA_FORMAT
consider (plus (%a5) (const (unspec))) to be a good enough
operand for push, so it forces it into a register. The bad
thing about this is that combiner, due to copy propagation and other
- optimizations, sometimes can not later fix this. As a consequence,
+ optimizations, sometimes cannot later fix this. As a consequence,
additional register may be allocated resulting in a spill.
For reference, see args processing loops in
calls.c:emit_library_call_value_1.
#define SLOW_BYTE_ACCESS TARGET_SLOW_BYTES
/* Shift counts are truncated to 6-bits (0 to 63) instead of the expected
- 5-bits, so we can not define SHIFT_COUNT_TRUNCATED to true for this
+ 5-bits, so we cannot define SHIFT_COUNT_TRUNCATED to true for this
target. */
#define SHIFT_COUNT_TRUNCATED 0
DEFINE_AUTOMATON).
All define_reservations and define_cpu_units should have unique
- names which can not be "nothing".
+ names which cannot be "nothing".
o (exclusion_set string string) means that each CPU function unit
- in the first string can not be reserved simultaneously with each
+ in the first string cannot be reserved simultaneously with each
unit whose name is in the second string and vise versa. CPU
units in the string are separated by commas. For example, it is
useful for description CPU with fully pipelined floating point
floating point insns or only double floating point insns.
o (presence_set string string) means that each CPU function unit in
- the first string can not be reserved unless at least one of units
+ the first string cannot be reserved unless at least one of units
whose names are in the second string is reserved. This is an
asymmetric relation. CPU units in the string are separated by
commas. For example, it is useful for description that slot1 is
reserved after slot0 reservation for a VLIW processor.
o (absence_set string string) means that each CPU function unit in
- the first string can not be reserved only if each unit whose name
+ the first string cannot be reserved only if each unit whose name
is in the second string is not reserved. This is an asymmetric
relation (actually exclusion set is analogous to this one but it
is symmetric). CPU units in the string are separated by commas.
- For example, it is useful for description that slot0 can not be
+ For example, it is useful for description that slot0 cannot be
reserved after slot1 or slot2 reservation for a VLIW processor.
o (define_bypass number out_insn_names in_insn_names) names bypass with
case, you describe common part and use one its name (the 1st
parameter) in regular expression in define_insn_reservation. All
define_reservations, define results and define_cpu_units should
- have unique names which can not be "nothing".
+ have unique names which cannot be "nothing".
o (define_insn_reservation name default_latency condition regexpr)
describes reservation of cpu functional units (the 3nd operand)
(exclusion_set "r20kc_fpu_add" "r20kc_fpu_mpy, r20kc_fpu_divsqrt")
(exclusion_set "r20kc_fpu_mpy" "r20kc_fpu_divsqrt")
-;; After branch any insn can not be issued.
+;; After branch any insn cannot be issued.
(absence_set "r20kc_iss0,r20kc_iss1" "r20kc_ixub_branch")
;;
;; register as destination.
;; ??? SB-1 can co-issue a load with a dependent arith insn if it executes on
-;; an EX unit. Can not co-issue if the dependent insn executes on an LS unit.
+;; an EX unit. Cannot co-issue if the dependent insn executes on an LS unit.
;; SB-1A can always co-issue here.
;; A load normally has a latency of zero cycles. In some cases, dependent
(eq_attr "type" "load,prefetch"))
"sb1_ls0 | sb1_ls1")
-;; Can not co-issue fpload with fp exe when in 32-bit mode.
+;; Cannot co-issue fpload with fp exe when in 32-bit mode.
(define_insn_reservation "ir_sb1_fpload" 0
(and (eq_attr "cpu" "sb1,sb1a")
(eq_attr "type" "const,arith,logical,move,signext"))
"sb1_ls1 | sb1_ex1 | sb1_ex0")
-;; On SB-1A, simple alu instructions can not execute on the LS1 unit, and we
+;; On SB-1A, simple alu instructions cannot execute on the LS1 unit, and we
;; have none of the above problems.
(define_insn_reservation "ir_sb1a_simple_alu" 1
frame_adjust_insn = emit_insn (frame_adjust_insn);
/* Because (tmp_reg <- full_value) may be split into two
- rtl patterns, we can not set its RTX_FRAME_RELATED_P.
+ rtl patterns, we cannot set its RTX_FRAME_RELATED_P.
We need to construct another (sp <- sp + full_value)
and then insert it into sp_adjust_insn's reg note to
represent a frame related expression.
int sp_adjust;
/* Prior to reloading, we can't tell how many registers must be saved.
- Thus we can not determine whether this function has null epilogue. */
+ Thus we cannot determine whether this function has null epilogue. */
if (!reload_completed)
return 0;
(not (match_code "high,const,symbol_ref,label_ref")))
{
/* If the constant op does NOT satisfy Is20 nor Ihig,
- we can not perform move behavior by a single instruction. */
+ we cannot perform move behavior by a single instruction. */
if (CONST_INT_P (op)
&& !satisfies_constraint_Is20 (op)
&& !satisfies_constraint_Ihig (op))
(not (match_code "high,const,symbol_ref,label_ref")))
{
/* If the constant op does NOT satisfy Is20 nor Ihig,
- we can not perform move behavior by a single instruction. */
+ we cannot perform move behavior by a single instruction. */
if (GET_CODE (op) == CONST_VECTOR
&& !satisfies_constraint_CVs2 (op)
&& !satisfies_constraint_CVhi (op))
the callee registers. */
if (VAL_14_BITS_P (actual_fsize) && local_fsize == 0)
merge_sp_adjust_with_store = 1;
- /* Can not optimize. Adjust the stack frame by actual_fsize
+ /* Cannot optimize. Adjust the stack frame by actual_fsize
bytes. */
else
set_reg_plus_d (STACK_POINTER_REGNUM, STACK_POINTER_REGNUM,
(define_cpu_unit "ppce300c3_decode_0,ppce300c3_decode_1" "ppce300c3_most")
;; We don't simulate general issue queue (GIC). If we have SU insn
-;; and then SU1 insn, they can not be issued on the same cycle
+;; and then SU1 insn, they cannot be issued on the same cycle
;; (although SU1 insn and then SU insn can be issued) because the SU
;; insn will go to SU1 from GIC0 entry. Fortunately, the first cycle
;; multipass insn scheduling will find the situation and issue the SU1
;; We could describe completion buffers slots in combination with the
;; retirement units and the order of completion but the result
-;; automaton would behave in the same way because we can not describe
+;; automaton would behave in the same way because we cannot describe
;; real latency time with taking in order completion into account.
;; Actually we could define the real latency time by querying reserved
;; automaton units but the current scheduler uses latency time before
recognizes some LO_SUM addresses as valid although this
function says opposite. In most cases, LRA through different
transformations can generate correct code for address reloads.
- It can not manage only some LO_SUM cases. So we need to add
+ It cannot manage only some LO_SUM cases. So we need to add
code analogous to one in rs6000_legitimize_reload_address for
LOW_SUM here saying that some addresses are still valid. */
large_toc_ok = (lra_in_progress && TARGET_CMODEL != CMODEL_SMALL
length fields that follow. However, if you omit the optional
fields, the assembler outputs zeros for all optional fields
anyways, giving each variable length field is minimum length
- (as defined in sys/debug.h). Thus we can not use the .tbtab
+ (as defined in sys/debug.h). Thus we cannot use the .tbtab
pseudo-op at all. */
/* An all-zero word flags the start of the tbtab, for debuggers
/* Target pragma. */
-/* resolve_overloaded_builtin can not be defined the normal way since
+/* resolve_overloaded_builtin cannot be defined the normal way since
it is defined in code which technically belongs to the
front-end. */
#define REGISTER_TARGET_PRAGMAS() \
to the pressure on R0. */
/* Enable sched1 for SH4 if the user explicitly requests.
When sched1 is enabled, the ready queue will be reordered by
- the target hooks if pressure is high. We can not do this for
+ the target hooks if pressure is high. We cannot do this for
PIC, SH3 and lower as they give spill failures for R0. */
if (!TARGET_HARD_SH4 || flag_pic)
flag_schedule_insns = 0;
})
;; The use of operand 1 / 2 helps us distinguish case table jumps
-;; which can be present in structured code from indirect jumps which can not
+;; which can be present in structured code from indirect jumps which cannot
;; be present in structured code. This allows -fprofile-arcs to work.
;; For SH1 processors.
#ifdef SUPPORT_UNPACK_PIXEL
/* Due to type conflicts, unpacking of pixel types and boolean shorts
- * can not simultaneously be supported. By default, the boolean short is
+ * cannot simultaneously be supported. By default, the boolean short is
* supported.
*/
static inline vec_uint4 vec_unpackh(vec_pixel8 a)
#ifdef SUPPORT_UNPACK_PIXEL
/* Due to type conflicts, unpacking of pixel types and boolean shorts
- * can not simultaneously be supported. By default, the boolean short is
+ * cannot simultaneously be supported. By default, the boolean short is
* supported.
*/
static inline vec_uint4 vec_unpackl(vec_pixel8 a)
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * cp-tree.h: Mechanically replace "can not" with "cannot".
+ * parser.c: Likewise.
+ * pt.c: Likewise.
+
2019-01-08 Paolo Carlini <paolo.carlini@oracle.com>
* decl.c (grok_reference_init): Improve error location.
#define IMPLICIT_CONV_EXPR_NONTYPE_ARG(NODE) \
(TREE_LANG_FLAG_1 (IMPLICIT_CONV_EXPR_CHECK (NODE)))
-/* Nonzero means that an object of this type can not be initialized using
+/* Nonzero means that an object of this type cannot be initialized using
an initializer list. */
#define CLASSTYPE_NON_AGGREGATE(NODE) \
(LANG_TYPE_CLASS_CHECK (NODE)->non_aggregate)
/* Search for a declarator name, or any other declarator that goes
after the point where the ellipsis could appear in a parameter
- pack. If we find any of these, then this declarator can not be
+ pack. If we find any of these, then this declarator cannot be
made into a parameter pack. */
bool found = false;
while (declarator && !found)
}
/* Print the list of functions at FNS, going through all the overloads
- for each element of the list. Alternatively, FNS can not be a
+ for each element of the list. Alternatively, FNS may not be a
TREE_LIST, in which case it will be printed together with all the
overloads.
/* If the argument deduction results is a METHOD_TYPE,
then there is a problem.
METHOD_TYPE doesn't map to any real C++ type the result of
- the deduction can not be of that type. */
+ the deduction cannot be of that type. */
if (TREE_CODE (arg) == METHOD_TYPE)
return unify_method_type_error (explain_p, arg);
(set (reg X) (reg Y))
(set (reg Y) (reg X))
- This can not happen since the set of (reg Y) would have killed the
+ This cannot happen since the set of (reg Y) would have killed the
set of (reg X) making it unavailable at the start of this block. */
while (1)
{
/* Do not generate a tag for incomplete records. */
&& COMPLETE_TYPE_P (type)
/* Do not generate a tag for records of variable size,
- since this type can not be properly described in the
+ since this type cannot be properly described in the
DBX format, and it confuses some tools such as objdump. */
&& tree_fits_uhwi_p (TYPE_SIZE (type)))
{
scanned and regs_asm_clobbered is filled out.
For all ASM_OPERANDS, we must traverse the vector of input
- operands. We can not just fall through here since then we
+ operands. We cannot just fall through here since then we
would be confused by the ASM_INPUT rtx inside ASM_OPERANDS,
which do not indicate traditional asms unlike their normal
usage. */
Often the CFG may be better viewed as integral part of instruction
chain, than structure built on the top of it. Updating the compiler's
-intermediate representation for instructions can not be easily done
+intermediate representation for instructions cannot be easily done
without proper maintenance of the CFG simultaneously.
@menu
@subsubheading Declaring the variable
-Global register variables can not have initial values, because an
+Global register variables cannot have initial values, because an
executable file has no means to supply initial contents for a register.
When selecting a register, choose one that is normally saved and
@findex MULTILIB_OPTIONS
@item MULTILIB_OPTIONS
For some targets, invoking GCC in different ways produces objects
-that can not be linked together. For example, for some targets GCC
+that cannot be linked together. For example, for some targets GCC
produces both big and little endian code. For these targets, you must
arrange for multiple versions of @file{libgcc.a} to be compiled, one for
each set of incompatible options. When GCC invokes the linker, it
find pointers to mark or relocate, which is why it is marked with the
@code{atomic} option.
-Note that, currently, global variables can not be marked with
+Note that, currently, global variables cannot be marked with
@code{atomic}; only fields of a struct can. This is a known
limitation. It would be useful to be able to mark global pointers
with @code{atomic} to make the PCH machinery aware of them so that
During the incremental link (by @option{-r}) the linker plugin will default to
@option{rel}. With current interfaces to GNU Binutils it is however not
possible to link incrementally LTO objects and non-LTO objects into a single
-mixed object file. In the case any of object files in incremental link can not
+mixed object file. In the case any of object files in incremental link cannot
be used for link-time optimization the linker plugin will output warning and
use @samp{nolto-rel}. To maintain the whole program optimization it is
recommended to link such objects into static library instead. Alternatively it
@item -mfixed-range=@var{register-range}
@opindex mfixed-range
Generate code treating the given register range as fixed registers.
-A fixed register is one that the register allocator can not use. This is
+A fixed register is one that the register allocator cannot use. This is
useful when compiling kernel code. A register range is specified as
two registers separated by a dash. Multiple register ranges can be
specified separated by a comma.
not be reachable in the large code model.
Note that @option{-mindirect-branch=thunk-extern} is incompatible with
-@option{-fcf-protection=branch} since the external thunk can not be modified
+@option{-fcf-protection=branch} since the external thunk cannot be modified
to disable control-flow check.
@item -mfunction-return=@var{choice}
between normal inter-procedural passes and small inter-procedural
passes. A @emph{small inter-procedural pass}
(@code{SIMPLE_IPA_PASS}) is a pass that does
-everything at once and thus it can not be executed during WPA in
+everything at once and thus it cannot be executed during WPA in
WHOPR mode. It defines only the @emph{Execute} stage and during
this stage it accesses and modifies the function bodies. Such
passes are useful for optimization at LGEN or LTRANS time and are
@deffn {MD Expression} define_special_memory_constraint name docstring exp
Use this expression for constraints that match a subset of all memory
-operands: that is, @code{reload} can not make them match by reloading
+operands: that is, @code{reload} cannot make them match by reloading
the address as it is described for @code{define_memory_constraint} or
such address reload is undesirable with the performance point of view.
all cases. This expected alignment is also in bytes, just like operand 4.
Expected size, when unknown, is set to @code{(const_int -1)}.
Operand 7 is the minimal size of the block and operand 8 is the
-maximal size of the block (NULL if it can not be represented as CONST_INT).
-Operand 9 is the probable maximal size (i.e.@: we can not rely on it for
+maximal size of the block (NULL if it cannot be represented as CONST_INT).
+Operand 9 is the probable maximal size (i.e.@: we cannot rely on it for
correctness, but it can be used for choosing proper code sequence for a
given size).
@var{reservation-name} is a string giving name of @var{regexp}.
Functional unit names and reservation names are in the same name
space. So the reservation names should be different from the
-functional unit names and can not be the reserved name @samp{nothing}.
+functional unit names and cannot be the reserved name @samp{nothing}.
@findex define_bypass
@cindex instruction latency time
separated by white-spaces.
The first construction (@samp{exclusion_set}) means that each
-functional unit in the first string can not be reserved simultaneously
+functional unit in the first string cannot be reserved simultaneously
with a unit whose name is in the second string and vice versa. For
example, the construction is useful for describing processors
(e.g.@: some SPARC processors) with a fully pipelined floating point
point insns or only double floating point insns.
The second construction (@samp{presence_set}) means that each
-functional unit in the first string can not be reserved unless at
+functional unit in the first string cannot be reserved unless at
least one of pattern of units whose names are in the second string is
reserved. This is an asymmetric relation. For example, it is useful
for description that @acronym{VLIW} @samp{slot1} is reserved after
(absence_set "slot0" "slot1, slot2")
@end smallexample
-Or @samp{slot2} can not be reserved if @samp{slot0} and unit @samp{b0}
+Or @samp{slot2} cannot be reserved if @samp{slot0} and unit @samp{b0}
are reserved or @samp{slot1} and unit @samp{b1} are reserved. In
this case we could write
multiplication insns can be executed only in the second integer
pipeline and their results are ready correspondingly in 9 and 4
cycles. The integer division is not pipelined, i.e.@: the subsequent
-integer division insn can not be issued until the current division
+integer division insn cannot be issued until the current division
insn finished. Floating point insns are fully pipelined and their
results are ready in 3 cycles. Where the result of a floating point
insn is used by an integer insn, an additional delay of one cycle is
To configure the hook, you set the global variable
@code{__objc_msg_forward2} to a function with the same argument and
return types of @code{objc_msg_lookup()}. When
-@code{objc_msg_lookup()} can not find a method implementation, it
+@code{objc_msg_lookup()} cannot find a method implementation, it
invokes the hook function you provided to get a method implementation
to return. So, in practice @code{__objc_msg_forward2} allows you to
extend @code{objc_msg_lookup()} by adding some custom code that is
The mode @var{m} is the same as the mode that would be used for
@var{loc} if it were a register.
-A @code{sign_extract} can not appear as an lvalue, or part thereof,
+A @code{sign_extract} cannot appear as an lvalue, or part thereof,
in RTL.
@findex zero_extract
If @var{lval} is a @code{zero_extract}, then the referenced part of
the bit-field (a memory or register reference) specified by the
@code{zero_extract} is given the value @var{x} and the rest of the
-bit-field is not changed. Note that @code{sign_extract} can not
+bit-field is not changed. Note that @code{sign_extract} cannot
appear in @var{lval}.
If @var{lval} is @code{(cc0)}, it has no machine mode, and @var{x} may
It is not uncommon for limitations of calling conventions to prevent
tail calls to functions outside the current unit of translation, or
during PIC compilation. The hook is used to enforce these restrictions,
-as the @code{sibcall} md pattern can not fail, or fall over to a
+as the @code{sibcall} md pattern cannot fail, or fall over to a
``normal'' call. The criteria for successful sibling call optimization
may vary greatly between different architectures.
@end deftypefn
bool del = false;
/* If ANY of the store_infos match the cselib group that is
- being deleted, then the insn can not be deleted. */
+ being deleted, then the insn cannot be deleted. */
while (store_info)
{
if ((store_info->group_id == -1)
if (mode != BLKmode && mode != VOIDmode)
{
- /* If this is a register which can not be accessed by words, copy it
+ /* If this is a register which cannot be accessed by words, copy it
to a pseudo register. */
if (REG_P (op))
op = copy_to_reg (op);
/* True if dbr_schedule has already been called for this function. */
bool dbr_scheduled_p;
- /* True if current function can not throw. Unlike
+ /* True if current function cannot throw. Unlike
TREE_NOTHROW (current_function_decl) it is set even for overwritable
function where currently compiled version of it is nothrow. */
bool nothrow;
/* We're storing this libcall's address into memory instead of
calling it directly. Thus, we must call assemble_external_libcall
- here, as we can not depend on emit_library_call to do it for us. */
+ here, as we cannot depend on emit_library_call to do it for us. */
assemble_external_libcall (personality);
mem = adjust_address (fc, Pmode, sjlj_fc_personality_ofs);
emit_move_insn (mem, personality);
/* Emit code to multiply OP0 and OP1 (where OP1 is an integer constant),
putting the high half of the result in TARGET if that is convenient,
- and return where the result is. If the operation can not be performed,
+ and return where the result is. If the operation cannot be performed,
0 is returned.
MODE is the mode of operation and result.
ALIGN is the maximum alignment we can assume they have.
METHOD describes what kind of copy this is, and what mechanisms may be used.
MIN_SIZE is the minimal size of block to move
- MAX_SIZE is the maximal size of block to move, if it can not be represented
+ MAX_SIZE is the maximal size of block to move, if it cannot be represented
in unsigned HOST_WIDE_INT, than it is mask of all ones.
Return the address of the new block, if memcpy is called and returns it,
if (nops >= 8)
{
create_integer_operand (&ops[6], min_size);
- /* If we can not represent the maximal size,
+ /* If we cannot represent the maximal size,
make parameter NULL. */
if ((HOST_WIDE_INT) max_size != -1)
create_integer_operand (&ops[7], max_size);
}
if (nops == 9)
{
- /* If we can not represent the maximal size,
+ /* If we cannot represent the maximal size,
make parameter NULL. */
if ((HOST_WIDE_INT) probable_max_size != -1)
create_integer_operand (&ops[8], probable_max_size);
if (nops >= 8)
{
create_integer_operand (&ops[6], min_size);
- /* If we can not represent the maximal size,
+ /* If we cannot represent the maximal size,
make parameter NULL. */
if ((HOST_WIDE_INT) max_size != -1)
create_integer_operand (&ops[7], max_size);
}
if (nops == 9)
{
- /* If we can not represent the maximal size,
+ /* If we cannot represent the maximal size,
make parameter NULL. */
if ((HOST_WIDE_INT) probable_max_size != -1)
create_integer_operand (&ops[8], probable_max_size);
return fold_convert_loc (loc, type, tem);
}
-/* Like fold_negate_expr, but return a NEGATE_EXPR tree, if T can not be
+/* Like fold_negate_expr, but return a NEGATE_EXPR tree, if T cannot be
negated in a simpler way. Also allow for T to be NULL_TREE, in which case
return NULL_TREE. */
/* Return whether BASE + OFFSET + BITPOS may wrap around the address
space. This is used to avoid issuing overflow warnings for
- expressions like &p->x which can not wrap. */
+ expressions like &p->x which cannot wrap. */
static bool
pointer_may_wrap_p (tree base, tree offset, poly_int64 bitpos)
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * class.c: Mechanically replace "can not" with "cannot".
+ * decl.c: Likewise.
+ * expr.c: Likewise.
+ * gfc-internals.texi: Likewise.
+ * intrinsic.texi: Likewise.
+ * invoke.texi: Likewise.
+ * io.c: Likewise.
+ * match.c: Likewise.
+ * parse.c: Likewise.
+ * primary.c: Likewise.
+ * resolve.c: Likewise.
+ * symbol.c: Likewise.
+ * trans-array.c: Likewise.
+ * trans-decl.c: Likewise.
+ * trans-intrinsic.c: Likewise.
+ * trans-stmt.c: Likewise.
+
2019-01-09 Thomas Koenig <tkoenig@gcc.gnu.org>
PR fortran/68426
|| attr->select_type_temporary || attr->associate_var;
if (!attr->class_ok)
- /* We can not build the class container yet. */
+ /* We cannot build the class container yet. */
return true;
/* Determine the name of the encapsulating type. */
/* At this point, we know for sure if the symbol is PARAMETER and can thus
determine (and check) whether it can be implied-shape. If it
- was parsed as assumed-size, change it because PARAMETERs can not
+ was parsed as assumed-size, change it because PARAMETERs cannot
be assumed-size.
An explicit-shape-array cannot appear under several conditions.
retval = false;
}
- /* Scalar variables that are bind(c) can not have the pointer
+ /* Scalar variables that are bind(c) cannot have the pointer
or allocatable attributes. */
if (tmp_sym->attr.is_bind_c == 1)
{
gfc_error ("Return type of BIND(C) function %qs at %L cannot "
"be an array", tmp_sym->name, &(tmp_sym->declared_at));
- /* BIND(C) functions can not return a character string. */
+ /* BIND(C) functions cannot return a character string. */
if (bind_c_function && tmp_sym->ts.type == BT_CHARACTER)
if (tmp_sym->ts.u.cl == NULL || tmp_sym->ts.u.cl->length == NULL
|| tmp_sym->ts.u.cl->length->expr_type != EXPR_CONSTANT
1 Simplifying array constructors -- will substitute
iterator values.
Returns false on error, true otherwise.
- NOTE: Will return true even if the expression can not be simplified. */
+ NOTE: Will return true even if the expression cannot be simplified. */
bool
gfc_simplify_expr (gfc_expr *p, int type)
if (pointer && is_pointer)
{
if (context)
- gfc_error ("Variable %qs is PROTECTED and can not appear in a"
+ gfc_error ("Variable %qs is PROTECTED and cannot appear in a"
" pointer association context (%s) at %L",
sym->name, context, &e->where);
return false;
if (!pointer && !is_pointer)
{
if (context)
- gfc_error ("Variable %qs is PROTECTED and can not appear in a"
+ gfc_error ("Variable %qs is PROTECTED and cannot appear in a"
" variable definition context (%s) at %L",
sym->name, context, &e->where);
return false;
if (!pointer && !own_scope && gfc_pure (NULL) && gfc_impure_variable (sym))
{
if (context)
- gfc_error ("Variable %qs can not appear in a variable definition"
+ gfc_error ("Variable %qs cannot appear in a variable definition"
" context (%s) at %L in PURE procedure",
sym->name, context, &e->where);
return false;
if (!gfc_check_vardef_context (assoc->target, pointer, false, false, NULL))
{
if (context)
- gfc_error ("Associate-name %qs can not appear in a variable"
+ gfc_error ("Associate-name %qs cannot appear in a variable"
" definition context (%s) at %L because its target"
- " at %L can not, either",
+ " at %L cannot, either",
name, context, &e->where,
&assoc->target->where);
return false;
value. Those need not be packed into @code{gfc_symtree} structures and are
only @code{gfc_typebound_proc} instances.
-When an operator call or assignment is found that can not be handled in
+When an operator call or assignment is found that cannot be handled in
another way (i.e. neither matches an intrinsic nor interface operator
definition) but that contains a derived-type expression, all type-bound
operators defined on that derived-type are checked for a match with
@item @emph{Return value}:
After @code{GETARG} returns, the @var{VALUE} argument holds the
-@var{POS}th command line argument. If @var{VALUE} can not hold the
+@var{POS}th command line argument. If @var{VALUE} cannot hold the
argument, it is truncated to fit the length of @var{VALUE}. If there are
less than @var{POS} arguments specified at the command line, @var{VALUE}
will be filled with blanks. If @math{@var{POS} = 0}, @var{VALUE} is set
@item @emph{Return value}:
After @code{GET_COMMAND_ARGUMENT} returns, the @var{VALUE} argument holds the
-@var{NUMBER}-th command line argument. If @var{VALUE} can not hold the argument, it is
+@var{NUMBER}-th command line argument. If @var{VALUE} cannot hold the argument, it is
truncated to fit the length of @var{VALUE}. If there are less than @var{NUMBER}
arguments specified at the command line, @var{VALUE} will be filled with blanks.
If @math{@var{NUMBER} = 0}, @var{VALUE} is set to the name of the program (on
@item -Wdo-subscript
@opindex @code{Wdo-subscript}
Warn if an array subscript inside a DO loop could lead to an
-out-of-bounds access even if the compiler can not prove that the
+out-of-bounds access even if the compiler cannot prove that the
statement is actually executed, in cases like
@smallexample
real a(3)
&& ((mpz_get_si (inquire->unit->value.integer) == GFC_INTERNAL_UNIT4)
|| (mpz_get_si (inquire->unit->value.integer) == GFC_INTERNAL_UNIT)))
{
- gfc_error ("UNIT number in INQUIRE statement at %L can not "
+ gfc_error ("UNIT number in INQUIRE statement at %L cannot "
"be %d", &loc, (int) mpz_get_si (inquire->unit->value.integer));
goto cleanup;
}
}
if (sym->attr.is_bind_c == 1)
- gfc_error_now ("Variable %qs in common block %qs at %C can not "
+ gfc_error_now ("Variable %qs in common block %qs at %C cannot "
"be bind(c) since it is not global", sym->name,
t->name);
}
break;
}
- /* If we find a statement that can not be followed by an IMPLICIT statement
+ /* If we find a statement that cannot be followed by an IMPLICIT statement
(and thus we can expect to see none any further), type the function result
if it has not yet been typed. Be careful not to give the END statement
to verify_st_order! */
in case of association to a derived-type. */
sym->ts = a->target->ts;
- /* Check if the target expression is array valued. This can not always
+ /* Check if the target expression is array valued. This cannot always
be done by looking at target.rank, because that might not have been
set yet. Therefore traverse the chain of refs, looking for the last
array ref and evaluate that. */
gfc_set_sym_referenced (sym);
if (sym->attr.flavor == FL_NAMELIST)
{
- gfc_error ("Namelist %qs can not be an argument at %L",
+ gfc_error ("Namelist %qs cannot be an argument at %L",
sym->name, &where);
break;
}
/* F08:C612. */
if (gfc_peek_ascii_char() == '%')
{
- gfc_error ("The leftmost part-ref in a data-ref can not be a "
+ gfc_error ("The leftmost part-ref in a data-ref cannot be a "
"function reference at %C");
m = MATCH_ERROR;
}
sym->name, &common_root->n.common->where, &sym->declared_at);
if (sym->attr.external)
- gfc_error ("COMMON block %qs at %L can not have the EXTERNAL attribute",
+ gfc_error ("COMMON block %qs at %L cannot have the EXTERNAL attribute",
sym->name, &common_root->n.common->where);
if (sym->attr.intrinsic)
default_case->next = if_st;
}
- /* Resolve the internal code. This can not be done earlier because
+ /* Resolve the internal code. This cannot be done earlier because
it requires that the sym->assoc of selectors is set already. */
gfc_current_ns = ns;
gfc_resolve_blocks (code->block, gfc_current_ns);
return;
}
- /* C_PTR and C_FUNPTR have private components which means they can not
+ /* C_PTR and C_FUNPTR have private components which means they cannot
be printed. However, if -std=gnu and not -pedantic, allow
the component to be printed to help debugging. */
if (ts->u.derived->ts.f90_type == BT_VOID)
for (; formal; formal = formal->next)
if (formal->sym && formal->sym->attr.flavor == FL_NAMELIST)
{
- gfc_error ("Namelist %qs can not be an argument to "
+ gfc_error ("Namelist %qs cannot be an argument to "
"subroutine or function at %L",
formal->sym->name, &sym->declared_at);
return;
&& curr_comp->ts.u.derived->ts.is_iso_c != 1
&& curr_comp->ts.u.derived != derived_sym)
{
- /* This should be allowed; the draft says a derived-type can not
+ /* This should be allowed; the draft says a derived-type cannot
have type parameters if it is has the BIND attribute. Type
parameters seem to be for making parameterized derived types.
There's no need to verify the type if it is c_ptr/c_funptr. */
TREE_TYPE (len), len, tmp);
gfc_add_expr_to_block (&fnblock, tmp);
size = size_of_string_in_bytes (c->ts.kind, len);
- /* This component can not have allocatable components,
+ /* This component cannot have allocatable components,
therefore add_when_allocated of duplicate_allocatable ()
is always NULL. */
tmp = duplicate_allocatable (dcmp, comp, ctype, rank,
gfc_finish_var_decl (length, sym);
gfc_finish_var_decl (addr, sym);
/* STRING_LENGTH is also used as flag. Less than -1 means that
- ASSIGN_ADDR can not be used. Equal -1 means that ASSIGN_ADDR is the
+ ASSIGN_ADDR cannot be used. Equal -1 means that ASSIGN_ADDR is the
target label's address. Otherwise, value is the length of a format string
and ASSIGN_ADDR is its address. */
if (TREE_STATIC (length))
gcc_assert (end != NULL_TREE);
/* Multiply with the product of array's stride and
the step of the ref to a virtual upper bound.
- We can not compute the actual upper bound here or
+ We cannot compute the actual upper bound here or
the caflib would compute the extend
incorrectly. */
end = fold_build2 (MULT_EXPR, gfc_array_index_type,
al_len = se.string_length;
al_len_needs_set = al_len != NULL_TREE;
- /* When allocating an array one can not use much of the
+ /* When allocating an array one cannot use much of the
pre-evaluated expr3 expressions, because for most of them the
scalarizer is needed which is not available in the pre-evaluation
step. Therefore gfc_array_allocate () is responsible (and able)
information in future loop iterations. */
if (tmp_expr3_len_flag)
/* No need to reset tmp_expr3_len_flag, because the
- presence of an expr3 can not change within in the
+ presence of an expr3 cannot change within the
loop. */
expr3_len = NULL_TREE;
}
if (units_array [unit]->automaton_decl
== automaton->corresponding_automaton_decl
&& (cycle >= units_array [unit]->min_occ_cycle_num
- /* We can not remove queried unit from reservations. */
+ /* We cannot remove queried unit from reservations. */
|| units_array [unit]->query_p
- /* We can not remove units which are used
+ /* We cannot remove units which are used
`exclusion_set', `presence_set',
`final_presence_set', `absence_set', and
`final_absence_set'. */
#define STANDARD_OUTPUT_DESCRIPTION_FILE_SUFFIX ".dfa"
/* The function returns suffix of given file name. The returned
- string can not be changed. */
+ string cannot be changed. */
static const char *
file_name_suffix (const char *file_name)
{
/* The function returns base name of given file name, i.e. pointer to
first char after last `/' (or `\' for WIN32) in given file name,
given file name itself if the directory name is absent. The
- returned string can not be changed. */
+ returned string cannot be changed. */
static const char *
base_file_name (const char *file_name)
{
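For reference, both helpers above only inspect the string they are handed; a minimal sketch of the documented behaviour (hypothetical code, not the genautomata implementation, and ignoring the WIN32 `\' case) could look like this:

#include <string.h>

/* Return the suffix of FILE_NAME: a pointer to its last period, or to
   the terminating '\0' when there is no period.  The caller must not
   modify the returned string.  */
static const char *
file_name_suffix_sketch (const char *file_name)
{
  const char *last_period = strrchr (file_name, '.');
  return last_period != NULL ? last_period : file_name + strlen (file_name);
}

/* Return the base name of FILE_NAME: the character just after the last
   '/', or FILE_NAME itself when there is no directory component.  */
static const char *
base_file_name_sketch (const char *file_name)
{
  const char *last_slash = strrchr (file_name, '/');
  return last_slash != NULL ? last_slash + 1 : file_name;
}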
STRIP_USELESS_TYPE_CONVERSION (val);
}
else
- /* We can not use __builtin_unreachable here because it
- can not have address taken. */
+ /* We cannot use __builtin_unreachable here because it
+ cannot have its address taken. */
val = build_int_cst (TREE_TYPE (val), 0);
return val;
}
but don't fold. */
if (maybe_lt (offset, 0))
return NULL_TREE;
- /* We can not determine ctor. */
+ /* We cannot determine ctor. */
if (!ctor)
return NULL_TREE;
return fold_ctor_reference (TREE_TYPE (t), ctor, offset,
/* We do not know precise address. */
if (!known_size_p (max_size) || maybe_ne (max_size, size))
return NULL_TREE;
- /* We can not determine ctor. */
+ /* We cannot determine ctor. */
if (!ctor)
return NULL_TREE;
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * go-backend.c: Mechanically replace "can not" with "cannot".
+ * go-gcc.cc: Likewise.
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
}
/* This is called by the Go frontend proper if the unsafe package was
- imported. When that happens we can not do type-based alias
+ imported. When that happens we cannot do type-based alias
analysis. */
void
NULL_TREE),
false, false);
- // The compiler uses __builtin_unreachable for cases that can not
+ // The compiler uses __builtin_unreachable for cases that cannot
// occur.
this->define_builtin(BUILT_IN_UNREACHABLE, "__builtin_unreachable", NULL,
build_function_type(void_type_node, void_list_node),
if (result == error_mark_node)
return this->error_type();
- // The libffi library can not represent a zero-sized object. To
+ // The libffi library cannot represent a zero-sized object. To
// avoid causing confusion on 32-bit SPARC, we treat a function that
// returns a zero-sized value as returning void. That should do no
// harm since there is no actual value to be returned. See
with the local stack frame are safe, but scant others. */
HARD_REG_SET x_regs_invalidated_by_call;
- /* Call used hard registers which can not be saved because there is no
+ /* Call used hard registers which cannot be saved because there is no
insn for this. */
HARD_REG_SET x_no_caller_save_reg_set;
return FALSE;
/* If the conditional jump is more than just a conditional jump,
- then we can not do conditional execution conversion on this block. */
+ then we cannot do conditional execution conversion on this block. */
if (! onlyjump_p (BB_END (test_bb)))
return FALSE;
goto fail;
/* If the conditional jump is more than just a conditional jump, then
- we can not do conditional execution conversion on this block. */
+ we cannot do conditional execution conversion on this block. */
if (! onlyjump_p (BB_END (bb)))
goto fail;
}
/* If the conditional jump is more than just a conditional
- jump, then we can not do if-conversion on this block. */
+ jump, then we cannot do if-conversion on this block. */
jump = BB_END (test_bb);
if (! onlyjump_p (jump))
return FALSE;
return FALSE;
/* If the conditional jump is more than just a conditional jump, then
- we can not do if-conversion on this block. Give up for returnjump_p,
+ we cannot do if-conversion on this block. Give up for returnjump_p,
changing a conditional return followed by unconditional trap for
conditional trap followed by unconditional return is likely not
beneficial and harder to handle. */
continue;
}
- /* One COMDAT group can not hold both variables and functions at
+ /* One COMDAT group cannot hold both variables and functions at
a same time. For now we just go to BOTTOM, in future we may
invent special comdat groups for this case. */
if (cgraph_node * cn = dyn_cast <cgraph_node *> (symbol2))
{
- /* Thunks can not call across section boundary. */
+ /* Thunks cannot call across section boundary. */
if (cn->thunk.thunk_p)
newgroup = propagate_comdat_group (symbol2, newgroup, map);
/* If we see inline clone, its comdat group actually
/* Mark the symbol so we won't waste time visiting it for dataflow. */
symbol->aux = (symtab_node *) (void *) 1;
}
- /* See symbols that can not be privatized to comdats; that is externally
+ /* See symbols that cannot be privatized to comdats; that is externally
visible symbols or otherwise used ones. We also do not want to mangle
user section names. */
else if (symbol->externally_visible
int *caller_count = (int *) data;
for (cgraph_edge *cs = node->callers; cs; cs = cs->next_caller)
- /* Local thunks can be handled transparently, but if the thunk can not
+ /* Local thunks can be handled transparently, but if the thunk cannot
be optimized out, count it as a real use. */
if (!cs->caller->thunk.thunk_p || !cs->caller->local.local)
++*caller_count;
{
return !flag_ltrans
&& symtab->state >= CONSTRUCTION
- /* We can not always use type_all_derivations_known_p.
+ /* We cannot always use type_all_derivations_known_p.
For function local types we must assume case where
the function is COMDAT and shared in between units.
verbose. */
if (IDENTIFIER_POINTER (n1) != IDENTIFIER_POINTER (n2))
inform (loc_t1,
- "type %qT defined in anonymous namespace can not match "
+ "type %qT defined in anonymous namespace cannot match "
"type %qT across the translation unit boundary",
t1, t2);
else
inform (loc_t1,
- "type %qT defined in anonymous namespace can not match "
+ "type %qT defined in anonymous namespace cannot match "
"across the translation unit boundary",
t1);
if (loc_t2_useful)
|| (type_with_linkage_p (TYPE_MAIN_VARIANT (t2))
&& type_in_anonymous_namespace_p (TYPE_MAIN_VARIANT (t2))))
{
- /* We can not trip this when comparing ODR types, only when trying to
+ /* We cannot trip this when comparing ODR types, only when trying to
match different ODR derivations from different declarations.
So WARN should be always false. */
gcc_assert (!warn);
val->all_derivations_known = type_all_derivations_known_p (type);
for (i = 0; i < BINFO_N_BASE_BINFOS (binfo); i++)
/* For now record only polymorphic types. other are
- pointless for devirtualization and we can not precisely
+ pointless for devirtualization and we cannot precisely
determine ODR equivalency of these during LTO. */
if (polymorphic_type_binfo_p (BINFO_BASE_BINFO (binfo, i)))
{
/* If TARGET has associated node, record it in the NODES array.
CAN_REFER specify if program can refer to the target directly.
- if TARGET is unknown (NULL) or it can not be inserted (for example because
+ if TARGET is unknown (NULL) or it cannot be inserted (for example because
its body was already removed and there is no way to refer to it), clear
COMPLETEP. */
else if (!completep)
;
/* We have definition of __cxa_pure_virtual that is not accessible (it is
- optimized out or partitioned to other unit) so we can not add it. When
+ optimized out or partitioned to other unit) so we cannot add it. When
not sanitizing, there is nothing to do.
Otherwise declare the list incomplete. */
else if (pure_virtual)
ndropped++;
if (dump_file)
fprintf (dump_file, "Dropping polymorphic call info;"
- " it can not be used by ipa-prop\n");
+ " it cannot be used by ipa-prop\n");
}
if (!opt_for_fn (n->decl, flag_devirtualize_speculatively))
copy when called in a given context. It is a bitmask of conditions. Bit
0 means that condition is known to be false, while bit 1 means that condition
may or may not be true. These differs - for example NOT_INLINED condition
- is always false in the second and also builtin_constant_p tests can not use
+ is always false in the second and also builtin_constant_p tests cannot use
the fact that parameter is indeed a constant.
KNOWN_VALS is partial mapping of parameters of NODE to constant values.
struct predicate p = bb_predicate & will_be_nonconstant;
/* We can ignore statement when we proved it is never going
- to happen, but we can not do that for call statements
+ to happen, but we cannot do that for call statements
because edges are accounted specially. */
if (*(is_gimple_call (stmt) ? &bb_predicate : &p) != false)
info->account_size_time (2 * ipa_fn_summary::size_scale, 0, t, t);
ipa_update_overall_fn_summary (node);
info->self_size = info->size;
- /* We can not inline instrumentation clones. */
+ /* We cannot inline instrumentation clones. */
if (node->thunk.add_pointer_bounds_args)
{
info->inlinable = false;
node->local.can_change_signature = true;
else
{
- /* Functions calling builtin_apply can not change signature. */
+ /* Functions calling builtin_apply cannot change signature. */
for (e = node->callees; e; e = e->next_callee)
{
tree cdecl = e->callee->decl;
&& (!used_by || !is_a <cgraph_node *> (used_by) || address
|| opt_for_fn (used_by->decl, flag_devirtualize)))
return return_false_with_msg
- ("references to virtual tables can not be merged");
+ ("references to virtual tables cannot be merged");
if (address && DECL_ALIGN (n1->decl) != DECL_ALIGN (n2->decl))
return return_false_with_msg ("alignment mismatch");
if (original->can_be_discarded_p ())
original_discardable = true;
/* Also consider case where we have resolution info and we know that
- original's definition is not going to be used. In this case we can not
+ original's definition is not going to be used. In this case we cannot
create alias to original. */
if (node->resolution != LDPR_UNKNOWN
&& !decl_binds_to_current_def_p (node->decl))
original_discardable = original_discarded = true;
/* Creating a symtab alias is the optimal way to merge.
- It however can not be used in the following cases:
+ It however cannot be used in the following cases:
1) if ORIGINAL and ALIAS may be possibly compared for address equality.
2) if ORIGINAL is in a section that may be discarded by linker or if
- it is an external functions where we can not create an alias
+ it is an external function where we cannot create an alias
(ORIGINAL_DISCARDABLE)
3) if target do not support symbol aliases.
4) original and alias lie in different comdat groups.
- If we can not produce alias, we will turn ALIAS into WRAPPER of ORIGINAL
+ If we cannot produce alias, we will turn ALIAS into WRAPPER of ORIGINAL
and/or redirect all callers from ALIAS to ORIGINAL. */
if ((original_address_matters && alias_address_matters)
|| (original_discardable
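Case 1 above is the classic obstacle to merging by aliasing: once the addresses of two merged functions can be compared, the language still requires that comparison to distinguish them. A small illustration (hypothetical user code, not from the ICF sources):

/* F and G have identical bodies, so IPA ICF may unify their code, but
   because their addresses are compared below it must not simply turn
   one into an alias of the other: distinct functions must have
   distinct addresses.  */
static int f (int x) { return x + 1; }
static int g (int x) { return x + 1; }

int
addresses_must_differ (void)
{
  return f != g;   /* Required to evaluate to 1 (true).  */
}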
{
if (dump_file)
fprintf (dump_file,
- "can not create wrapper of stdarg function.\n");
+ "cannot create wrapper of stdarg function.\n");
}
else if (ipa_fn_summaries
&& ipa_fn_summaries->get (alias) != NULL
if (!redirect_callers && !create_wrapper)
{
if (dump_file)
- fprintf (dump_file, "Not unifying; can not redirect callers nor "
+ fprintf (dump_file, "Not unifying; cannot redirect callers nor "
"produce wrapper\n\n");
return false;
}
If ORIGINAL is interposable, we need to call a local alias.
Also produce local alias (if possible) as an optimization.
- Local aliases can not be created inside comdat groups because that
+ Local aliases cannot be created inside comdat groups because that
prevents inlining. */
if (!original_discardable && !original->get_comdat_group ())
{
&& original->get_availability () > AVAIL_INTERPOSABLE)
local_original = original;
}
- /* If we can not use local alias, fallback to the original
+ /* If we cannot use local alias, fallback to the original
when possible. */
else if (original->get_availability () > AVAIL_INTERPOSABLE)
local_original = original;
- /* If original is COMDAT local, we can not really redirect calls outside
+ /* If original is COMDAT local, we cannot really redirect calls outside
of its comdat group to it. */
if (original->comdat_local_p ())
redirect_callers = false;
{
if (dump_file)
fprintf (dump_file, "Not unifying; "
- "can not produce local alias.\n\n");
+ "cannot produce local alias.\n\n");
return false;
}
{
if (dump_file)
fprintf (dump_file, "Not unifying; "
- "can not redirect callers nor produce a wrapper\n\n");
+ "cannot redirect callers nor produce a wrapper\n\n");
return false;
}
if (!create_wrapper
&& !alias->can_remove_if_no_direct_calls_p ())
{
if (dump_file)
- fprintf (dump_file, "Not unifying; can not make wrapper and "
+ fprintf (dump_file, "Not unifying; cannot make wrapper and "
"function has other uses than direct calls\n\n");
return false;
}
/* See if original is in a section that can be discarded if the main
symbol is not used.
Also consider case where we have resolution info and we know that
- original's definition is not going to be used. In this case we can not
+ original's definition is not going to be used. In this case we cannot
create alias to original. */
if (original->can_be_discarded_p ()
|| (node->resolution != LDPR_UNKNOWN
return false;
}
- /* We can not merge if address comparsion metters. */
+ /* We cannot merge if address comparison matters. */
if (alias_address_matters && flag_merge_constants < 2)
{
if (dump_file)
/* When doing recursive inlining, the clone may become unnecessary.
This is possible i.e. in the case when the recursive function is proved to be
non-throwing and the recursion happens only in the EH landing pad.
- We can not remove the clone until we are done with saving the body.
+ We cannot remove the clone until we are done with saving the body.
Remove it now. */
if (!first_clone->callers)
{
optimization) and thus improve quality of analysis done by real IPA
optimizers.
- Because of lack of whole unit knowledge, the pass can not really make
+ Because of lack of whole unit knowledge, the pass cannot really make
good code size/performance tradeoffs. It however does very simple
speculative inlining allowing code size to grow by
EARLY_INLINING_INSNS when callee is leaf function. In this case the
{
struct cgraph_node *callee = e->callee->ultimate_alias_target ();
/* Early inliner might get called at WPA stage when IPA pass adds new
- function. In this case we can not really do any of early inlining
+ function. In this case we cannot really do any of early inlining
because function bodies are missing. */
if (cgraph_inline_failed_type (e->inline_failed) == CIF_FINAL_ERROR)
return false;
If the same is produced by multiple inheritance, we end up with A and offset
sizeof(int).
- If we can not find corresponding class, give up by setting
+ If we cannot find corresponding class, give up by setting
THIS->OUTER_TYPE to OTR_TYPE and THIS->OFFSET to NULL.
Return true when lookup was sucesful.
derived from OUTER_TYPE.
Because the instance type may contain field whose type is of OUTER_TYPE,
- we can not derive any effective information about it.
+ we cannot derive any effective information about it.
TODO: In the case we know all derrived types, we can definitely do better
here. */
tree fld;
/* If we do not know size of TYPE, we need to be more conservative
- about accepting cases where we can not find EXPECTED_TYPE.
+ about accepting cases where we cannot find EXPECTED_TYPE.
Generally the types that do matter here are of constant size.
Size_unknown case should be very rare. */
if (TYPE_SIZE (type)
&& type_known_to_have_no_derivations_p (outer_type))
maybe_derived_type = false;
- /* Type can not contain itself on an non-zero offset. In that case
+ /* Type cannot contain itself on a non-zero offset. In that case
just give up. Still accept the case where size is now known.
Either the second copy may appear past the end of type or within
the non-POD buffer located inside the variably sized type
size = tree_to_uhwi (DECL_SIZE (fld));
/* We can always skip types smaller than pointer size:
- those can not contain a virtual table pointer.
+ those cannot contain a virtual table pointer.
Disqualifying fields that are too small to fit OTR_TYPE
saves work needed to walk them for no benefit.
if (DECL_STRUCT_FUNCTION (function)->after_inlining)
return true;
- /* Pure functions can not do any changes on the dynamic type;
+ /* Pure functions cannot do any changes on the dynamic type;
that require writting to memory. */
if ((!base || !auto_var_in_fn_p (base, function))
&& flags_from_decl_or_type (function) & (ECF_PURE | ECF_CONST))
sub-objects and the code written by the user is run. Only this may
include calling virtual functions, directly or indirectly.
- 4) placement new can not be used to change type of non-POD statically
+ 4) placement new cannot be used to change type of non-POD statically
allocated variables.
There is no way to call a constructor of an ancestor sub-object in any
otr_type);
}
-/* Use when we can not track dynamic type change. This speculatively assume
+/* Use when we cannot track dynamic type change. This speculatively assumes
type change is not happening. */
void
struct ipa_propagate_frequency_data d = {node, true, true, true, true};
bool changed = false;
- /* We can not propagate anything useful about externally visible functions
+ /* We cannot propagate anything useful about externally visible functions
nor about virtuals. */
if (!node->local.local
|| node->alias
in between beggining of the function until CALL is invoked.
Generally functions are not allowed to change type of such instances,
- but they call destructors. We assume that methods can not destroy the THIS
+ but they call destructors. We assume that methods cannot destroy the THIS
pointer. Also as a special cases, constructor and destructors may change
type of the THIS pointer. */
static bool
param_type_may_change_p (tree function, tree arg, gimple *call)
{
- /* Pure functions can not do any changes on the dynamic type;
+ /* Pure functions cannot do any changes on the dynamic type;
that require writting to memory. */
if (flags_from_decl_or_type (function) & (ECF_PURE | ECF_CONST))
return false;
}
}
- /* If ARG is pointer, we can not use its type to determine the type of aggregate
+ /* If ARG is a pointer, we cannot use its type to determine the type of aggregate
passed (because type conversions are ignored in gimple). Usually we can
safely get type from function declaration, but in case of K&R prototypes or
variadic functions we can try our luck with type of the pointer passed.
{
if (dump_file)
fprintf (dump_file, "ipa-prop: Discovered call to a known target "
- "(%s -> %s) but can not refer to it. Giving up.\n",
+ "(%s -> %s) but cannot refer to it. Giving up.\n",
ie->caller->dump_name (),
ie->callee->dump_name ());
return NULL;
ipa_check_create_node_params ();
- /* We can not make edges to inline clones. It is bug that someone removed
+ /* We cannot make edges to inline clones. It is a bug that someone removed
the cgraph node too early. */
gcc_assert (!callee->global.inlined_to);
if (!finite_loop_p (loop))
{
if (dump_file)
- fprintf (dump_file, " can not prove finiteness of "
+ fprintf (dump_file, " cannot prove finiteness of "
"loop %i\n", loop->num);
l->looping =true;
break;
/* Process all of the functions.
- We process AVAIL_INTERPOSABLE functions. We can not use the results
+ We process AVAIL_INTERPOSABLE functions. We cannot use the results
by default, but the info can be used at LTO with -fwhole-program or
when function got cloned and the clone is AVAILABLE. */
node = cgraph_node::get (current_function_decl);
- /* We run during lowering, we can not really use availability yet. */
+ /* We run during lowering, we cannot really use availability yet. */
if (cgraph_node::get (current_function_decl)->get_availability ()
<= AVAIL_INTERPOSABLE)
{
if (TREE_READONLY (t))
return true;
- /* We can not track variables with address taken. */
+ /* We cannot track variables with address taken. */
if (TREE_ADDRESSABLE (t))
return true;
{
struct cgraph_edge *e, *ie;
- /* When function is overwritable, we can not assume anything. */
+ /* When function is overwritable, we cannot assume anything. */
if (node->get_availability () <= AVAIL_INTERPOSABLE
|| (node->analyzed && !opt_for_fn (node->decl, flag_ipa_reference)))
read_write_all_from_decl (node, read_all, write_all);
< (ENTRY_BLOCK_PTR_FOR_FN (cfun)->count.apply_scale
(PARAM_VALUE (PARAM_PARTIAL_INLINING_ENTRY_PROBABILITY), 100))))
{
- /* When profile is guessed, we can not expect it to give us
+ /* When profile is guessed, we cannot expect it to give us
realistic estimate on likelyness of function taking the
complex path. As a special case, when tail of the function is
a loop, enable splitting since inlining code skipping the loop
of the form:
<retval> = tmp_var;
return <retval>
- but return_bb can not be more complex than this (except for
+ but return_bb cannot be more complex than this (except for
-fsanitize=thread we allow TSAN_FUNC_EXIT () internal call in there).
If nothing is found, return the exit block.
if (gimple_clobber_p (stmt))
continue;
- /* FIXME: We can split regions containing EH. We can not however
+ /* FIXME: We can split regions containing EH. We cannot however
split RESX, EH_DISPATCH and EH_POINTER referring to same region
into different partitions. This would require tracking of
EH regions and checking in consider_split_point if they
sreal overall_time;
int overall_size;
- /* When false we can not split on this BB. */
+ /* When false we cannot split on this BB. */
bool can_split;
};
if (pos <= entry->earliest && !entry->can_split
&& dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file,
- "found articulation at bb %i but can not split\n",
+ "found articulation at bb %i but cannot split\n",
entry->bb->index);
if (pos <= entry->earliest && entry->can_split)
{
node->tp_first_run = cur_node->tp_first_run + 1;
/* For usual cloning it is enough to clear builtin only when signature
- changes. For partial inlining we however can not expect the part
+ changes. For partial inlining we however cannot expect the part
of builtin implementation to have same semantic as the whole. */
if (fndecl_built_in_p (node->decl))
{
its calling convention. We simply mark all static functions whose
address is not taken as local.
- externally_visible flag is set for symbols that can not be privatized.
+ externally_visible flag is set for symbols that cannot be privatized.
For privatized symbols we clear TREE_PUBLIC flag and dismantle comdat
group.
#include "stringpool.h"
#include "attribs.h"
-/* Return true when NODE can not be local. Worker for cgraph_local_node_p. */
+/* Return true when NODE cannot be local. Worker for cgraph_local_node_p. */
static bool
non_local_p (struct cgraph_node *node, void *data ATTRIBUTE_UNUSED)
&& !flag_whole_program))
return false;
- /* Non-readonly and volatile variables can not be duplicated. */
+ /* Non-readonly and volatile variables cannot be duplicated. */
if (is_a <varpool_node *> (node)
&& (!TREE_READONLY (node->decl)
|| TREE_THIS_VOLATILE (node->decl)))
/* COMDAT functions must be shared only if they have address taken,
otherwise we can produce our own private implementation with
-fwhole-program.
- Return true when turning COMDAT function static can not lead to wrong
+ Return true when turning COMDAT function static cannot lead to wrong
code when the resulting object links with a library defining same COMDAT.
Virtual functions do have their addresses taken from the vtables,
gcc_assert (node->weakref);
- /* Weakrefs with no target defined can not be optimized. */
+ /* Weakrefs with no target defined cannot be optimized. */
if (!node->analyzed)
return;
symtab_node *target = node->get_alias_target ();
}
/* C++ FE on lack of COMDAT support create local COMDAT functions
- (that ought to be shared but can not due to object format
+ (that ought to be shared but cannot due to object format
limitations). It is necessary to keep the flag to make rest of C++ FE
happy. Clear the flag here to avoid confusion in middle-end. */
if (DECL_COMDAT (node->decl) && !TREE_PUBLIC (node->decl))
if (!node->local.local)
node->local.local |= node->local_p ();
- /* If we know that function can not be overwritten by a
- different semantics and moreover its section can not be
+ /* If we know that function cannot be overwritten by a
+ different semantics and moreover its section cannot be
discarded, replace all direct calls by calls to an
noninterposable alias. This make dynamic linking cheaper and
enable more optimization.
|| vnode->weakref
|| TREE_PUBLIC (vnode->decl)
|| DECL_EXTERNAL (vnode->decl));
- /* In several cases declarations can not be common:
+ /* In several cases declarations cannot be common:
- when declaration has initializer
- when it is in weak
Return true when unreachable symbol removal should be done.
- FIXME: This can not be done in between gimplify and omp_expand since
+ FIXME: This cannot be done in between gimplify and omp_expand since
readonly flag plays role on what is shared and what is not. Currently we do
this transformation as part of whole program visibility and re-do at
ipa-reference pass (to take into account clonning), but it would
int ira_loop_tree_height;
/* All nodes representing basic blocks are referred through the
- following array. We can not use basic block member `aux' for this
+ following array. We cannot use basic block member `aux' for this
because it is used for insertion of insns on edges. */
ira_loop_tree_node_t ira_bb_nodes;
struct loop *parent;
ira_loop_tree_node_t loop_node, parent_node;
- /* We can not use loop node access macros here because of potential
+ /* We cannot use loop node access macros here because of potential
checking and because the nodes are not initialized enough
yet. */
if (loop != NULL && loop_outer (loop) != NULL)
struct loop *parent;
ira_loop_tree_node_t bb_node, loop_node;
- /* We can not use loop/bb node access macros because of potential
+ /* We cannot use loop/bb node access macros because of potential
checking and because the nodes are not initialized enough
yet. */
FOR_EACH_BB_FN (bb, cfun)
#ifdef STACK_REGS
/* Return TRUE if LOOP has a complex enter or exit edge. We don't
form a region from such loop if the target use stack register
- because reg-stack.c can not deal with such edges. */
+ because reg-stack.c cannot deal with such edges. */
static bool
loop_with_complex_edge_p (struct loop *loop)
{
/* At this point true value of allocno attribute bad_spill_p means
that there is an insn where allocno occurs and where the allocno
- can not be used as memory. The function updates the attribute, now
- it can be true only for allocnos which can not be used as memory in
+ cannot be used as memory. The function updates the attribute, now
+ it can be true only for allocnos which cannot be used as memory in
an insn and in whose live ranges there is other allocno deaths.
Spilling allocnos with true value will not improve the code because
it will not make other allocnos colorable and additional reloads
/* Set up conflicting (through CONFLICT_REGS) for each object of
allocno A and the start allocno profitable regs (through
START_PROFITABLE_REGS). Remember that the start profitable regs
- exclude hard regs which can not hold value of mode of allocno A.
+ exclude hard regs which cannot hold value of mode of allocno A.
This covers mostly cases when multi-register value should be
aligned. */
static inline void
/* Push pseudos requiring less hard registers first. It means that
we will assign pseudos requiring more hard registers first
avoiding creation small holes in free hard register file into
- which the pseudos requiring more hard registers can not fit. */
+ which the pseudos requiring more hard registers cannot fit. */
if ((diff = (ira_reg_class_max_nregs[cl1][ALLOCNO_MODE (a1)]
- ira_reg_class_max_nregs[cl2][ALLOCNO_MODE (a2)])) != 0)
return diff;
if (! IN_RANGE (allocno_preferenced_hard_regno,
0, FIRST_PSEUDO_REGISTER - 1))
- /* Can not be tied. */
+ /* Cannot be tied. */
return false;
rclass = REGNO_REG_CLASS (allocno_preferenced_hard_regno);
mode = ALLOCNO_MODE (a);
return false;
index = ira_class_hard_reg_index[aclass][allocno_preferenced_hard_regno];
if (index < 0)
- /* Can not be tied. It is not in the allocno class. */
+ /* Cannot be tied. It is not in the allocno class. */
return false;
ira_init_register_move_cost_if_necessary (mode);
if (HARD_REGISTER_P (reg1))
/* Setup cost classes for pseudo REGNO with MODE. Usage of MODE can
decrease number of cost classes for the pseudo, if hard registers
- of some important classes can not hold a value of MODE. So the
- pseudo can not get hard register of some important classes and cost
+ of some important classes cannot hold a value of MODE. So the
+ pseudo cannot get hard register of some important classes and cost
calculation for such important classes is only wasting CPU
time. */
static void
/* Allocnos in the loop corresponding to their regnos. If it is
NULL the loop does not form a separate register allocation region
- (e.g. because it has abnormal enter/exit edges and we can not put
+ (e.g. because it has abnormal enter/exit edges and we cannot put
code for register shuffling on the edges if a different
allocation is used for a pseudo-register on different sides of
the edges). Caps are not in the map (remember we can have more
extern int ira_loop_tree_height;
/* All nodes representing basic blocks are referred through the
- following array. We can not use basic block member `aux' for this
+ following array. We cannot use basic block member `aux' for this
because it is used for insertion of insns on edges. */
extern ira_loop_tree_node_t ira_bb_nodes;
of other ira_objects that this one can conflict with. */
int min, max;
/* Initial and accumulated hard registers conflicting with this
- object and as a consequences can not be assigned to the allocno.
+ object and as a consequence cannot be assigned to the allocno.
All non-allocatable hard regs and hard regs of register classes
different from given allocno one are included in the sets. */
HARD_REG_SET conflict_hard_regs, total_conflict_hard_regs;
struct costs *x_op_costs[MAX_RECOG_OPERANDS];
struct costs *x_this_op_costs[MAX_RECOG_OPERANDS];
- /* Hard registers that can not be used for the register allocator for
+ /* Hard registers that cannot be used for the register allocator for
all functions of the current compilation unit. */
HARD_REG_SET x_no_unit_alloc_regs;
/* We should create a conflict of PIC pseudo with
PIC hard reg as PIC hard reg can have a wrong
value after jump described by the abnormal edge.
- In this case we can not allocate PIC hard reg to
+ In this case we cannot allocate PIC hard reg to
PIC pseudo as PIC pseudo will also have a wrong
value. This code is not critical as LRA can fix
it but it is better to have the right allocation
more profitable than memory usage.
* Popping the allocnos from the stack and assigning them hard
- registers. If IRA can not assign a hard register to an
+ registers. If IRA cannot assign a hard register to an
allocno and the allocno is coalesced, IRA undoes the
coalescing and puts the uncoalesced allocnos onto the stack in
the hope that some such allocnos will get a hard register
ira_uniform_class_p[cl] = false;
if (ira_class_hard_regs_num[cl] == 0)
continue;
- /* We can not use alloc_reg_class_subclasses here because move
+ /* We cannot use alloc_reg_class_subclasses here because move
cost hooks does not take into account that some registers are
unavailable for the subtarget. E.g. for i686, INT_SSE_REGS
is element of alloc_reg_class_subclasses for GENERAL_REGS
IRA_IMPORTANT_CLASSES, and IRA_IMPORTANT_CLASSES_NUM.
Target may have many subtargets and not all target hard registers can
- be used for allocation, e.g. x86 port in 32-bit mode can not use
+ be used for allocation, e.g. x86 port in 32-bit mode cannot use
hard registers introduced in x86-64 like r8-r15). Some classes
might have the same allocatable hard registers, e.g. INDEX_REGS
and GENERAL_REGS in x86 port in 32-bit mode. To decrease different
rtx_insn_list *list;
/* If we have already processed this pseudo and determined it
- can not have an equivalence, then honor that decision. */
+ cannot have an equivalence, then honor that decision. */
if (reg_equiv[regno].no_equiv)
continue;
if (ira_conflicts_p && ! ira_use_lra_p)
/* Opposite to reload pass, LRA does not use any conflict info
from IRA. We don't rebuild conflict info for LRA (through
- ira_flattening call) and can not use the check here. We could
+ ira_flattening call) and cannot use the check here. We could
rebuild this info for LRA in the check mode but there is a risk
that code generated with the check and without it will be a bit
different. Calling ira_flattening in any mode would be a
index [CL][M] gives the number of that register, otherwise it is -1. */
short x_ira_class_singleton[N_REG_CLASSES][MAX_MACHINE_MODE];
- /* Function specific hard registers can not be used for the register
+ /* Function specific hard registers cannot be used for the register
allocation. */
HARD_REG_SET x_ira_no_alloc_regs;
return true;
}
-/* Pre-check candidate DEST to skip the one which can not make a valid insn
+/* Pre-check candidate DEST to skip the one which cannot make a valid insn
during move_invariant_reg. SIMPLE is to skip HARD_REGISTER. */
static bool
pre_check_invariant_p (bool simple, rtx dest)
This usually has the effect that FP constant loads from the constant
pool are not moved out of the loop.
- Note that this also means that dependent invariants can not be moved.
+ Note that this also means that dependent invariants cannot be moved.
However, the primary purpose of this pass is to move loop invariant
address arithmetic out of loops, and address arithmetic that depends
on floating point constants is unlikely to ever occur. */
}
/* Hmm, this is a bit paradoxical. We know that INSN is a valid insn
- in MD. But if there is no optab to generate the insn, we can not
+ in MD. But if there is no optab to generate the insn, we cannot
perform the variable expansion. This can happen if an MD provides
an insn but not a named pattern to generate it, for example to avoid
producing code that needs additional mode switches like for x87/mmx.
static bitmap decomposable_context;
/* Bit N in this bitmap is set if regno N is used in a context in
- which it can not be decomposed. */
+ which it cannot be decomposed. */
static bitmap non_decomposable_context;
/* Bit N in this bitmap is set if regno N is used in a subreg
the register.
If this is not a simple copy from one location to another,
- then we can not decompose this register. If this is a simple
+ then we cannot decompose this register. If this is a simple
copy we want to decompose, and the mode is right,
then we mark the register as decomposable.
Otherwise we don't say anything about this register --
/* It's possible for the code to use a subreg of a decomposed
register while forming an address. We need to handle that before
passing the address to emit_move_insn. We pass NULL_RTX as the
- insn parameter to resolve_subreg_use because we can not validate
+ insn parameter to resolve_subreg_use because we cannot validate
the insn yet. */
if (MEM_P (src) || MEM_P (dest))
{
lra_reg_info[conflict_regno].biggest_mode);
/* Remember about multi-register pseudos. For example, 2
hard register pseudos can start on the same hard register
- but can not start on HR and HR+1/HR-1. */
+ but cannot start on HR and HR+1/HR-1. */
for (hr = conflict_hr + 1;
hr < FIRST_PSEUDO_REGISTER && hr < conflict_hr + nregs;
hr++)
PSEUDO_REGNO_MODE (regno), hard_regno)
&& targetm.hard_regno_mode_ok (hard_regno,
PSEUDO_REGNO_MODE (regno))
- /* We can not use prohibited_class_mode_regs for all classes
+ /* We cannot use prohibited_class_mode_regs for all classes
because it is not defined for all classes. */
&& (ira_allocno_class_translate[rclass] != rclass
|| ! TEST_HARD_REG_BIT (ira_prohibited_class_mode_regs
{
int i, hr;
- /* We can not just reassign hard register. */
+ /* We cannot just reassign hard register. */
lra_assert (hard_regno < 0 || reg_renumber[regno] < 0);
if ((hr = hard_regno) < 0)
hr = reg_renumber[regno];
fails_p = nfails != 0;
break;
}
- /* This is a very rare event. We can not assign a hard register
+ /* This is a very rare event. We cannot assign a hard register
to reload pseudo because the hard register was assigned to
another reload pseudo on a previous assignment pass. For x86
example, on the 1st pass we assigned CX (although another
operand ("a"). "b" must then be copied into a new register
so that it doesn't clobber the current value of "a".
- We can not use the same value if the output pseudo is
+ We cannot use the same value if the output pseudo is
early clobbered or the input pseudo is mentioned in the
output, e.g. as an address part in memory, because
output reload will actually extend the pseudo liveness.
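The constraint being described is the same one a user-level earlyclobber creates: the output is written while the input is still needed, so the two must not share a register. A hedged illustration using x86 inline asm (an analogy, not LRA internals):

/* The earlyclobber "=&r" tells the allocator that A is written before
   B is last read, so B may not share A's register; without the '&' the
   first instruction could destroy B.  */
static unsigned int
scaled_by_three (unsigned int b)
{
  unsigned int a;
  asm ("movl %1, %0\n\t"
       "addl %1, %0\n\t"
       "addl %1, %0"
       : "=&r" (a)
       : "r" (b));
  return a;
}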
/* Major function to choose the current insn alternative and what
operands should be reloaded and how. If ONLY_ALTERNATIVE is not
negative we should consider only this alternative. Return false if
- we can not choose the alternative or find how to reload the
+ we cannot choose the alternative or find how to reload the
operands. */
static bool
process_alt_operands (int only_alternative)
goto fail;
}
- /* Alternative loses if it required class pseudo can not
+ /* Alternative loses if its required class pseudo cannot
hold value of required mode. Such insns can be
described by insn definitions with mode iterators. */
if (GET_MODE (*curr_id->operand_loc[nop]) != VOIDmode
if (lra_dump_file != NULL)
fprintf (lra_dump_file,
" alt=%d: reload pseudo for op %d "
- " can not hold the mode value -- refuse\n",
+ " cannot hold the mode value -- refuse\n",
nalt, nop);
goto fail;
}
if (sec_mode != rld_mode)
{
/* If the target says specifically to use another mode for
- secondary memory moves we can not reuse the original
+ secondary memory moves we cannot reuse the original
insn. */
after = emit_spill_move (false, new_reg, dest);
lra_process_new_insns (curr_insn, NULL, after,
{
bool pseudo_p = contains_reg_p (x, false, false);
- /* After RTL transformation, we can not guarantee that
+ /* After RTL transformation, we cannot guarantee that
pseudo in the substitution was not reloaded which might
make equivalence invalid. For example, in reverse
equiv of p0
|| (! reverse_equiv_p (i)
&& (init_insn_rhs_dead_pseudo_p (i)
/* If we reloaded the pseudo in an equivalence
- init insn, we can not remove the equiv init
+ init insn, we cannot remove the equiv init
insns and the init insns might write into
const memory in this case. */
|| contains_reloaded_insn_p (i)))
if ((REG_P (dest_reg)
&& (x = get_equiv (dest_reg)) != dest_reg
/* Remove insns which set up a pseudo whose value
- can not be changed. Such insns might be not in
+ cannot be changed. Such insns might not be in
init_insns because we don't update equiv data
during insn transformations.
(inheritance/split pseudos and original registers). */
static bitmap_head check_only_regs;
-/* Reload pseudos can not be involded in invariant inheritance in the
+/* Reload pseudos cannot be involved in invariant inheritance in the
current EBB. */
static bitmap_head invalid_invariant_regs;
/* Don't do inheritance if the pseudo is also
used in the insn. */
if (r == NULL)
- /* We can not do inheritance right now
+ /* We cannot do inheritance right now
because the current insn reg info (chain
regs) can change after that. */
add_to_inherit (dst_regno, next_usage_insns);
}
- /* We can not process one reg twice here because of
+ /* We cannot process one reg twice here because of
usage_insns invalidation. */
if ((dst_regno < FIRST_PSEUDO_REGISTER
|| reg_renumber[dst_regno] >= 0)
bool change_p, done_p;
change_p = ! bitmap_empty_p (remove_pseudos);
- /* We can not finish the function right away if CHANGE_P is true
+ /* We cannot finish the function right away if CHANGE_P is true
because we need to marks insns affected by previous
inheritance/split pass for processing by the subsequent
constraint pass. */
{
/* reload pseudo <- invariant inheritance pseudo */
start_sequence ();
- /* We can not just change the source. It might be
+ /* We cannot just change the source. It might be
an insn different from the move. */
emit_insn (lra_reg_info[sregno].restore_rtx);
rtx_insn *new_insns = get_insns ();
{
/* If we are assigning to a hard register that can be
eliminated, it must be as part of a PARALLEL, since
- the code above handles single SETs. This reg can not
+ the code above handles single SETs. This reg cannot
be longer eliminated -- it is forced by
mark_not_eliminable. */
for (ep = reg_eliminate;
" Elimination %d to %d is not possible anymore\n",
ep->from, ep->to);
/* If after processing RTL we decides that SP can be used as
- a result of elimination, it can not be changed. */
+ a result of elimination, it cannot be changed. */
gcc_assert ((ep->to_rtx != stack_pointer_rtx)
|| (ep->from < FIRST_PSEUDO_REGISTER
&& fixed_regs [ep->from]));
/* We should create a conflict of PIC pseudo with PIC
hard reg as PIC hard reg can have a wrong value after
jump described by the abnormal edge. In this case we
- can not allocate PIC hard reg to PIC pseudo as PIC
+ cannot allocate PIC hard reg to PIC pseudo as PIC
pseudo will also have a wrong value. */
|| (px == REAL_PIC_OFFSET_TABLE_REGNUM
&& pic_offset_table_rtx != NULL_RTX
\f
-/* Return true if X contains memory or some UNSPEC. We can not just
+/* Return true if X contains memory or some UNSPEC. We cannot just
check insn operands as memory or unspec might be not an operand
itself but contain an operand. Insn with memory access is not
profitable for rematerialization. Rematerialization of UNSPEC
return false;
}
-/* If INSN can not be used for rematerialization, return negative
+/* If INSN cannot be used for rematerialization, return negative
value. If INSN can be considered as a candidate for
rematerialization, return value which is the operand number of the
pseudo for which the insn can be used for rematerialization. Here
/* First find a pseudo which can be rematerialized. */
for (reg = id->regs; reg != NULL; reg = reg->next)
{
- /* True FRAME_POINTER_NEEDED might be because we can not follow
+ /* True FRAME_POINTER_NEEDED might be because we cannot follow
changing sp offsets, e.g. alloca is used. If the insn contains
- stack pointer in such case, we can not rematerialize it as we
- can not know sp offset at a rematerialization place. */
+ stack pointer in such case, we cannot rematerialize it as we
+ cannot know sp offset at a rematerialization place. */
if (reg->regno == STACK_POINTER_REGNUM && frame_pointer_needed)
return -1;
else if (reg->type == OP_OUT && ! reg->subreg_p
basic_block bb;
HARD_REG_SET conflict_hard_regs;
bitmap setjump_crosses = regstat_get_setjmp_crosses ();
- /* Hard registers which can not be used for any purpose at given
+ /* Hard registers which cannot be used for any purpose at given
program point because they are unallocatable or already allocated
for other pseudos. */
HARD_REG_SET *reserved_hard_regs;
}
if (curr == NULL)
{
- /* This is a new hard regno or the info can not be
+ /* This is a new hard regno or the info cannot be
integrated into the found structure. */
#ifdef STACK_REGS
early_clobber
if (curr->regno == regno)
{
if (curr->subreg_p != subreg_p || curr->biggest_mode != mode)
- /* The info can not be integrated into the found
+ /* The info cannot be integrated into the found
structure. */
data->regs = new_insn_reg (data->insn, regno, type, mode,
subreg_p, early_clobber,
if (boundary_p && node->analyzed
&& node->get_partitioning_class () == SYMBOL_PARTITION)
{
- /* Inline clones can not be part of boundary.
+ /* Inline clones cannot be part of boundary.
gcc_assert (!node->global.inlined_to);
FIXME: At the moment they can be, when partition contains an inline
/* TYPE_CANONICAL is re-computed during type merging, so no need
to follow it here. */
/* Do not stream TYPE_STUB_DECL; it is not needed by LTO but currently
- it can not be freed by free_lang_data without triggering ICEs in
+ it cannot be freed by free_lang_data without triggering ICEs in
langhooks. */
}
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * lto-partition.c: Mechanically replace "can not" with "cannot".
+ * lto-symtab.c: Likewise.
+ * lto.c: Likewise.
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
{
symtab_node *node1;
- /* Verify that we do not try to duplicate something that can not be. */
+ /* Verify that we do not try to duplicate something that cannot be. */
gcc_checking_assert (node->get_partitioning_class () == SYMBOL_DUPLICATE
|| !symbol_partitioned_p (node));
while ((node1 = contained_in_symbol (node)) != node)
node = node1;
- /* If we have duplicated symbol contained in something we can not duplicate,
+ /* If we have duplicated symbol contained in something we cannot duplicate,
we are very badly screwed. The other way is possible, so we do not
assert this in add_symbol_to_partition_1.
return;
/* Now walk symbols sharing the same name and see if there are any conflicts.
- (all types of symbols counts here, since we can not have static of the
+ (all types of symbols count here, since we cannot have static of the
same name as external or public symbol.) */
for (s = symtab_node::get_for_asmname (name);
s; s = s->next_sharing_asm_name)
prevailing_type = TYPE_MAIN_VARIANT (prevailing_type);
type = TYPE_MAIN_VARIANT (type);
- /* We can not use types_compatible_p because we permit some changes
+ /* We cannot use types_compatible_p because we permit some changes
across types. For example unsigned size_t and "signed size_t" may be
compatible when merging C and Fortran types. */
if (COMPLETE_TYPE_P (prevailing_type)
enum tree_code code;
/* We compute alias sets only for types that needs them.
- Be sure we do not recurse to something else as we can not hash incomplete
+ Be sure we do not recurse to something else as we cannot hash incomplete
types in a way they would have same hash value as compatible complete
types. */
gcc_checking_assert (type_with_alias_set_p (type));
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * objc-act.c: Mechanically replace "can not" with "cannot".
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
if (TREE_CODE (TREE_TYPE (decl)) == ARRAY_TYPE)
{
- error_at (location, "property can not be an array");
+ error_at (location, "property cannot be an array");
return;
}
describe a pair of accessor methods, so its type (which is
the type of the return value of the getter and the first
argument of the setter) can't be a bitfield (as return values
- and arguments of functions can not be bitfields). The
+ and arguments of functions cannot be bitfields). The
underlying instance variable could be a bitfield, but that is
a different matter. */
- error_at (location, "property can not be a bit-field");
+ error_at (location, "property cannot be a bit-field");
return;
}
#endif
if (PROPERTY_READONLY (property_decl))
{
- error ("readonly property can not be set");
+ error ("readonly property cannot be set");
return error_mark_node;
}
else
#endif
if (attributes)
- warning_at (input_location, 0, "method attributes can not be specified in @implementation context");
+ warning_at (input_location, 0, "method attributes cannot be specified in @implementation context");
else
objc_decl_method_attributes (&decl, attributes, 0);
else if (TYPE_HAS_OBJC_INFO (TREE_TYPE (type))
&& TYPE_OBJC_PROTOCOL_LIST (TREE_TYPE (type)))
{
- error ("@catch parameter can not be protocol-qualified");
+ error ("@catch parameter cannot be protocol-qualified");
type = error_mark_node;
}
else if (POINTER_TYPE_P (type) && objc_is_object_id (TREE_TYPE (type)))
TREE_VALUE (type) = objc_object_type;
else if (TREE_CODE (TREE_VALUE (type)) == RECORD_TYPE
&& TYPED_OBJECT (TREE_VALUE (type)))
- error ("can not use an object as parameter to a method");
+ error ("cannot use an object as parameter to a method");
return type;
}
{
/* This should never happen. */
error_at (location,
- "can not find instance variable associated with property");
+ "cannot find instance variable associated with property");
ret_val = error_mark_node;
break;
}
if (!ivar || is_private (ivar))
{
error_at (location,
- "can not find instance variable associated with property");
+ "cannot find instance variable associated with property");
statement = error_mark_node;
break;
}
if (TREE_CODE (objc_implementation_context) == CATEGORY_IMPLEMENTATION_TYPE)
{
- error_at (location, "%<@synthesize%> can not be used in categories");
+ error_at (location, "%<@synthesize%> cannot be used in categories");
return;
}
increase the number of redundant loads found. So compute transparency
information for each memory expression in the hash table. */
df_analyze ();
- /* This can not be part of the normal allocation routine because
+ /* This cannot be part of the normal allocation routine because
we have to know the number of elements in the hash table. */
transp = sbitmap_vector_alloc (last_basic_block_for_fn (cfun),
expr_table->elements ());
return false;
}
-/* We can not predict the probabilities of outgoing edges of bb. Set them
+/* We cannot predict the probabilities of outgoing edges of bb. Set them
evenly and hope for the best. If UNLIKELY_EDGES is not null, distribute
even probability for all edges not mentioned in the set. These edges
are given PROB_VERY_UNLIKELY probability. Similarly for LIKELY_EDGES,
we are not 100% sure.
This function locally updates profile without attempt to keep global
- consistency which can not be reached in full generality without full profile
+ consistency which cannot be reached in full generality without full profile
rebuild from probabilities alone. Doing so is not necessarily a good idea
because frequencies and counts may be more realistic then probabilities.
{
if (impossible)
e->probability = profile_probability::never ();
- /* If BB has some edges out that are not impossible, we can not
+ /* If BB has some edges out that are not impossible, we cannot
assume that BB itself is. */
impossible = false;
}
/* Uninitialized value. */
profile_uninitialized,
/* Profile is based on static branch prediction heuristics and may
- or may not match reality. It is local to function and can not be compared
+ or may not match reality. It is local to function and cannot be compared
inter-procedurally. Never used by probabilities (they are always local).
*/
profile_guessed_local,
}
/* Comparsions are three-state and conservative. False is returned if
- the inequality can not be decided. */
+ the inequality cannot be decided. */
bool operator< (const profile_probability &other) const
{
return initialized_p () && other.initialized_p () && m_val < other.m_val;
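The conservative three-state convention is easier to see in isolation: when either operand is uninitialized, neither a < b nor b < a holds. A self-contained sketch with a made-up type (not profile_probability itself):

/* A value that may be unknown; "<" deliberately answers false whenever
   the ordering cannot be decided.  */
struct maybe_count
{
  bool known;
  unsigned long val;

  bool operator< (const maybe_count &other) const
  {
    return known && other.known && val < other.val;
  }
};

static bool
three_state_example (void)
{
  maybe_count a = { false, 0 };  /* unknown */
  maybe_count b = { true, 5 };   /* known value 5 */
  return (a < b) || (b < a);     /* both comparisons are false */
}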
was never run in train feedback) but they hold local static profile
estimate.
- Counters of type 1 and 3 can not be mixed with counters of different type
+ Counters of type 1 and 3 cannot be mixed with counters of different type
within operation (because whole function should use one type of counter)
with exception that global zero mix in most operations where outcome is
well defined.
}
/* Comparsions are three-state and conservative. False is returned if
- the inequality can not be decided. */
+ the inequality cannot be decided. */
bool operator< (const profile_count &other) const
{
if (!initialized_p () || !other.initialized_p ())
various transformations. */
if (thunk)
{
- /* At stream in time we do not have CFG, so we can not do checksums. */
+ /* At stream in time we do not have CFG, so we cannot do checksums. */
cfg_checksum = 0;
lineno_checksum = 0;
}
If that happens and INSN was the last reference to the
given EH region, then the EH region will become unreachable.
- We can not leave the unreachable blocks in the CFG as that
+ We cannot leave the unreachable blocks in the CFG as that
will trigger a checking failure.
So track if INSN has a REG_EH_REGION note. If so and we
/* Fourth, if the extended version occupies more registers than the
original and the source of the extension is the same hard register
- as the destination of the extension, then we can not eliminate
+ as the destination of the extension, then we cannot eliminate
the extension without deep analysis, so just punt.
We allow this when the registers are different because the
The convention is that secondary input reloads are valid only if the
secondary_class is different from class. If you have such a case, you
- can not use secondary reloads, you must work around the problem some
+ cannot use secondary reloads, you must work around the problem some
other way.
Allow this when a reload_in/out pattern is being used. I.e. assume
|| GET_RTX_CLASS (GET_CODE (x)) == RTX_AUTOINC)
x = XEXP (x, 0);
- /* If either argument is a constant, then modifying X can not affect IN. */
+ /* If either argument is a constant, then modifying X cannot affect IN. */
if (CONSTANT_P (x) || CONSTANT_P (in))
return 0;
else if (GET_CODE (x) == SUBREG && MEM_P (SUBREG_REG (x)))
RELOADNUM is the number of the reload we want to load this value for;
a reload does not conflict with itself.
- When IGNORE_ADDRESS_RELOADS is set, we can not have conflicts with
+ When IGNORE_ADDRESS_RELOADS is set, we cannot have conflicts with
reloads that load an address for the very reload we are considering.
The caller has to make sure that there is no conflict with the return
emit_insn (gen_rtx_SET (out, in));
/* Return the first insn emitted.
- We can not just return get_last_insn, because there may have
+ We cannot just return get_last_insn, because there may have
been multiple instructions emitted. Also note that gen_move_insn may
- emit more than one insn itself, so we can not assume that there is one
+ emit more than one insn itself, so we cannot assume that there is one
insn emitted per emit_insn_before call. */
return last ? NEXT_INSN (last) : get_insns ();
??? It may be possible to move other sets into INSN in addition to
moving the instructions in the delay slots.
- We can not steal the delay list if one of the instructions in the
+ We cannot steal the delay list if one of the instructions in the
current delay_list modifies the condition codes and the jump in the
- sequence is a conditional jump. We can not do this because we can
- not change the direction of the jump because the condition codes
+ sequence is a conditional jump. We cannot do this because we
+ cannot change the direction of the jump because the condition codes
will effect the direction of the jump in the sequence. */
res->volatil |= MEM_VOLATILE_P (x);
/* For all ASM_OPERANDS, we must traverse the vector of input operands.
- We can not just fall through here since then we would be confused
+ We cannot just fall through here since then we would be confused
by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
traditional asms unlike their normal usage. */
res->volatil |= MEM_VOLATILE_P (x);
/* For all ASM_OPERANDS, we must traverse the vector of input operands.
- We can not just fall through here since then we would be confused
+ We cannot just fall through here since then we would be confused
by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
traditional asms unlike their normal usage. */
For example, subroutine calls will use the register
in which the static chain is passed.
- USE can not appear as an operand of other rtx except for PARALLEL.
+ USE cannot appear as an operand of other rtx except for PARALLEL.
USE is not deletable, as it indicates that the operand
is used in some unknown way. */
DEF_RTL_EXPR(USE, "use", "e", RTX_EXTRA)
For example, subroutine calls will clobber some physical registers
(the ones that are by convention not saved).
- CLOBBER can not appear as an operand of other rtx except for PARALLEL.
+ CLOBBER cannot appear as an operand of other rtx except for PARALLEL.
CLOBBER of a hard register appearing by itself (not within PARALLEL)
is considered undeletable before reload. */
DEF_RTL_EXPR(CLOBBER, "clobber", "e", RTX_EXTRA)
operand 2 counts from the msb of the memory unit.
Otherwise, the first bit is the lsb and operand 2 counts from
the lsb of the memory unit.
- This kind of expression can not appear as an lvalue in RTL. */
+ This kind of expression cannot appear as an lvalue in RTL. */
DEF_RTL_EXPR(SIGN_EXTRACT, "sign_extract", "eee", RTX_BITFIELD_OPS)
/* Similar for unsigned bit-field.
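For the common BITS_BIG_ENDIAN == 0 convention (operand 2 counts from the lsb), the value produced by (sign_extract:SI x len pos) can be modelled in plain C; this is only an illustration of the documented semantics, not code from the RTL machinery:

/* Extract LEN bits of X starting at bit POS (counted from the lsb) and
   sign-extend the result, for 1 <= LEN <= 32 and POS + LEN <= 32.  */
static int
sign_extract_si (unsigned int x, int len, int pos)
{
  unsigned int mask = (len < 32 ? (1u << len) : 0u) - 1u;
  unsigned int field = (x >> pos) & mask;
  unsigned int sign_bit = 1u << (len - 1);
  return (int) ((field ^ sign_bit) - sign_bit);
}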
DEF_RTL_EXPR(DEFINE_QUERY_CPU_UNIT, "define_query_cpu_unit", "sS", RTX_EXTRA)
/* (exclusion_set string string) means that each CPU functional unit
- in the first string can not be reserved simultaneously with any
+ in the first string cannot be reserved simultaneously with any
unit whose name is in the second string and vise versa. CPU units
in the string are separated by commas. For example, it is useful
for description CPU with fully pipelined floating point functional
DEF_RTL_EXPR(EXCLUSION_SET, "exclusion_set", "ss", RTX_EXTRA)
/* (presence_set string string) means that each CPU functional unit in
- the first string can not be reserved unless at least one of pattern
+ the first string cannot be reserved unless at least one of pattern
of units whose names are in the second string is reserved. This is
an asymmetric relation. CPU units or unit patterns in the strings
are separated by commas. Pattern is one unit name or unit names
are separated by commas. Pattern is one unit name or unit names
separated by white-spaces.
- For example, it is useful for description that slot0 can not be
+ For example, it is useful for description that slot0 cannot be
reserved after slot1 or slot2 reservation for a VLIW processor. We
could describe it by the following construction
(absence_set "slot2" "slot0, slot1")
- Or slot2 can not be reserved if slot0 and unit b0 are reserved or
+ Or slot2 cannot be reserved if slot0 and unit b0 are reserved or
slot1 and unit b1 are reserved. In this case we could write
(absence_set "slot2" "slot0 b0, slot1 b1")
are passed to the function.
CLOBBER expressions document the registers explicitly clobbered
by this CALL_INSN.
- Pseudo registers can not be mentioned in this list. */
+ Pseudo registers cannot be mentioned in this list. */
#define CALL_INSN_FUNCTION_USAGE(INSN) XEXP(INSN, 7)
/* The label-number of a code-label. The assembler label
{
unsigned int regno, endregno;
- /* If either argument is a constant, then modifying X can not
+ /* If either argument is a constant, then modifying X cannot
affect IN. Here we look at IN, we can profitably combine
CONSTANT_P (x) with the switch statement below. */
if (CONSTANT_P (in))
reg_pending_barrier = TRUE_BARRIER;
/* For all ASM_OPERANDS, we must traverse the vector of input operands.
- We can not just fall through here since then we would be confused
+ We cannot just fall through here since then we would be confused
by the ASM_INPUT rtx inside ASM_OPERANDS, which do not indicate
traditional asms unlike their normal usage. */
&& (ds & DEP_CONTROL)
&& !(ds & (DEP_OUTPUT | DEP_ANTI | DEP_TRUE)));
- /* HARD_DEP can not appear in dep_status of a link. */
+ /* HARD_DEP cannot appear in dep_status of a link. */
gcc_assert (!(ds & HARD_DEP));
/* Check that dependence status is set correctly when speculation is not
case PRISKY_CANDIDATE:
/* ??? We could implement better checking PRISKY_CANDIDATEs
analogous to sched-rgn.c. */
- /* We can not change the mode of the backward
+ /* We cannot change the mode of the backward
dependency because REG_DEP_ANTI has the lowest
rank. */
if (! sched_insns_conditions_mutex_p (insn, prev))
The algorithm in the DFS traversal may not mark B & D as part
of the loop (i.e. they will not have max_hdr set to A).
- We know they can not be loop latches (else they would have
+ We know they cannot be loop latches (else they would have
had max_hdr set since they'd have a backedge to a dominator
block). So we don't need them on the initial queue.
if (!(e->flags & EDGE_FALLTHRU))
{
- /* We can not invalidate computed topological order by moving
+ /* We cannot invalidate computed topological order by moving
the edge destination block (E->SUCC) along a fallthru edge.
We will update dominators here only when we'll get
ORIGINAL_INSNS list.
REG_RENAME_P denotes the set of hardware registers that
- can not be used with renaming due to the register class restrictions,
+ cannot be used with renaming due to the register class restrictions,
mode restrictions and other (the register we'll choose should be
compatible class with the original uses, shouldn't be in call_used_regs,
should be HARD_REGNO_RENAME_OK etc).
bool insn_emitted = false;
rtx cur_reg;
- /* Bail out early when expression can not be renamed at all. */
+ /* Bail out early when expression cannot be renamed at all. */
if (!EXPR_SEPARABLE_P (params->c_expr))
return false;
dead_debug_local_finish (&debug, NULL);
}
-/* Return whether basic block PRO can get the prologue. It can not if it
+/* Return whether basic block PRO can get the prologue. It cannot if it
has incoming complex edges that need a prologue inserted (we make a new
block for the prologue, so those edges would need to be redirected, which
- does not work). It also can not if there exist registers live on entry
+ does not work). It also cannot if there exist registers live on entry
to PRO that are clobbered by the prologue. */
static bool
/* Propagate original regno. We don't have any way to specify
the offset inside original regno, so do so only for lowpart.
- The information is used only by alias analysis that can not
+ The information is used only by alias analysis that cannot
grok partial register anyway. */
if (known_eq (subreg_lowpart_offset (outermode, innermode), byte))
return false;
}
-/* If node can not be overwriten by static or dynamic linker to point to
+/* If node cannot be overwritten by static or dynamic linker to point to
different definition, return NODE. Otherwise look for alias with such
property and if none exists, introduce new one. */
{
if (alias && definition && !ultimate_alias_target ()->definition)
return SYMBOL_EXTERNAL;
- /* Constant pool references use local symbol names that can not
+ /* Constant pool references use local symbol names that cannot
be promoted global. We should never put into a constant pool
- objects that can not be duplicated across partitions. */
+ objects that cannot be duplicated across partitions. */
if (DECL_IN_CONSTANT_POOL (decl))
return SYMBOL_DUPLICATE;
if (DECL_HARD_REGISTER (decl))
if (target->alias && target->weakref)
return false;
- /* We can not recurse to target::nonzero. It is possible that the
+ /* We cannot recurse to target::nonzero. It is possible that the
target is used only via the alias.
We may walk references and look for strong use, but we do not know
if this strong use will survive to final binary, so be
bool really_binds_local1 = binds_local1;
bool really_binds_local2 = binds_local2;
- /* Addresses of vtables and virtual functions can not be used by user
+ /* Addresses of vtables and virtual functions cannot be used by user
code and are used only within speculation. In this case we may make
symbol equivalent to its alias even if interposition may break this
rule. Doing so will allow us to turn speculative inlining into
return 1;
}
- /* If both symbols may resolve to NULL, we can not really prove them
+ /* If both symbols may resolve to NULL, we cannot really prove them
different. */
if (!memory_accessed && !nonzero_address () && !s2->nonzero_address ())
return -1;
return -1;
/* If we have a non-interposable definition of at least one of the symbols
- and the other symbol is different, we know other unit can not interpose
+ and the other symbol is different, we know other unit cannot interpose
it to the first symbol; all aliases of the definition need to be
present in the current unit. */
if (((really_binds_local1 || really_binds_local2)
if (TREE_ASM_WRITTEN (target->decl))
return false;
- /* If target is already placed in an anchor, we can not touch its
+ /* If target is already placed in an anchor, we cannot touch its
alignment. */
if (DECL_RTL_SET_P (target->decl)
&& MEM_P (DECL_RTL (target->decl))
/* Ignore all references from external vars initializers - they are not really
part of the compilation unit until they are used by folding. Some symbols,
- like references to external construction vtables can not be referred to at
+ like references to external construction vtables cannot be referred to at
all. We decide this at can_refer_decl_in_current_unit_p. */
if (!definition || DECL_EXTERNAL (decl))
{
It is not uncommon for limitations of calling conventions to prevent\n\
tail calls to functions outside the current unit of translation, or\n\
during PIC compilation. The hook is used to enforce these restrictions,\n\
-as the @code{sibcall} md pattern can not fail, or fall over to a\n\
+as the @code{sibcall} md pattern cannot fail, or fall over to a\n\
``normal'' call. The criteria for successful sibling call optimization\n\
may vary greatly between different architectures.",
bool, (tree decl, tree exp),
invoke_plugin_callbacks (PLUGIN_FINISH_UNIT, NULL);
/* This must be at the end. Some target ports emit end of file directives
- into the assembly file here, and hence we can not output anything to the
+ into the assembly file here, and hence we cannot output anything to the
assembly file after this point. */
targetm.asm_out.file_end ();
flag_stack_clash_protection = 0;
}
- /* We can not support -fstack-check= and -fstack-clash-protection at
+ /* We cannot support -fstack-check= and -fstack-clash-protection at
the same time. */
if (flag_stack_check != NO_STACK_CHECK && flag_stack_clash_protection)
{
|| (mode == DFmode
&& (rfmt == &ieee_double_format || rfmt == &mips_double_format
|| rfmt == &motorola_double_format))
- /* For long double, we can not really check XFmode
+ /* For long double, we cannot really check XFmode
which is only defined on intel platforms.
Candidate pre-selection using builtin function
code guarantees that we are checking formats
a situation where we have a forced label in block B
However, the label at the start of block B might still be
used in other ways (think about the runtime checking for
- Fortran assigned gotos). So we can not just delete the
+ Fortran assigned gotos). So we cannot just delete the
label. Instead we move the label to the start of block A. */
if (FORCED_LABEL (label))
{
if (computed_goto_p (stmt))
{
/* Only optimize if the argument is a label, if the argument is
- not a label then we can not construct a proper CFG.
+ not a label then we cannot construct a proper CFG.
It may be the case that we only need to allow the LABEL_REF to
appear inside an ADDR_EXPR, but we also allow the LABEL_REF to
ac = gimple_assign_rhs1 (stmt);
bc = (gimple_num_ops (stmt) > 2) ? gimple_assign_rhs2 (stmt) : NULL;
}
- /* GIMPLE_CALL can not get here. */
+ /* GIMPLE_CALL cannot get here. */
else
{
ac = gimple_cond_lhs (stmt);
BUILT_IN_NORMAL
};
-/* Last marker used for LTO stremaing of built_in_class. We can not add it
+/* Last marker used for LTO streaming of built_in_class. We cannot add it
to the enum since we need the enum to fit in 2 bits.
#define BUILT_IN_LAST (BUILT_IN_NORMAL + 1)
{
if (eh_edge)
{
- error ("BB %i can not throw but has an EH edge", bb->index);
+ error ("BB %i cannot throw but has an EH edge", bb->index);
return true;
}
return false;
static bool
can_be_nonlocal (tree decl, copy_body_data *id)
{
- /* We can not duplicate function decls. */
+ /* We cannot duplicate function decls. */
if (TREE_CODE (decl) == FUNCTION_DECL)
return true;
&& bb->index != ENTRY_BLOCK
&& bb->index != EXIT_BLOCK)
maybe_move_debug_stmts_to_successors (id, (basic_block) bb->aux);
- /* Update call edge destinations. This can not be done before loop
+ /* Update call edge destinations. This cannot be done before loop
info is updated, because we may split basic blocks. */
if (id->transform_call_graph_edges == CB_CGE_DUPLICATE
&& bb->index != ENTRY_BLOCK
maybe_move_debug_stmts_to_successors (id,
BASIC_BLOCK_FOR_FN (cfun, last));
BASIC_BLOCK_FOR_FN (cfun, last)->aux = NULL;
- /* Update call edge destinations. This can not be done before loop
+ /* Update call edge destinations. This cannot be done before loop
info is updated, because we may split basic blocks. */
if (id->transform_call_graph_edges == CB_CGE_DUPLICATE)
redirect_all_calls (id, BASIC_BLOCK_FOR_FN (cfun, last));
static const char *inline_forbidden_reason;
/* A callback for walk_gimple_seq to handle statements. Returns non-null
- iff a function can not be inlined. Also sets the reason why. */
+ iff a function cannot be inlined. Also sets the reason why. */
static tree
inline_forbidden_p_stmt (gimple_stmt_iterator *gsi, bool *handled_ops_p,
/* Be conservative. If data references are not well analyzed,
or the two data references have the same base address and
offset, add dependence and consider it alias to each other.
- In other words, the dependence can not be resolved by
+ In other words, the dependence cannot be resolved by
runtime alias check. */
if (!DR_BASE_ADDRESS (dr1) || !DR_BASE_ADDRESS (dr2)
|| !DR_OFFSET (dr1) || !DR_OFFSET (dr2)
/* Add edge to partition graph if there exists dependence. There
are two types of edges. One type edge is caused by compilation
- time known dependence, this type can not be resolved by runtime
+ time known dependence, this type cannot be resolved by runtime
alias check. The other type can be resolved by runtime alias
check. */
if (dir == 1 || dir == 2
if (found != NULL)
{
/* If we found a return statement using a different variable
- than previous return statements, then we can not perform
+ than previous return statements, then we cannot perform
NRV optimizations. */
if (found != rhs)
return 0;
if (node->thunk.thunk_p)
{
- /* We can not expand variadic thunks to Gimple. */
+ /* We cannot expand variadic thunks to Gimple. */
if (stdarg_p (TREE_TYPE (node->decl)))
continue;
thunk = true;
if (!node->local.can_change_signature)
{
if (dump_file)
- fprintf (dump_file, "Function can not change signature.\n");
+ fprintf (dump_file, "Function cannot change signature.\n");
return false;
}
return true;
}
- /* Non-aliased variables can not be pointed to. */
+ /* Non-aliased variables cannot be pointed to. */
if (!may_be_aliased (decl))
return false;
if (!finite_loop_p (loop))
{
if (dump_file)
- fprintf (dump_file, "can not prove finiteness of loop %i\n", loop->num);
+ fprintf (dump_file, "cannot prove finiteness of loop %i\n", loop->num);
mark_control_dependent_edges_necessary (loop->latch, false);
}
}
t = dom_valueize (t);
/* If T is an SSA_NAME and its associated edge is a backedge,
- then quit as we can not utilize this equivalence. */
+ then quit as we cannot utilize this equivalence. */
if (TREE_CODE (t) == SSA_NAME
&& (gimple_phi_arg_edge (phi, i)->flags & EDGE_DFS_BACK))
break;
continue;
/* We may have an equivalence associated with this edge. While
- we can not propagate it into non-dominated blocks, we can
+ we cannot propagate it into non-dominated blocks, we can
propagate them into PHIs in non-dominated blocks. */
/* Push the unwind marker so we can reset the const and copies
else
def = gimple_get_lhs (stmt);
- /* Certain expressions on the RHS can be optimized away, but can not
+ /* Certain expressions on the RHS can be optimized away, but cannot
themselves be entered into the hash tables. */
if (! def
|| TREE_CODE (def) != SSA_NAME
return false;
/* If the definition is a conversion of a pointer to a function type,
- then we can not apply optimizations as some targets require
+ then we cannot apply optimizations as some targets require
function pointers to be canonicalized and in this case this
optimization could eliminate a necessary canonicalization. */
if (CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def_stmt)))
/* Flag is set in FLAG_BBS. Determine probability that flag will be true
at loop exit.
- This code may look fancy, but it can not update profile very realistically
+ This code may look fancy, but it cannot update profile very realistically
because we do not know the probability that flag will be true at given
loop exit.
if (edge_to_cancel == exit)
edge_to_cancel = EDGE_SUCC (exit->src, 1);
}
- /* We do not know the number of iterations and thus we can not eliminate
+ /* We do not know the number of iterations and thus we cannot eliminate
the EXIT edge. */
else
exit = NULL;
{
n_unroll = maxiter;
n_unroll_found = true;
- /* Loop terminates before the IV variable test, so we can not
+ /* Loop terminates before the IV variable test, so we cannot
remove it in the last iteration. */
edge_to_cancel = NULL;
}
unloop_loops (loop_closed_ssa_invalidated, &irred_invalidated);
- /* We can not use TODO_update_ssa_no_phi because VOPS gets confused. */
+ /* We cannot use TODO_update_ssa_no_phi because VOPS gets confused. */
if (loop_closed_ssa_invalidated
&& !bitmap_empty_p (loop_closed_ssa_invalidated))
rewrite_into_loop_closed_ssa (loop_closed_ssa_invalidated,
return false;
/* If STMT could throw, then do not consider STMT as defining a GIV.
- While this will suppress optimizations, we can not safely delete this
+ While this will suppress optimizations, we cannot safely delete this
GIV and associated statements, even if it appears it is not used. */
if (stmt_could_throw_p (cfun, stmt))
return false;
/* By stmt_dominates_stmt_p we already know that STMT appears
before NITER_BOUND->STMT. Still need to test that the loop
- can not be terinated by a side effect in between. */
+ cannot be terminated by a side effect in between. */
for (bsi = gsi_for_stmt (stmt); gsi_stmt (bsi) != niter_bound->stmt;
gsi_next (&bsi))
if (gimple_has_side_effects (gsi_stmt (bsi)))
{
bool cfg_altered = false;
- /* Bitmap of blocks which need EH information updated. We can not
+ /* Bitmap of blocks which need EH information updated. We cannot
update it on-the-fly as doing so invalidates the dominator tree. */
auto_bitmap need_eh_cleanup;
form arg0 = -arg1 or arg1 = -arg0. */
assign = last_and_only_stmt (middle_bb);
- /* If we did not find the proper negation assignment, then we can not
+ /* If we did not find the proper negation assignment, then we cannot
optimize. */
if (assign == NULL)
return false;
/* If we got here, then we have found the only executable statement
in OTHER_BLOCK. If it is anything other than arg = -arg1 or
- arg1 = -arg0, then we can not optimize. */
+ arg1 = -arg0, then we cannot optimize. */
if (gimple_code (assign) != GIMPLE_ASSIGN)
return false;
/* Note that we have simulated this block. */
block->flags |= BB_VISITED;
- /* We can not predict when abnormal and EH edges will be executed, so
+ /* We cannot predict when abnormal and EH edges will be executed, so
once a block is considered executable, we consider any
outgoing abnormal edges as executable.
tree dst = gimple_phi_result (phi);
/* If the desired argument is not the same as this PHI's result
- and it is set by a PHI in E->dest, then we can not thread
+ and it is set by a PHI in E->dest, then we cannot thread
through E->dest. */
if (src != dst
&& TREE_CODE (src) == SSA_NAME
continue;
/* If the statement has volatile operands, then we assume we
- can not thread through this block. This is overly
+ cannot thread through this block. This is overly
conservative in some ways. */
if (gimple_code (stmt) == GIMPLE_ASM
&& gimple_asm_volatile_p (as_a <gasm *> (stmt)))
return NULL;
- /* If the statement is a unique builtin, we can not thread
+ /* If the statement is a unique builtin, we cannot thread
through here. */
if (gimple_code (stmt) == GIMPLE_CALL
&& gimple_call_internal_p (stmt)
tree cond;
/* The key property of these blocks is that they need not be duplicated
- when threading. Thus they can not have visible side effects such
+ when threading. Thus they cannot have visible side effects such
as PHI nodes. */
if (!gsi_end_p (gsi_start_phis (bb)))
return false;
Consider if we have two jump threading paths A and B. If the
target edge of A is the starting edge of B and we thread path A
first, then we create an additional incoming edge into B->dest that
- we can not discover as a jump threading path on this iteration.
+ we cannot discover as a jump threading path on this iteration.
If we instead thread B first, then the edge into B->dest will have
already been redirected before we process path A and path A will
/* Returns true if the domain of single predicate expression
EXPR1 is a subset of that of EXPR2. Returns false if it
- can not be proved. */
+ cannot be proved. */
static bool
is_pred_expr_subset_of (pred_info expr1, pred_info expr2)
}
/* Returns true if the domain of PRED1 is a subset
- of that of PRED2. Returns false if it can not be proved so. */
+ of that of PRED2. Returns false if it cannot be proved so. */
static bool
is_pred_chain_subset_of (pred_chain pred1, pred_chain pred2)
USE_STMT is guarded with a predicate set not overlapping with
predicate sets of all runtime paths that do not have a definition.
- Returns false if it is not or it can not be determined. USE_BB is
+ Returns false if it is not or it cannot be determined. USE_BB is
the bb of the use (for phi operand use, the bb is not the bb of
the phi stmt, but the src bb of the operand edge).
warn_uninitialized_vars (/*warn_possibly_uninitialized=*/!optimize);
- /* Post-dominator information can not be reliably updated. Free it
+ /* Post-dominator information cannot be reliably updated. Free it
after the use. */
free_dominance_info (CDI_POST_DOMINATORS);
keep a status bit in the SSA_NAME node itself to indicate it has
been put on the free list.
- Note that once on the freelist you can not reference the SSA_NAME's
+ Note that once on the freelist you cannot reference the SSA_NAME's
defining statement. */
if (! SSA_NAME_IN_FREE_LIST (var))
{
/* TYPE_CANONICAL is re-computed during type merging, so no need
to stream it here. */
/* Do not stream TYPE_STUB_DECL; it is not needed by LTO but currently
- it can not be freed by free_lang_data without triggering ICEs in
+ it cannot be freed by free_lang_data without triggering ICEs in
langhooks. */
}
use for middle-end.
It would make more sense if frontends set TREE_ADDRESSABLE to 0 only
- for public objects that indeed can not be adressed, but it is not
+ for public objects that indeed cannot be addressed, but it is not
the case. Set the flag to true so we do not get merge failures for
i.e. virtual tables between units that take address of it and
units that don't. */
debug_tree (ct);
error_found = true;
}
- /* Method and function types can not be used to address memory and thus
+ /* Method and function types cannot be used to address memory and thus
TYPE_CANONICAL really matters only for determining useless conversions.
FIXME: C++ FE produce declarations of builtin functions that are not
else if (TREE_CODE (t) == FUNCTION_TYPE)
;
else if (t != ct
- /* FIXME: gimple_canonical_types_compatible_p can not compare types
+ /* FIXME: gimple_canonical_types_compatible_p cannot compare types
with variably sized arrays because their sizes possibly
gimplified to different variables. */
&& !variably_modified_type_p (ct, NULL)
if (COMPLETE_TYPE_P (t))
return true;
- /* Incomplete types can not be accessed in general except for arrays
+ /* Incomplete types cannot be accessed in general except for arrays
where we can fetch an element even though we have no array bounds. */
if (TREE_CODE (t) == ARRAY_TYPE && COMPLETE_TYPE_P (TREE_TYPE (t)))
return true;
}
/* Given (CODE OP0 OP1) within STMT, try to simplify it based on value range
- information. Return NULL if the conditional can not be evaluated.
+ information. Return NULL if the conditional cannot be evaluated.
The ranges of all the names equivalent with the operands in COND
will be used when trying to compute the value. If the result is
based on undefined signed overflow, issue a warning if
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * backtrace.h: Mechanically replace "can not" with "cannot".
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
pointer on success, NULL on error. If an error occurs, this will
call the ERROR_CALLBACK routine.
- Calling this function allocates resources that can not be freed.
+ Calling this function allocates resources that cannot be freed.
There is no backtrace_free_state function. The state is used to
cache information that is expensive to recompute. Programs are
expected to call this function at most once and to save the return
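
For reference, the backtrace_create_state contract described above is normally
consumed like the following sketch (illustrative only, not part of this patch;
the error_cb stub and the use of argv[0] as the filename are placeholder
choices):

  #include <backtrace.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Error callback required by backtrace_create_state.  */
  static void
  error_cb (void *data, const char *msg, int errnum)
  {
    (void) data;
    fprintf (stderr, "libbacktrace: %s (%d)\n", msg, errnum);
  }

  int
  main (int argc, char **argv)
  {
    (void) argc;
    /* The returned state is cached for the life of the process and is
       never freed, so create it once and reuse it.  */
    struct backtrace_state *state
      = backtrace_create_state (argv[0], /* threaded */ 0, error_cb, NULL);
    if (state == NULL)
      return EXIT_FAILURE;

    /* Print the current call stack, skipping no frames.  */
    backtrace_print (state, 0, stdout);
    return EXIT_SUCCESS;
  }
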
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * config/c6x/libunwind.S: Mechanically replace "can not" with
+ "cannot".
+ * config/tilepro/atomic.h: Likewise.
+ * config/vxlib-tls.c: Likewise.
+ * generic-morestack-thread.c: Likewise.
+ * generic-morestack.c: Likewise.
+ * mkmap-symver.awk: Likewise.
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
;; scratch registers and stack pointer before the base registers
;; disappear. We also need to make sure no interrupts occur,
;; so put the whole thing in the delay slots of a dummy branch
- ;; We can not move the ret earlier as that would cause it to occur
+ ;; We cannot move the ret earlier as that would cause it to occur
;; before the last load completes
b .s1 (1f)
ldw .d1t1 *+A4[4], A4
advantage of the kernel's existing atomic-integer support (managed
by a distributed array of locks). The kernel provides proper
ordering among simultaneous atomic operations on different cores,
- and guarantees a process can not be context-switched part way
+ and guarantees a process cannot be context-switched part way
through an atomic operation. By virtue of sharing the kernel
atomic implementation, the userspace atomic operations
are compatible with the atomic methods provided by the kernel's
The task delete hook is only installed when at least one thread
has TLS data. This is a necessary precaution, to allow this module
- to be unloaded - a module with a hook can not be removed.
+ to be unloaded - a module with a hook cannot be removed.
Since this interface is used to allocate only a small number of
keys, the table size is small and static, which simplifies the
#include "tm.h"
#include "libgcc_tm.h"
-/* If inhibit_libc is defined, we can not compile this file. The
+/* If inhibit_libc is defined, we cannot compile this file. The
effect is that people will not be able to use -fsplit-stack. That
is much better than failing the build particularly since people
will want to define inhibit_libc while building a compiler which
#include "tm.h"
#include "libgcc_tm.h"
-/* If inhibit_libc is defined, we can not compile this file. The
+/* If inhibit_libc is defined, we cannot compile this file. The
effect is that people will not be able to use -fsplit-stack. That
is much better than failing the build particularly since people
will want to define inhibit_libc while building a compiler which
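
The inhibit_libc guard referred to in the two hunks above follows the usual
libgcc convention sketched below; this is a simplified illustration (the
function name __example_split_stack_support is made up), not the actual
generic-morestack code:

  #ifndef inhibit_libc

  /* libc headers are only safe to include when inhibit_libc is undefined.  */
  #include <errno.h>

  void
  __example_split_stack_support (void)
  {
    /* The real -fsplit-stack runtime support would live here.  */
    errno = 0;
  }

  #else /* inhibit_libc */

  /* Freestanding build: compile nothing, so the libgcc build still succeeds
     and -fsplit-stack is merely unavailable.  */

  #endif /* inhibit_libc */
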
}
# We begin with nm input. Collect the set of symbols that are present
-# so that we can not emit them into the final version script -- Solaris
+# so that we can avoid emitting them into the final version script -- Solaris
# complains at us if we do.
state == "nm" && /^%%/ {
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * caf/single.c: Mechanically replace "can not" with "cannot".
+ * io/unit.c: Likewise.
+
2019-01-07 Thomas Koenig <tkoenig@gcc.gnu.org>
Harald Anlauf <anlauf@gmx.de>
Tobias Burnus <burnus@gcc.gnu.org>
}
-/* Single image library. There can not be any failed images with only one
+/* Single image library. There cannot be any failed images with only one
image. */
void
if (ref->next && ref->next->type == CAF_REF_COMPONENT)
/* The currently ref'ed component was allocatable (caf_token_offset
> 0) and the next ref is a component, too, then the new sr has to
- be dereffed. (static arrays can not be allocatable or they
+ be dereffed. (static arrays cannot be allocatable or they
become an array with descriptor.) */
sr = *(void **)(sr + ref->u.c.offset);
else
dst_kind, src_kind, dst_dim, src_dim + 1, 1,
stat, src_type);
return;
- /* The OPEN_* are mapped to a RANGE and therefore can not occur. */
+ /* The OPEN_* are mapped to a RANGE and therefore cannot occur. */
case CAF_ARR_REF_OPEN_END:
case CAF_ARR_REF_OPEN_START:
default:
const char extentoutofrange[] = "libcaf_single::caf_get_by_ref(): "
"extent out of range.\n";
const char cannotallocdst[] = "libcaf_single::caf_get_by_ref(): "
- "can not allocate memory.\n";
+ "cannot allocate memory.\n";
const char nonallocextentmismatch[] = "libcaf_single::caf_get_by_ref(): "
"extent of non-allocatable arrays mismatch (%lu != %lu).\n";
const char doublearrayref[] = "libcaf_single::caf_get_by_ref(): "
break;
case CAF_ARR_REF_OPEN_END:
/* This and OPEN_START are mapped to a RANGE and therefore
- can not occur here. */
+ cannot occur here. */
case CAF_ARR_REF_OPEN_START:
default:
caf_internal_error (unknownarrreftype, stat, NULL, 0);
dst_kind, src_kind, dst_dim + 1, src_dim, 1,
size, stat, dst_type);
return;
- /* The OPEN_* are mapped to a RANGE and therefore can not occur. */
+ /* The OPEN_* are mapped to a RANGE and therefore cannot occur. */
case CAF_ARR_REF_OPEN_END:
case CAF_ARR_REF_OPEN_START:
default:
const char realloconinnerref[] = "libcaf_single::caf_send_by_ref(): "
"reallocation of array followed by component ref not allowed.\n";
const char cannotallocdst[] = "libcaf_single::caf_send_by_ref(): "
- "can not allocate memory.\n";
+ "cannot allocate memory.\n";
const char nonallocextentmismatch[] = "libcaf_single::caf_send_by_ref(): "
"extent of non-allocatable array mismatch.\n";
const char innercompref[] = "libcaf_single::caf_send_by_ref(): "
break;
case CAF_ARR_REF_OPEN_END:
/* This and OPEN_START are mapped to a RANGE and therefore
- can not occur here. */
+ cannot occur here. */
case CAF_ARR_REF_OPEN_START:
default:
caf_internal_error (unknownarrreftype, stat, NULL, 0);
/* Check rank and stride. */
if (dtp->internal_unit_desc)
return false;
- /* Format strings can not have 'BZ' or '/'. */
+ /* Format strings cannot have 'BZ' or '/'. */
if (dtp->common.flags & IOPARM_DT_HAS_FORMAT)
{
char *p = dtp->format;
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * class.c: Mechanically replace "can not" with "cannot".
+ * objc/runtime.h: Likewise.
+ * sendmsg.c: Likewise.
+
2019-01-01 Jakub Jelinek <jakub@redhat.com>
Update copyright years.
/* Classes that are in construction are not resolved, and still have
the class name (instead of a class pointer) in the
class_->super_class field. In that case we need to lookup the
- superclass name to return the superclass. We can not resolve the
+ superclass name to return the superclass. We cannot resolve the
class until it is registered. */
if (CLS_IS_IN_CONSTRUCTION (class_))
{
/* Add an instance variable with name 'ivar_name' to class 'class_',
where 'class_' is a class in construction that has been created
using objc_allocateClassPair() and has not been registered with the
- runtime using objc_registerClassPair() yet. You can not add
+ runtime using objc_registerClassPair() yet. You cannot add
instance variables to classes already registered with the runtime.
'size' is the size of the instance variable, 'log_2_of_alignment'
the alignment as a power of 2 (so 0 means alignment to a 1 byte
objc_EXPORT const char * property_getAttributes (Property property);
/* Return the property with name 'propertyName' of the class 'class_'.
- This function returns NULL if the required property can not be
+ This function returns NULL if the required property cannot be
found. Return NULL if 'class_' or 'propertyName' is NULL.
Note that the traditional ABI does not store the list of properties
class_addMethod (object_getClass (class), method)) that are
required, and then you need to call objc_registerClassPair() to
activate the class. If you need to create a hierarchy of classes,
- you need to create and register them one at a time. You can not
+ you need to create and register them one at a time. You cannot
create a new class using another class in construction as
superclass. Return Nil if 'class-name' is NULL or if a class with
that name already exists or 'superclass' is a class still in
properties. At the moment, optional properties and class
properties are not part of the Objective-C language, so both
'requiredProperty' and 'instanceProperty' should be set to YES.
- This function returns NULL if the required property can not be
+ This function returns NULL if the required property cannot be
found.
Note that the traditional ABI does not store the list of properties
struct sarray *dtable;
struct sarray *super_dtable;
- /* This table could be initialized in init.c. We can not use the
+ /* This table could be initialized in init.c. We cannot use the
class name since the class maintains the instance methods and the
meta class maintains the class methods yet both share the
same name. Classes should be unique in any program. */
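
Purely as an illustration of the class-construction API documented in the
objc/runtime.h comments above (not part of this patch): the class name
MyCounter, the ivar count, the log2 alignment of 2, and the assumption that
the GNU runtime's Object root class is available are all placeholders.

  #include <objc/runtime.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Allocate a class pair in construction; "Object" is assumed to be
       the runtime's root class.  */
    Class cls = objc_allocateClassPair (objc_getClass ("Object"),
                                        "MyCounter", 0);
    if (cls == Nil)
      return 1;

    /* Instance variables can only be added before registration; the
       alignment is passed as log2 and "i" is the type encoding for int.  */
    class_addIvar (cls, "count", sizeof (int), 2, "i");

    /* After this call the class is live; no further ivars may be added.  */
    objc_registerClassPair (cls);

    printf ("registered class %s\n", class_getName (cls));
    return 0;
  }
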
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * include/coi/common/COIResult_common.h: Mechanically replace
+ "can not" with "cannot".
+ * include/coi/source/COIBuffer_source.h: Likewise.
+
2018-12-14 Thomas Schwinge <thomas@codesourcery.com>
* runtime/offload.h (omp_target_is_present, omp_target_memcpy)
///< the user that requested an engine.
///< Only reported if daemon is set up for
///< authorization. Is also reported in
- ///< Windows if host can not find user.
+ ///< Windows if host cannot find user.
COI_COMM_NOT_INITIALIZED, ///< The function was called before the
///< comm was initialized.
COI_INCORRECT_FORMAT, ///< Format of data is incorrect
/// are provided as hints to the runtime system so it can make
/// certain performance optimizations. Note that the flag
/// COI_SAME_ADDRESS_SINKS_AND_SOURCE is still valid but may fail
-/// if the same address as in_Memory can not be allocated on the sink.
+/// if the same address as in_Memory cannot be allocated on the sink.
///
/// @param in_Memory
/// [in] A pointer to an already allocated memory region
+2019-01-09 Sandra Loosemore <sandra@codesourcery.com>
+
+ PR other/16615
+
+ * include/ext/bitmap_allocator.h: Mechanically replace "can not"
+ with "cannot".
+
2019-01-09 Jonathan Wakely <jwakely@redhat.com>
* testsuite/libstdc++-prettyprinters/cxx17.cc: Fix expected output
/** @brief Responsible for exponentially growing the internal
* memory pool.
*
- * @throw std::bad_alloc. If memory can not be allocated.
+ * @throw std::bad_alloc. If memory cannot be allocated.
*
* Complexity: O(1), but internally depends upon the
* complexity of the function free_list::_M_get. The part where
/** @brief Allocates memory for a single object of size
* sizeof(_Tp).
*
- * @throw std::bad_alloc. If memory can not be allocated.
+ * @throw std::bad_alloc. If memory cannot be allocated.
*
* Complexity: Worst case complexity is O(N), but that
* is hardly ever hit. If and when this particular case is