Carl Worth [Wed, 2 Jun 2010 21:43:03 +0000 (14:43 -0700)]
Remove dead code: _glcpp_parser_expand_token_list_onto
This function simply isn't being called anymore.
Carl Worth [Wed, 2 Jun 2010 19:54:15 +0000 (12:54 -0700)]
Factor out common sub-expression from multi-line-comment regular expression.
In two places we look for an (optional) sequence of characters other
than "*" followed by a sequence of one or more "*". Using a name for
this (NON_STARS_THEN_STARS) seems to make it a bit easier to
understand.
Carl Worth [Wed, 2 Jun 2010 17:59:08 +0000 (10:59 -0700)]
Make the multi-line comment regular expression a bit easier to read.
Use quoted strings for literal portions rather than a sequence of
single-character character classes.
Carl Worth [Wed, 2 Jun 2010 17:48:47 +0000 (10:48 -0700)]
Fix multi-line comment regular expression to handle (non) nested comments.
Ken reminded me of a couple cases that I should be testing. These are
the non-nestedness of things that look like nested comments as well as
potentially tricky things like "/*/" and "/*/*/".
The (non) nested comment case was not working in the case of the
comment terminator with multiple '*' characters. We fix this by not
considering a '*' as the "non-slash" to terminate a sequence of '*'
characters within the comment. We also fix the final match of the
terminator to use '+' rather than '*' to require the presence of a
final '*' character in the comment terminator.
Carl Worth [Tue, 1 Jun 2010 19:18:43 +0000 (12:18 -0700)]
Implement comment handling in the lexer (with test).
We support both single-line (//) and multi-line (/* ... */) comments
and add a test for this, (trying to stress the rules just a bit by
embedding one comment delimiter into a comment delimited with the
other style, etc.).
To keep the test suite passing we now discard any output lines from
glcpp that consist only of spacing, (in addition to blank lines as
previously). We also discard any initial whitespace from gcc output.
In neither case should the absence or presence of this whitespace
affect correctness.
Carl Worth [Tue, 1 Jun 2010 18:20:18 +0000 (11:20 -0700)]
Fix #if-skipping to *really* skip the skipped group.
Previously we were avoiding printing within a skipped group, but we
were still evaluating directives such as #define and #undef and still
emitting diagnostics for things such as macro calls with the wrong
number of arguments.
Add a test for this and fix it with a high-priority rule in the lexer
that consumes the skipped content.
Carl Worth [Sat, 29 May 2010 13:03:32 +0000 (06:03 -0700)]
Merge branch 'take-2'
The take-2 branch started over with a new grammar based directly on
the grammar from the C99 specification. It doesn't try to capture
things like balanced sets of parentheses for macro arguments in the
grammar. Instead, it merely captures things as token lists and then
performs operations like parsing arguments and expanding macros on
those lists.
We merge it here since it's currently behaving better, (passing the
entire test suite). But the code base has proven quite fragile
really. Several of the recently added test cases required additional
special cases in the take-2 branch while working trivially on master.
So this merge point may be useful in the future, since we might have a
cleaner code base by coming back to the state before this merge and
fixing it, rather than accepting all the fragile
imperative/list-munging code from the take-2 branch.
Carl Worth [Sat, 29 May 2010 13:01:32 +0000 (06:01 -0700)]
Add three more test cases recently added to the take-2 branch.
The 071-punctuator test is failing only trivially (whitespace change only).
And the 072-token-pasting-same-line.c test passes just fine here, (more
evidence perhaps that the approach in take-2 is more trouble than it's
worth?).
The 099-c99-example test case is the inspiration for much of the rest
of the test suite. It amazingly passes on the take-2 branch, but
doesn't pass here yet.
Carl Worth [Sat, 29 May 2010 12:57:22 +0000 (05:57 -0700)]
Add killer test case from the C99 specification.
Happily, this passes now, (since many of the previously added test
cases were extracted from this one).
Carl Worth [Sat, 29 May 2010 12:54:19 +0000 (05:54 -0700)]
Add test and fix bugs with multiple token-pasting on the same line.
The list replacement when token pasting was broken, (failing to
properly update the list's tail pointer). Also, memory management when
pasting was broken, (modifying the original token's string which would
cause problems with multiple calls to a macro which pasted a literal
string). We didn't catch this with previous tests because they only
pasted argument values.
Carl Worth [Sat, 29 May 2010 12:07:24 +0000 (05:07 -0700)]
Fix pass-through of '=' and add a test for it.
Previously '=' was not included in our PUNCTUATION regular expression,
but it *was* excluded from our OTHER regular expression, so we were
getting the default (and harmful) lex action of just printing it.
The test we add here is named "punctuator" with the idea that we can
extend it as needed for other punctuator testing.
Carl Worth [Fri, 28 May 2010 22:15:59 +0000 (15:15 -0700)]
Add two more (failing) tests from the take-2 branch.
These tests were recently fixed on the take-2 branch, but will require
additional work before they will pass here.
Carl Worth [Fri, 28 May 2010 22:15:00 +0000 (15:15 -0700)]
Add two (passing) tests from the take-2 branch.
These two tests were tricky to make work on take-2, but happen to
already be working here.
Carl Worth [Fri, 28 May 2010 22:13:11 +0000 (15:13 -0700)]
Tweak test 25 slightly, (so the non-macro doesn't end the file).
This isn't a problem here, but on the take-2 branch, it was trickier
at one point to make a non-macro work when it was the last token of the
file.
So we use the simpler test case here and defer the other case until
later.
Carl Worth [Fri, 28 May 2010 22:12:36 +0000 (15:12 -0700)]
Remove some blank lines from the end of some test cases.
To match what we have done on the take-2 branch to these test cases.
Carl Worth [Fri, 28 May 2010 22:06:02 +0000 (15:06 -0700)]
Perform macro expansion by replacing tokens in original list.
We take the results of macro expansion and splice them into the
original token list over which we are iterating. This makes it easy
for function-like macro invocations to find their arguments since they
are simply subsequent tokens on the list.
This fixes the recently-introduced regressions (tests 55 and 56) and
also passes new tests 60 and 61 introduced to stress this feature,
(with macro-argument parentheses split between a macro value and the
textual input).
Carl Worth [Fri, 28 May 2010 15:17:46 +0000 (08:17 -0700)]
Simplify calling conventions of functions under expand_token_list_onto.
We previously had a confusing thing where _expand_token_onto would
return a non-zero value to indicate that the caller should then call
_expand_function_onto. It's much cleaner for _expand_token_onto to
just do what's needed and call the necessary function.
Carl Worth [Fri, 28 May 2010 15:04:13 +0000 (08:04 -0700)]
Stop interrupting the test suite at the first failure.
This behavior was useful when starting the implementation over
("take-2") where the whole test suite was failing. This made it easy
to focus on one test at a time and get each working.
More recently, we got the whole suite working, so we don't need this
feature anymore. And in the previous commit, we regressed a couple of
tests, so it's nice to be able to see all the failures with a single
run of the suite.
Carl Worth [Fri, 28 May 2010 15:00:43 +0000 (08:00 -0700)]
Revert "Add support for an object-to-function chain with the parens in the content."
This reverts commit
7db2402a8009772a3f10d19cfc7f30be9ee79295
It doesn't revert the new test case from that commit, just the
extremely ugly second-pass implementation.
Carl Worth [Thu, 27 May 2010 21:53:51 +0000 (14:53 -0700)]
Remove blank lines from output files before comparing.
Recently I'm seeing cases where "gcc -E" mysteriously omits blank
lines, (even though it prints the blank lines in other very similar
cases). Rather than trying to decipher and imitate this, just get rid
of the blank lines.
This approach with sed to kill the lines before the diff is better
than "diff -B" since when there is an actual difference, the presence
of blank lines won't make the diff harder to read.
Carl Worth [Thu, 27 May 2010 21:45:20 +0000 (14:45 -0700)]
Add test for token-pasting of integers.
This test was tricky to make pass in the take-2 branch. It ends up
passing already here with no additional effort, (since we are lexing
integers as string-valued tokens except when in the ST_IF state in the
lexer anyway).
Carl Worth [Thu, 27 May 2010 21:36:29 +0000 (14:36 -0700)]
Implement token pasting of integers.
To do this correctly, we change the lexer to lex integers as string values,
(new token type of INTEGER_STRING), and only convert to integer values when
evaluating an expression value.
Add a new test case for this, (which does pass now).
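The behavior being implemented mirrors the standard C preprocessor's ## operator on integer tokens; a minimal sketch in standard C, where eval_integer_string is a hypothetical stand-in for glcpp's deferred string-to-value conversion:

```c
#include <assert.h>
#include <stdlib.h>

/* The ## operator concatenates the spellings of its operands, so
 * PASTE(12, 34) produces the single token 1234.  Keeping integers as
 * string-valued tokens (INTEGER_STRING) means the pasted result stays a
 * string until an expression needs its value, at which point it is
 * converted as below. */
#define PASTE(a, b) a##b

static long long eval_integer_string(const char *s)
{
    return strtoll(s, NULL, 10);
}
```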
Carl Worth [Thu, 27 May 2010 21:01:18 +0000 (14:01 -0700)]
Add placeholder tokens to support pasting with empty arguments.
Along with a passing test to verify that this works.
Carl Worth [Thu, 27 May 2010 20:44:13 +0000 (13:44 -0700)]
Add test for macro invocations with empty arguments.
This case was recently solved on the take-2 branch.
Carl Worth [Thu, 27 May 2010 20:29:19 +0000 (13:29 -0700)]
Provide support for empty arguments in macro invocations.
For this we always add a new argument to the argument list as soon as
possible, without waiting until we see some argument token. This does
mean we need to take some extra care when comparing the number of
arguments with the number of expected arguments. In addition to
matching numbers, we also support one (empty) argument when zero
arguments are expected.
Add a test case here for this, which does pass.
Carl Worth [Thu, 27 May 2010 18:55:36 +0000 (11:55 -0700)]
Make two list-processing functions do nothing with an empty list.
This just makes these functions easier to understand all around. In
the case of _token_list_append_list this is an actual bug fix, (where
appending an empty list onto a non-empty list would previously scramble
the tail pointer of the original list).
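A sketch of the kind of tail-pointer bug described, using hypothetical head/tail list types in the spirit of glcpp's token lists (the real code differs in detail):

```c
#include <assert.h>
#include <stddef.h>

typedef struct node { struct node *next; } node_t;
typedef struct list { node_t *head; node_t *tail; } list_t;

/* The fix described above: appending an empty list must be a no-op.
 * Without the early return, list->tail would be overwritten with
 * other->tail (NULL), scrambling the destination's tail pointer. */
static void list_append_list(list_t *list, list_t *other)
{
    if (other == NULL || other->head == NULL)
        return;
    if (list->head == NULL)
        list->head = other->head;
    else
        list->tail->next = other->head;
    list->tail = other->tail;
}

/* Returns 1 if the tail pointer survives appending an empty list. */
static int tail_survives_empty_append(void)
{
    node_t n = { NULL };
    list_t l = { &n, &n };
    list_t empty = { NULL, NULL };

    list_append_list(&l, &empty);
    return l.tail == &n;
}
```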
Carl Worth [Thu, 27 May 2010 17:14:38 +0000 (10:14 -0700)]
Add test 56 for a comma within the expansion of an argument.
This case was tricky on the take-2 branch. It happens to be passing already
here.
Carl Worth [Thu, 27 May 2010 17:12:33 +0000 (10:12 -0700)]
Avoid treating an expanded comma as an argument separator.
That is, a function-like invocation foo(x) is valid as a
single-argument invocation even if 'x' is a macro that expands into a
value with a comma. Add a new COMMA_FINAL token type to handle this,
and add a test for this case, (which passes).
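The same rule holds in ISO C and can be seen with the host preprocessor (no glcpp internals involved; the macro names here are illustrative):

```c
#include <assert.h>

/* ID(X) is a single-argument invocation even though X expands to a
 * value containing a comma -- that comma appears only in the result of
 * expansion and is not an argument separator. */
#define X 1, 2
#define ID(a) a

static int arr[] = { ID(X) };            /* expands to { 1, 2 } */
static int arr_len = sizeof(arr) / sizeof(arr[0]);
```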
Carl Worth [Thu, 27 May 2010 00:01:57 +0000 (17:01 -0700)]
Add support (and test) for an object-to-function chain with the parens in the content.
That is, the following case:
#define foo(x) (x)
#define bar
bar(baz)
which now works with this (ugly) commit.
I definitely want to come up with something cleaner than this.
Carl Worth [Wed, 26 May 2010 23:18:05 +0000 (16:18 -0700)]
Add two tests developed on the take-2 branch.
The define-chain-obj-to-func-parens-in-text test passes here while the
if-with-macros test fails.
Carl Worth [Wed, 26 May 2010 22:57:10 +0000 (15:57 -0700)]
Treat newlines as space when invoking a function-like macro invocation.
This adds three new pieces of state to the parser, (is_control_line,
newline_as_space, and paren_count), and a large amount of messy
code. I'd definitely like to see a cleaner solution for this.
With this fix, the "define-func-extra-newlines" now passes so we put
it back to test #26 where it was originally (lately it has been known
as test #55).
Also, we tweak test 25 slightly. Previously this test ended the file
with a function-like macro name that was not actually a macro
invocation, (not followed by a left parenthesis). Without the tweak,
this fix was making that test
fail because the text_line production expects to see a terminating
NEWLINE, but that NEWLINE is now getting turned into a SPACE here.
This seems unlikely to be a problem in the wild, (function macros
being used in a non-macro sense seems rare enough---and even then they
are unlikely to appear at the end of a file). Still, we document
this shortcoming in the README.
Carl Worth [Wed, 26 May 2010 22:53:05 +0000 (15:53 -0700)]
All macro lookups should be of type macro_t, not string_list_t.
This is what I get for using a non-type-safe hash-table implementation.
Carl Worth [Wed, 26 May 2010 18:15:21 +0000 (11:15 -0700)]
Implement (and test) support for macro expansion within conditional expressions.
To do this we have split the existing "HASH_IF expression" into two
productions:
First is HASH_IF pp_tokens which simply constructs a list of tokens.
Then, with that resulting token list, we first evaluate all DEFINED
operator tokens, then expand all macros, and finally start lexing from
the resulting token list. This brings us to the second production,
IF_EXPANDED expression
This final production works just like our previous "HASH_IF
expression", evaluating a constant integer expression.
The new test (54) added for this case now passes.
Carl Worth [Wed, 26 May 2010 16:35:34 +0000 (09:35 -0700)]
Fix lexing of "defined" as an operator, not an identifier.
Simply need to move the rule for IDENTIFIER to be after "defined" and
everything is happy.
With this change, tests 50 through 53 all pass now.
Carl Worth [Wed, 26 May 2010 16:32:57 +0000 (09:32 -0700)]
Implement #if and friends.
With this change, tests 41 through 49 all pass. (The defined operator
appears to be somehow broken so that test 50 doesn't pass yet.)
Carl Worth [Wed, 26 May 2010 16:32:12 +0000 (09:32 -0700)]
stash
Carl Worth [Wed, 26 May 2010 16:04:50 +0000 (09:04 -0700)]
Implement token pasting.
Which makes test 40 now pass.
Carl Worth [Wed, 26 May 2010 15:25:44 +0000 (08:25 -0700)]
Rename identifier from 'i' to 'node'.
Now that we no longer have nested for loops with 'i' and 'j' we can
use the 'node' that we already have.
Carl Worth [Wed, 26 May 2010 15:16:56 +0000 (08:16 -0700)]
Remove some stale token types.
All the code referencing these was removed some time ago.
Carl Worth [Wed, 26 May 2010 15:15:49 +0000 (08:15 -0700)]
Prevent unexpanded macros from being expanded again in the future.
With this fix, tests 37 - 39 now pass.
Carl Worth [Wed, 26 May 2010 15:11:08 +0000 (08:11 -0700)]
README: Document some known limitations.
None of these are fundamental---just a few things that haven't been
implemented yet.
Carl Worth [Wed, 26 May 2010 15:10:38 +0000 (08:10 -0700)]
Fix a typo in a comment.
Always better to use proper grammar in our grammar.
Carl Worth [Wed, 26 May 2010 15:09:29 +0000 (08:09 -0700)]
Expand macro arguments before performing argument substitution.
As required by the C99 specification of the preprocessor.
With this fix, tests 33 through 36 now pass.
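The C99 rule being implemented here is visible in standard C via the classic two-level stringize idiom (standard preprocessor behavior, not glcpp-specific):

```c
#include <assert.h>
#include <string.h>

/* Arguments are macro-expanded before substitution -- except where they
 * are operands of # or ##.  So STR(N) stringizes the spelling "N",
 * while XSTR(N) first expands N to 42 and then stringizes that. */
#define N 42
#define STR(x)  #x
#define XSTR(x) STR(x)
```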
Carl Worth [Wed, 26 May 2010 15:05:19 +0000 (08:05 -0700)]
Change macro expansion to append onto token lists rather than printing directly.
This doesn't change any functionality here, but will allow us to make
future changes that were not possible with direct printing.
Specifically, we need to expand macros within macro arguments before
performing argument substitution. And *that* expansion cannot result
in immediate printing.
Carl Worth [Wed, 26 May 2010 15:01:42 +0000 (08:01 -0700)]
Check active expansions before expanding a function-like macro invocation.
With this fix, test 32 no longer recurses infinitely, but now passes.
Carl Worth [Wed, 26 May 2010 14:58:59 +0000 (07:58 -0700)]
Defer test 26 until much later (to test 55).
Supporting embedded newlines in a macro invocation is going to be
tricky with our current approach to lexing and parsing. Since this
isn't really an important feature for us, we can defer this until more
important things are resolved.
With this test out of the way, tests 27 through 31 are passing.
Carl Worth [Wed, 26 May 2010 03:35:01 +0000 (20:35 -0700)]
Avoid getting extra trailing whitespace from macros.
This trailing whitespace was coming from macro definitions and from
macro arguments. We fix this with a little extra state in the
token_list. It now remembers the last non-space token added, so that
these can be trimmed off just before printing the list.
With this fix test 23 now passes. Tests 24 and 25 are also passing,
but they probably would have passed before this fix---just that they
weren't
being run earlier.
Carl Worth [Wed, 26 May 2010 01:39:43 +0000 (18:39 -0700)]
Remove a bunch of old code and give the static treatment to what's left.
We're no longer using the expansion stack, so its functions can go
along with most of the body of glcpp_parser_lex that was using it.
Carl Worth [Wed, 26 May 2010 00:45:22 +0000 (17:45 -0700)]
Avoid swallowing initial left parenthesis from nested macro invocation.
We weren't including this left parenthesis in the argument's token
list so the nested function invocation was not being recognized.
With this fix, tests 21 and 22 now pass.
Carl Worth [Wed, 26 May 2010 00:41:07 +0000 (17:41 -0700)]
Ignore separating whitespace at the beginning of a macro argument.
This causes test 16 to pass. Tests 17-20 are also passing now, (though
they would probably have passed before this change and simply weren't
being run yet).
Carl Worth [Wed, 26 May 2010 00:32:21 +0000 (17:32 -0700)]
Implement substitution of function parameters in macro calls.
This makes tests 16 - 19 pass.
Carl Worth [Wed, 26 May 2010 00:08:07 +0000 (17:08 -0700)]
Collapse multiple spaces in input down to a single space.
This is what gcc does, and it's actually less work to do
this. Previously we were having to save the contents of space tokens
as a string, but we don't need to do that now.
We extend test #0 to exercise this feature here.
Carl Worth [Wed, 26 May 2010 00:06:17 +0000 (17:06 -0700)]
Add a test #0 to ensure that we don't do any inadvertent token pasting.
This simply ensures that spaces in input line are preserved.
Carl Worth [Tue, 25 May 2010 23:59:02 +0000 (16:59 -0700)]
Pass through literal space values from replacement lists.
This makes test 15 pass and also dramatically simplifies the lexer.
We were previously using a CONTROL state in the lexer to only emit
SPACE tokens when on text lines. But that's not actually what we
want. We need SPACE tokens in the replacement lists as well. Instead
of a lexer state for this, we now simply set a "space_tokens" flag
whenever we start constructing a pp_tokens list and clear the flag
whenever we see a '#' introducing a directive.
Much cleaner this way.
Carl Worth [Tue, 25 May 2010 23:28:26 +0000 (16:28 -0700)]
Implement simplified substitution for function-like macro invocation.
This supports function-like macro invocation but without any argument
substitution. This now makes test 11 through 14 pass.
Carl Worth [Tue, 25 May 2010 22:28:58 +0000 (15:28 -0700)]
Implement #undef.
Which is as simple as copying the former action back from the git
history.
Now all tests through test 11 pass.
Carl Worth [Tue, 25 May 2010 22:24:59 +0000 (15:24 -0700)]
Implement expansion of object-like macros.
For this we add an "active" string_list_t to the parser. This makes
the current expansion_list_t in the parser obsolete, but we don't
remove that yet.
With this change we can now start passing some actual tests, so we
turn on real testing in the test suite again. I expect to implement
things more or less in the same order as before, so the test suite now
halts on first error.
With this change the first 8 tests in the suite pass, (object-like
macros with chaining and recursion).
Carl Worth [Tue, 25 May 2010 22:04:32 +0000 (15:04 -0700)]
Make the lexer pass whitespace through (as OTHER tokens) for text lines.
With this change, we can recreate the original text-line input
exactly. Previously we were inserting a space between every pair of
tokens so our output had a lot more whitespace than our input.
With this change, we can drop the "-b" option to diff and match the
input exactly.
Carl Worth [Tue, 25 May 2010 21:52:43 +0000 (14:52 -0700)]
Store parsed tokens as token list and print all text lines.
Still not doing any macro expansion just yet. But it should be fairly
easy from here.
Carl Worth [Tue, 25 May 2010 21:42:00 +0000 (14:42 -0700)]
Delete some trailing whitespace.
This pernicious stuff managed to sneak in on us.
Carl Worth [Tue, 25 May 2010 21:40:47 +0000 (14:40 -0700)]
Add xtalloc_reference.
Yet another talloc wrapper that should come in handy.
Carl Worth [Tue, 25 May 2010 20:09:03 +0000 (13:09 -0700)]
Starting over with the C99 grammar for the preprocessor.
This is a fresh start with a much simpler approach for the flex/bison
portions of the preprocessor. This isn't functional yet, (produces no
output), but can at least read all of our test cases without any parse
errors.
The grammar here is based on the grammar provided for the preprocessor
in the C99 specification.
Carl Worth [Mon, 24 May 2010 18:33:28 +0000 (11:33 -0700)]
Add test for '/', '<<', and '>>' in #if expressions.
These operators have been supported already, but were not covered in
existing tests yet. So this test passes already.
Carl Worth [Mon, 24 May 2010 18:30:06 +0000 (11:30 -0700)]
Add test of bitwise operators and octal/hexadecimal literals.
This new test covers several features from the last few commits.
This test passes already.
Carl Worth [Mon, 24 May 2010 18:29:02 +0000 (11:29 -0700)]
Add support for octal and hexadecimal integer literals.
In addition to the decimal literals which we already support. Note
that we use strtoll here to get the large-width integers demanded by
the specification.
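A minimal sketch of the conversion described, (helper name hypothetical): passing base 0 makes strtoll infer the base from the literal's prefix, covering all three C integer-literal forms in one call.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Convert a decimal, octal (leading 0), or hexadecimal (leading 0x)
 * literal.  Base 0 tells strtoll to infer the base from the prefix,
 * and long long provides the large-width integers the specification
 * demands. */
static intmax_t literal_value(const char *s)
{
    return (intmax_t) strtoll(s, NULL, 0);
}
```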
Carl Worth [Mon, 24 May 2010 18:27:23 +0000 (11:27 -0700)]
Switch to intmax_t (rather than int) for #if expressions
This is what the C99 specification demands. And the GLSL specification
says that we should follow the "standard C++" rules for #if condition
expressions rather than the GLSL rules, (which only support a 32-bit
integer).
Carl Worth [Mon, 24 May 2010 18:26:42 +0000 (11:26 -0700)]
Add the '~' operator to the lexer.
This was simply missing before, (and unnoticed since we had no test of
the '~' operator).
Carl Worth [Mon, 24 May 2010 17:37:38 +0000 (10:37 -0700)]
Implement all operators specified for GLSL #if expressions (with tests).
The operator coverage here is quite complete. The one big thing
missing is that we are not yet doing macro expansion in #if
lines. This makes the whole support fairly useless, so we plan to fix
that shortcoming right away.
Carl Worth [Fri, 21 May 2010 05:27:07 +0000 (22:27 -0700)]
Implement #if, #else, #elif, and #endif with tests.
So far the only expression implemented is a single integer literal,
but obviously that's easy to extend. Various things including nesting
are tested here.
Carl Worth [Thu, 20 May 2010 22:18:54 +0000 (15:18 -0700)]
Implement (and add test) for token pasting.
This is *very* easy to implement now that macro arguments are pre-expanded.
Carl Worth [Thu, 20 May 2010 22:15:26 +0000 (15:15 -0700)]
Pre-expand macro arguments at time of invocation.
Previously, we were using the same lexing stack as we use for macro
expansion to also expand macro arguments. Instead, we now do this
earlier by simply recursing over the macro-invocations replacement
list and constructing a new expanded list, (and pushing only *that*
onto the stack).
This is simpler, and also allows us to more easily implement token
pasting in the future.
Carl Worth [Thu, 20 May 2010 22:02:03 +0000 (15:02 -0700)]
Add xtalloc_asprintf
I expect this to be useful in the upcoming implementation of token pasting.
Carl Worth [Thu, 20 May 2010 21:38:06 +0000 (14:38 -0700)]
Finish cleaning up whitespace differences.
The last remaining thing here was that when a line ended with a macro,
and the parser looked ahead to the newline token, the lexer was
printing that newline before the parser printed the expansion of the
macro.
The fix is simple, just make the lexer tell the parser that a newline
is needed, and the parser can wait until reducing a production to
print that newline.
With this, we now pass the entire test suite with simply "diff -u", so
we no longer have any diff options hiding whitespace bugs from
us. Hurrah!
Carl Worth [Thu, 20 May 2010 21:29:43 +0000 (14:29 -0700)]
Avoid printing a space at the beginning of lines in the output.
This fixes more differences compared to "gcc -E" so removes several
cases of erroneously failing test cases. The implementation isn't very
elegant, but it is functional.
Carl Worth [Thu, 20 May 2010 21:19:57 +0000 (14:19 -0700)]
Fix bug of consuming excess whitespace.
We fix this by moving printing up to the top-level "input" action and
tracking whether a space is needed between one token and the next.
This fixes all actual bugs in test-suite output, but does leave some
tests failing due to differences in the amount of whitespace produced,
(which aren't actual bugs per se).
Carl Worth [Thu, 20 May 2010 21:08:19 +0000 (14:08 -0700)]
Remove unused function _print_string_list
The only good dead code is non-existing dead code.
Carl Worth [Thu, 20 May 2010 21:05:37 +0000 (14:05 -0700)]
Remove "unnecessary" whitespace from some tests.
This whitespace was not part of anything being tested, and it
introduces differences (that we don't actually care about) between the
output of "gcc -E" and glcpp.
Just eliminate this extra whitespace to reduce spurious test-case
failures.
Carl Worth [Thu, 20 May 2010 21:00:28 +0000 (14:00 -0700)]
Stop ignoring whitespace while testing.
Sometime back the output of glcpp started differing from the output of
"gcc -E" in the amount of whitespace emitted. At the time, I
switched the test suite to use "diff -w" to ignore this. This was a
mistake since it ignores whitespace entirely. (I meant to use "diff
-b" which ignores only changes in the amount of whitespace.)
So bugs have since been introduced that the test suite doesn't
notice. For example, glcpp is producing "twotokens" where it should be
producing "two tokens".
Let's stop ignoring whitespace in the test suite, which currently
introduces lots of failures---some real and some spurious.
Carl Worth [Thu, 20 May 2010 19:06:33 +0000 (12:06 -0700)]
Add test (and fix) for a function argument of a macro that expands with a comma.
The fix here is quite simple (and actually only deletes code). When
expanding a macro, we don't return a ',' as a unique token type, but
simply let it fall through to the generic case.
Carl Worth [Thu, 20 May 2010 15:42:02 +0000 (08:42 -0700)]
Add support for commas within parenthesized groups in function arguments.
The specification says that commas within a parenthesized group,
(that's not a function-like macro invocation), are passed through
literally and not considered argument separators in any outer macro
invocation.
Add support and a test for this case. This support makes a third
occurrence of the same "FUNC_MACRO (" shift/reduce conflict appear,
which is expected.
This change does introduce a fairly large copy/paste block in the
grammar which is unfortunate. Perhaps if I were more clever I'd find a
way to share the common pieces between argument and argument_or_comma.
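The same specification rule holds in standard C, so the host preprocessor can demonstrate it, (no glcpp internals involved):

```c
#include <assert.h>
#include <string.h>

/* The comma inside the inner parentheses is part of the single
 * argument (1,2), not an argument separator, so STR receives exactly
 * one argument and stringizes the whole parenthesized group. */
#define STR(x) #x
```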
Carl Worth [Thu, 20 May 2010 15:01:44 +0000 (08:01 -0700)]
Avoid re-expanding a macro name that has once been rejected from expansion.
The specification of the preprocessor in C99 says that when we see a
macro name that we are already expanding that we refuse to expand it
now, (which we've done for a while), but also that we refuse to ever
expand it later if seen in other contexts at which it would be
legitimate to expand.
We add a test case for that here, and fix it to work. The fix takes
advantage of a new token_t value for tokens and argument words along
with the recently added IDENTIFIER_FINALIZED token type which
instructs the parser to not even look for another expansion.
Carl Worth [Wed, 19 May 2010 20:54:37 +0000 (13:54 -0700)]
Use new token_list_t rather than string_list_t for macro values.
There's not yet any change in functionality here, (at least according
to the test suite). But we now have the option of specifying a type
for each string in the token list. This will allow us to finalize an
unexpanded macro name so that it won't be subjected to excess
expansion later.
Carl Worth [Wed, 19 May 2010 20:28:24 +0000 (13:28 -0700)]
Perform "re-lexing" on string list values rather than on text.
Previously, we would pass original strings back to the original lexer
whenever we needed to re-lex something, (such as an expanded macro or
a macro argument). Now, we instead parse the macro or argument
originally to a string list, and then re-lex by simply returning each
string from this list in turn.
We do this in the recently added glcpp_parser_lex function that sits
on top of the lower-level glcpp_lex that only deals with text.
This doesn't change any behavior (at least according to the existing
test suite which all still passes) but it brings us much closer to
being able to "finalize" an unexpanded macro as required by the
specification.
Carl Worth [Wed, 19 May 2010 17:07:31 +0000 (10:07 -0700)]
Remove unused NEWLINE token.
We fixed the lexer a while back to never return a NEWLINE token, but
neglected to clean up this declaration.
Carl Worth [Wed, 19 May 2010 17:06:56 +0000 (10:06 -0700)]
Remove unneeded YYLEX_PARAM define.
I'm not sure where this came from, but it's clearly not needed.
Carl Worth [Wed, 19 May 2010 17:05:40 +0000 (10:05 -0700)]
Rename yylex to glcpp_parser_lex and give it a glcpp_parser_t* argument.
Much cleaner this way, (and now our custom lex function has access to
all the parser state which it will need).
Carl Worth [Wed, 19 May 2010 17:01:29 +0000 (10:01 -0700)]
Add a wrapper function around the lexer.
We rename the generated lexer from yylex to glcpp_lex. Then we
implement our own yylex function in glcpp-parse.y that calls
glcpp_lex. This doesn't change the behavior at all yet, but gives us a
place where we can implement alternate lexing in the future.
(We want this because instead of re-lexing from strings for macro
expansion, we want to lex from pre-parsed token lists. We need this so
that when we terminate recursion due to an already active macro
expansion, we can ensure that that symbol never gets expanded again
later.)
Carl Worth [Wed, 19 May 2010 14:57:03 +0000 (07:57 -0700)]
Like previous fix, but for object-like macros (and add a test).
The support for an object-like macro within a macro-invocation
argument was also implemented at one level too high in the
grammar. Fortunately, this is a very simple fix.
Carl Worth [Wed, 19 May 2010 14:49:47 +0000 (07:49 -0700)]
Fix bug as in previous fix, but with multi-token argument.
The previous fix added FUNC_MACRO to a production one level higher in
the grammar than it should have. So it prevented a FUNC_MACRO from
appearing as part of a multi-token argument rather than just alone as
an argument. Fix this (and add a test).
Carl Worth [Wed, 19 May 2010 14:42:42 +0000 (07:42 -0700)]
Fix bug (and test) for an invocation using macro name as a non-macro argument
This adds a second shift/reduce conflict to our grammar. It's basically the
same conflict we had previously, (deciding to shift a '(' after a FUNC_MACRO)
but this time in the "argument" context rather than the "content" context.
It would be nice to not have these, but I think they are unavoidable
(without a lot of pain at least) given the preprocessor specification.
Carl Worth [Wed, 19 May 2010 14:29:22 +0000 (07:29 -0700)]
Fix bug (and add tests) for a function-like macro defined as itself.
This case worked previously, but broke in the recent rewrite of
function-like macro expansion. The recursion was still terminated
correctly, but any parenthesized expression after the macro name was
still being swallowed even though the identifier was not being
expanded as a macro.
The fix is to notice earlier that the identifier is an
already-expanding macro. We let the lexer know this through the
classify_token function so that an already-expanding macro is lexed as
an identifier, not a FUNC_MACRO.
Carl Worth [Wed, 19 May 2010 05:10:04 +0000 (22:10 -0700)]
Rewrite macro handling to support function-like macro invocation in macro values
The rewrite here discards the functions that did direct, recursive
expansion of macro values. Instead, the parser now pushes the macro
definition string over to a stack of buffers for the lexer. This way,
macro expansion gets access to all parsing machinery.
This isn't a small change, but the result is simpler than before (I
think). It passes the entire test suite, including the four tests
added with the previous commit that were failing before.
Carl Worth [Mon, 17 May 2010 20:33:10 +0000 (13:33 -0700)]
Add several tests where the defined value of a macro is (or looks like) a macro
Many of these look quite similar to existing tests that are handled
correctly, yet none of these work. For example, in test 30 we have a
simple non-function macro "foo" that is defined as "bar(baz(success))"
and obviously non-function macro expansion has been working for a long
time. Similarly, if we had text of "bar(baz(success))" it would be
expanded correctly as well.
But when this otherwise functioning text appears as the body of a
macro, things don't work at all.
This is pointing out a fundamental problem with the current
approach. The current code does a recursive expansion of a macro
definition, but this doesn't involve the parsing machinery, so it
can't actually handle things like an arbitrary nesting of parentheses.
The fix will require the parser to stuff macro values back into the
lexer to get at all of the existing machinery when expanding macros.
Carl Worth [Mon, 17 May 2010 20:19:04 +0000 (13:19 -0700)]
Fix (and add test for) function-like macro invocation with newlines.
The test has a newline before the left parenthesis, and newlines to
separate the parentheses from the argument.
The fix involves more state in the lexer to only return a NEWLINE
token when terminating a directive. This is very similar to our
previous fix with extra lexer state to only return the SPACE token
when it would be significant for the parser.
With this change, the exact number and positioning of newlines in the
output is now different compared to "gcc -E" so we add a -B option to
diff when testing to ignore that.
Carl Worth [Mon, 17 May 2010 19:45:16 +0000 (12:45 -0700)]
Expect 1 shift/reduce conflict.
The most recent fix to the parser introduced a shift/reduce
conflict. We document this conflict here, and tell bison that it need
not report it (since I verified that it's being resolved in the
direction desired).
For the record, I did write additional lexer code to eliminate this
conflict, but it was quite fragile, (would not accept a newline
between a function-like macro name and the left parenthesis, for
example).
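Assuming the grammar is ordinary bison input, documenting and silencing a known conflict is typically done with the %expect directive, roughly:

```
/* Illustrative only: declare that exactly one shift/reduce conflict is
 * expected ('(' after FUNC_MACRO is shifted, as desired). */
%expect 1
```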
Carl Worth [Mon, 17 May 2010 17:34:29 +0000 (10:34 -0700)]
Fix bug (and add test) for a function-like-macro appearing as a non-macro.
That is, when a function-like macro appears in the content without
parentheses it should be accepted and passed on through, (previously
the parser was regarding this as a syntax error).
Carl Worth [Mon, 17 May 2010 17:15:23 +0000 (10:15 -0700)]
Add test and fix bug leading to infinite recursion.
The test case here is simply "#define foo foo" and "#define bar foo"
and then attempting to expand "bar".
Previously, our termination condition for the recursion was overly
simple---just looking for the single identifier that began the
expansion. We now fix this to maintain a stack of identifiers and
terminate when any one of them occurs in the replacement list.
Carl Worth [Sat, 15 May 2010 00:29:24 +0000 (17:29 -0700)]
Fix two whitespace bugs in the lexer.
The first bug was not allowing whitespace between '#' and the
directive name.
The second bug was swallowing a terminating newline along with any
trailing whitespace on a line.
With these two fixes, and the previous commit to stop emitting SPACE
tokens, the recently added extra-whitespace test now passes.
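In flex terms, the two fixes amount to rules along these lines. This is an illustrative fragment with invented names, not the actual glcpp lexer: whitespace is permitted around the '#', and trailing whitespace is matched with trailing context so the newline itself is left for its own rule.

```
HSPACE  [ \t]
%%
^{HSPACE}*"#"{HSPACE}*"define"  { return DEFINE; }
{HSPACE}+/"\n"                  { /* trailing context: don't consume \n */ }
```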
Carl Worth [Sat, 15 May 2010 00:08:45 +0000 (17:08 -0700)]
Don't return SPACE tokens unless strictly needed.
This reverts the unconditional return of SPACE tokens from the lexer
from commit
48b94da0994b44e41324a2419117dcd81facce8b .
That commit seemed useful because it kept the lexer simpler, but the
presence of SPACE tokens is causing lots of extra complication for the
parser itself, (redundant productions other than whitespace
differences, several productions buggy in the case of extra
whitespace, etc.)
Of course, we'd prefer to never have any whitespace token, but that's
not possible with the need to distinguish between "#define foo()" and
"#define foo ()". So we'll accept a little bit of pain in the lexer,
(enough state to support this special-case token), in exchange for
keeping most of the parser blissfully ignorant of whether tokens are
separated by whitespace or not.
This change does mean that our output now differs from that of "gcc -E",
but only in whitespace. So we test with "diff -w" now to ignore those
differences.
Carl Worth [Fri, 14 May 2010 23:58:00 +0000 (16:58 -0700)]
Add test with extra whitespace in macro definitions and invocations.
This whitespace is not dealt with in an elegant way yet so this test
does not pass currently.