1 <html>
2 <head>
3 <title>PLY (Python Lex-Yacc)</title>
4 </head>
5 <body bgcolor="#ffffff">
6
7 <h1>PLY (Python Lex-Yacc)</h1>
8
9 <b>
10 David M. Beazley <br>
11 dave@dabeaz.com<br>
12 </b>
13
14 <p>
15 <b>PLY Version: 3.0</b>
16 <p>
17
18 <!-- INDEX -->
19 <div class="sectiontoc">
20 <ul>
21 <li><a href="#ply_nn1">Preface and Requirements</a>
22 <li><a href="#ply_nn1">Introduction</a>
23 <li><a href="#ply_nn2">PLY Overview</a>
24 <li><a href="#ply_nn3">Lex</a>
25 <ul>
26 <li><a href="#ply_nn4">Lex Example</a>
27 <li><a href="#ply_nn5">The tokens list</a>
28 <li><a href="#ply_nn6">Specification of tokens</a>
29 <li><a href="#ply_nn7">Token values</a>
30 <li><a href="#ply_nn8">Discarded tokens</a>
31 <li><a href="#ply_nn9">Line numbers and positional information</a>
32 <li><a href="#ply_nn10">Ignored characters</a>
33 <li><a href="#ply_nn11">Literal characters</a>
34 <li><a href="#ply_nn12">Error handling</a>
35 <li><a href="#ply_nn13">Building and using the lexer</a>
36 <li><a href="#ply_nn14">The @TOKEN decorator</a>
37 <li><a href="#ply_nn15">Optimized mode</a>
38 <li><a href="#ply_nn16">Debugging</a>
39 <li><a href="#ply_nn17">Alternative specification of lexers</a>
40 <li><a href="#ply_nn18">Maintaining state</a>
41 <li><a href="#ply_nn19">Lexer cloning</a>
42 <li><a href="#ply_nn20">Internal lexer state</a>
43 <li><a href="#ply_nn21">Conditional lexing and start conditions</a>
44 <li><a href="#ply_nn21">Miscellaneous Issues</a>
45 </ul>
46 <li><a href="#ply_nn22">Parsing basics</a>
47 <li><a href="#ply_nn23">Yacc</a>
48 <ul>
49 <li><a href="#ply_nn24">An example</a>
50 <li><a href="#ply_nn25">Combining Grammar Rule Functions</a>
51 <li><a href="#ply_nn26">Character Literals</a>
52 <li><a href="#ply_nn26">Empty Productions</a>
53 <li><a href="#ply_nn28">Changing the starting symbol</a>
54 <li><a href="#ply_nn27">Dealing With Ambiguous Grammars</a>
55 <li><a href="#ply_nn28">The parser.out file</a>
56 <li><a href="#ply_nn29">Syntax Error Handling</a>
57 <ul>
58 <li><a href="#ply_nn30">Recovery and resynchronization with error rules</a>
59 <li><a href="#ply_nn31">Panic mode recovery</a>
60 <li><a href="#ply_nn35">Signaling an error from a production</a>
61 <li><a href="#ply_nn32">General comments on error handling</a>
62 </ul>
63 <li><a href="#ply_nn33">Line Number and Position Tracking</a>
64 <li><a href="#ply_nn34">AST Construction</a>
65 <li><a href="#ply_nn35">Embedded Actions</a>
66 <li><a href="#ply_nn36">Miscellaneous Yacc Notes</a>
67 </ul>
68 <li><a href="#ply_nn37">Multiple Parsers and Lexers</a>
69 <li><a href="#ply_nn38">Using Python's Optimized Mode</a>
70 <li><a href="#ply_nn44">Advanced Debugging</a>
71 <ul>
72 <li><a href="#ply_nn45">Debugging the lex() and yacc() commands</a>
73 <li><a href="#ply_nn46">Run-time Debugging</a>
74 </ul>
75 <li><a href="#ply_nn39">Where to go from here?</a>
76 </ul>
77 </div>
78 <!-- INDEX -->
79
80
81
82 <H2><a name="ply_nn1"></a>1. Preface and Requirements</H2>
83
84
85 <p>
86 This document provides an overview of lexing and parsing with PLY.
87 Given the intrinsic complexity of parsing, I would strongly advise
88 that you read (or at least skim) this entire document before jumping
89 into a big development project with PLY.
90 </p>
91
92 <p>
93 PLY-3.0 is compatible with both Python 2 and Python 3. Be aware that
94 Python 3 support is new and has not been extensively tested (although
95 all of the examples and unit tests pass under Python 3.0). If you are
96 using Python 2, you should try to use Python 2.4 or newer. Although PLY
97 works with versions as far back as Python 2.2, some of its optional features
98 require more modern library modules.
99 </p>
100
101 <H2><a name="ply_nn1"></a>2. Introduction</H2>
102
103
104 PLY is a pure-Python implementation of the popular compiler
105 construction tools lex and yacc. The main goal of PLY is to stay
106 fairly faithful to the way in which traditional lex/yacc tools work.
107 This includes supporting LALR(1) parsing as well as providing
108 extensive input validation, error reporting, and diagnostics. Thus,
109 if you've used yacc in another programming language, it should be
110 relatively straightforward to use PLY.
111
112 <p>
113 Early versions of PLY were developed to support an Introduction to
114 Compilers Course I taught in 2001 at the University of Chicago. In this course,
115 students built a fully functional compiler for a simple Pascal-like
116 language. Their compiler, implemented entirely in Python, had to
117 include lexical analysis, parsing, type checking, type inference,
118 nested scoping, and code generation for the SPARC processor.
119 Approximately 30 different compiler implementations were completed in
120 this course. Most of PLY's interface and operation has been influenced by common
121 usability problems encountered by students. Since 2001, PLY has
122 continued to be improved as feedback has been received from users.
123 PLY-3.0 represents a major refactoring of the original implementation
124 with an eye towards future enhancements.
125
126 <p>
127 Since PLY was primarily developed as an instructional tool, you will
128 find it to be fairly picky about token and grammar rule
129 specification. In part, this
130 added formality is meant to catch common programming mistakes made by
131 novice users. However, advanced users will also find such features to
132 be useful when building complicated grammars for real programming
133 languages. It should also be noted that PLY does not provide much in
134 the way of bells and whistles (e.g., automatic construction of
135 abstract syntax trees, tree traversal, etc.). Nor would I consider it
136 to be a parsing framework. Instead, you will find a bare-bones, yet
137 fully capable lex/yacc implementation written entirely in Python.
138
139 <p>
140 The rest of this document assumes that you are somewhat familiar with
141 parsing theory, syntax directed translation, and the use of compiler
142 construction tools such as lex and yacc in other programming
143 languages. If you are unfamiliar with these topics, you will probably
144 want to consult an introductory text such as "Compilers: Principles,
145 Techniques, and Tools", by Aho, Sethi, and Ullman. O'Reilly's "Lex
146 and Yacc" by John Levine may also be handy. In fact, the O'Reilly book can be
147 used as a reference for PLY as the concepts are virtually identical.
148
149 <H2><a name="ply_nn2"></a>3. PLY Overview</H2>
150
151
152 PLY consists of two separate modules: <tt>lex.py</tt> and
153 <tt>yacc.py</tt>, both of which are found in a Python package
154 called <tt>ply</tt>. The <tt>lex.py</tt> module is used to break input text into a
155 collection of tokens specified by a collection of regular expression
156 rules. <tt>yacc.py</tt> is used to recognize language syntax that has
157 been specified in the form of a context free grammar. <tt>yacc.py</tt> uses LR parsing and generates its parsing tables
158 using either the LALR(1) (the default) or SLR table generation algorithms.
159
160 <p>
161 The two tools are meant to work together. Specifically,
162 <tt>lex.py</tt> provides an external interface in the form of a
163 <tt>token()</tt> function that returns the next valid token on the
164 input stream. <tt>yacc.py</tt> calls this repeatedly to retrieve
165 tokens and invoke grammar rules. The output of <tt>yacc.py</tt> is
166 often an Abstract Syntax Tree (AST). However, this is entirely up to
167 the user. If desired, <tt>yacc.py</tt> can also be used to implement
168 simple one-pass compilers.
169
170 <p>
171 Like its Unix counterpart, <tt>yacc.py</tt> provides most of the
172 features you expect including extensive error checking, grammar
173 validation, support for empty productions, error tokens, and ambiguity
174 resolution via precedence rules. In fact, everything that is possible in traditional yacc
175 should be supported in PLY.
176
177 <p>
178 The primary difference between
179 <tt>yacc.py</tt> and Unix <tt>yacc</tt> is that <tt>yacc.py</tt>
180 doesn't involve a separate code-generation process.
181 Instead, PLY relies on reflection (introspection)
182 to build its lexers and parsers. Unlike traditional lex/yacc which
183 require a special input file that is converted into a separate source
184 file, the specifications given to PLY <em>are</em> valid Python
185 programs. This means that there are no extra source files nor is
186 there a special compiler construction step (e.g., running yacc to
187 generate Python code for the compiler). Since the generation of the
188 parsing tables is relatively expensive, PLY caches the results and
189 saves them to a file. If no changes are detected in the input source,
190 the tables are read from the cache. Otherwise, they are regenerated.
191
192 <H2><a name="ply_nn3"></a>4. Lex</H2>
193
194
195 <tt>lex.py</tt> is used to tokenize an input string. For example, suppose
196 you're writing a programming language and a user supplied the following input string:
197
198 <blockquote>
199 <pre>
200 x = 3 + 42 * (s - t)
201 </pre>
202 </blockquote>
203
204 A tokenizer splits the string into individual tokens
205
206 <blockquote>
207 <pre>
208 'x','=', '3', '+', '42', '*', '(', 's', '-', 't', ')'
209 </pre>
210 </blockquote>
211
212 Tokens are usually given names to indicate what they are. For example:
213
214 <blockquote>
215 <pre>
216 'ID','EQUALS','NUMBER','PLUS','NUMBER','TIMES',
217 'LPAREN','ID','MINUS','ID','RPAREN'
218 </pre>
219 </blockquote>
220
221 More specifically, the input is broken into pairs of token types and values. For example:
222
223 <blockquote>
224 <pre>
225 ('ID','x'), ('EQUALS','='), ('NUMBER','3'),
226 ('PLUS','+'), ('NUMBER','42'), ('TIMES','*'),
227 ('LPAREN','('), ('ID','s'), ('MINUS','-'),
228 ('ID','t'), ('RPAREN',')'
229 </pre>
230 </blockquote>
231
232 The identification of tokens is typically done by writing a series of regular expression
233 rules. The next section shows how this is done using <tt>lex.py</tt>.
234
235 <H3><a name="ply_nn4"></a>4.1 Lex Example</H3>
236
237
238 The following example shows how <tt>lex.py</tt> is used to write a simple tokenizer.
239
240 <blockquote>
241 <pre>
242 # ------------------------------------------------------------
243 # calclex.py
244 #
245 # tokenizer for a simple expression evaluator for
246 # numbers and +,-,*,/
247 # ------------------------------------------------------------
248 import ply.lex as lex
249
250 # List of token names. This is always required
251 tokens = (
252 'NUMBER',
253 'PLUS',
254 'MINUS',
255 'TIMES',
256 'DIVIDE',
257 'LPAREN',
258 'RPAREN',
259 )
260
261 # Regular expression rules for simple tokens
262 t_PLUS = r'\+'
263 t_MINUS = r'-'
264 t_TIMES = r'\*'
265 t_DIVIDE = r'/'
266 t_LPAREN = r'\('
267 t_RPAREN = r'\)'
268
269 # A regular expression rule with some action code
270 def t_NUMBER(t):
271 r'\d+'
272 t.value = int(t.value)
273 return t
274
275 # Define a rule so we can track line numbers
276 def t_newline(t):
277 r'\n+'
278 t.lexer.lineno += len(t.value)
279
280 # A string containing ignored characters (spaces and tabs)
281 t_ignore = ' \t'
282
283 # Error handling rule
284 def t_error(t):
285 print "Illegal character '%s'" % t.value[0]
286 t.lexer.skip(1)
287
288 # Build the lexer
289 lexer = lex.lex()
290
291 </pre>
292 </blockquote>
293 To use the lexer, you first need to feed it some input text using
294 its <tt>input()</tt> method. After that, repeated calls
295 to <tt>token()</tt> produce tokens. The following code shows how this
296 works:
297
298 <blockquote>
299 <pre>
300
301 # Test it out
302 data = '''
303 3 + 4 * 10
304 + -20 *2
305 '''
306
307 # Give the lexer some input
308 lexer.input(data)
309
310 # Tokenize
311 while True:
312 tok = lexer.token()
313 if not tok: break # No more input
314 print tok
315 </pre>
316 </blockquote>
317
318 When executed, the example will produce the following output:
319
320 <blockquote>
321 <pre>
322 $ python example.py
323 LexToken(NUMBER,3,2,1)
324 LexToken(PLUS,'+',2,3)
325 LexToken(NUMBER,4,2,5)
326 LexToken(TIMES,'*',2,7)
327 LexToken(NUMBER,10,2,10)
328 LexToken(PLUS,'+',3,14)
329 LexToken(MINUS,'-',3,16)
330 LexToken(NUMBER,20,3,18)
331 LexToken(TIMES,'*',3,20)
332 LexToken(NUMBER,2,3,21)
333 </pre>
334 </blockquote>
335
336 Lexers also support the iteration protocol. So, you can write the above loop as follows:
337
338 <blockquote>
339 <pre>
340 for tok in lexer:
341 print tok
342 </pre>
343 </blockquote>
344
345 The tokens returned by <tt>lexer.token()</tt> are instances
346 of <tt>LexToken</tt>. This object has
347 attributes <tt>tok.type</tt>, <tt>tok.value</tt>,
348 <tt>tok.lineno</tt>, and <tt>tok.lexpos</tt>. The following code shows an example of
349 accessing these attributes:
350
351 <blockquote>
352 <pre>
353 # Tokenize
354 while True:
355 tok = lexer.token()
356 if not tok: break # No more input
357 print tok.type, tok.value, tok.lineno, tok.lexpos
358 </pre>
359 </blockquote>
360
361 The <tt>tok.type</tt> and <tt>tok.value</tt> attributes contain the
362 type and value of the token itself.
363 <tt>tok.lineno</tt> and <tt>tok.lexpos</tt> contain information about
364 the location of the token. <tt>tok.lexpos</tt> is the index of the
365 token relative to the start of the input text.
366
367 <H3><a name="ply_nn5"></a>4.2 The tokens list</H3>
368
369
370 All lexers must provide a list <tt>tokens</tt> that defines all of the possible token
371 names that can be produced by the lexer. This list is always required
372 and is used to perform a variety of validation checks. The tokens list is also used by the
373 <tt>yacc.py</tt> module to identify terminals.
374
375 <p>
376 In the example, the following code specified the token names:
377
378 <blockquote>
379 <pre>
380 tokens = (
381 'NUMBER',
382 'PLUS',
383 'MINUS',
384 'TIMES',
385 'DIVIDE',
386 'LPAREN',
387 'RPAREN',
388 )
389 </pre>
390 </blockquote>
391
392 <H3><a name="ply_nn6"></a>4.3 Specification of tokens</H3>
393
394
395 Each token is specified by writing a regular expression rule. Each of these rules
396 is defined by making a declaration with a special prefix <tt>t_</tt> to indicate that it
397 defines a token. For simple tokens, the regular expression can
398 be specified as strings such as this (note: Python raw strings are used since they are the
399 most convenient way to write regular expression strings):
400
401 <blockquote>
402 <pre>
403 t_PLUS = r'\+'
404 </pre>
405 </blockquote>
406
407 In this case, the name following the <tt>t_</tt> must exactly match one of the
408 names supplied in <tt>tokens</tt>. If some kind of action needs to be performed,
409 a token rule can be specified as a function. For example, this rule matches numbers and
410 converts the string into a Python integer.
411
412 <blockquote>
413 <pre>
414 def t_NUMBER(t):
415 r'\d+'
416 t.value = int(t.value)
417 return t
418 </pre>
419 </blockquote>
420
421 When a function is used, the regular expression rule is specified in the function documentation string.
422 The function always takes a single argument which is an instance of
423 <tt>LexToken</tt>. This object has attributes of <tt>t.type</tt> which is the token type (as a string),
424 <tt>t.value</tt> which is the lexeme (the actual text matched), <tt>t.lineno</tt> which is the current line number, and <tt>t.lexpos</tt> which
425 is the position of the token relative to the beginning of the input text.
426 By default, <tt>t.type</tt> is set to the name following the <tt>t_</tt> prefix. The action
427 function can modify the contents of the <tt>LexToken</tt> object as appropriate. However,
428 when it is done, the resulting token should be returned. If no value is returned by the action
429 function, the token is simply discarded and the next token read.
430
431 <p>
432 Internally, <tt>lex.py</tt> uses the <tt>re</tt> module to do its pattern matching. When building the master regular expression,
433 rules are added in the following order:
434 <p>
435 <ol>
436 <li>All tokens defined by functions are added in the same order as they appear in the lexer file.
437 <li>Tokens defined by strings are added next by sorting them in order of decreasing regular expression length (longer expressions
438 are added first).
439 </ol>
440 <p>
441 Without this ordering, it can be difficult to correctly match certain types of tokens. For example, if you
442 wanted to have separate tokens for "=" and "==", you need to make sure that "==" is checked first. By sorting regular
443 expressions in order of decreasing length, this problem is solved for rules defined as strings. For functions,
444 the order can be explicitly controlled since rules appearing first are checked first.
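
<p>
For instance, the following sketch relies on function order so that "==" is tried before "=" (the
token names <tt>EQEQ</tt> and <tt>ASSIGN</tt> are only illustrative and would also have to appear
in the <tt>tokens</tt> list):

<blockquote>
<pre>
def t_EQEQ(t):
    r'=='
    return t

def t_ASSIGN(t):        # Defined after t_EQEQ, so it is checked second
    r'='
    return t
</pre>
</blockquote>

Had both patterns been written as strings instead, the same effect would come from the automatic
sorting by decreasing regular expression length.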
445
446 <p>
447 To handle reserved words, you should write a single rule to match an
448 identifier and do a special name lookup in a function like this:
449
450 <blockquote>
451 <pre>
452 reserved = {
453 'if' : 'IF',
454 'then' : 'THEN',
455 'else' : 'ELSE',
456 'while' : 'WHILE',
457 ...
458 }
459
460 tokens = ['LPAREN','RPAREN',...,'ID'] + list(reserved.values())
461
462 def t_ID(t):
463 r'[a-zA-Z_][a-zA-Z_0-9]*'
464 t.type = reserved.get(t.value,'ID') # Check for reserved words
465 return t
466 </pre>
467 </blockquote>
468
469 This approach greatly reduces the number of regular expression rules and is likely to make things a little faster.
470
471 <p>
472 <b>Note:</b> You should avoid writing individual rules for reserved words. For example, if you write rules like this,
473
474 <blockquote>
475 <pre>
476 t_FOR = r'for'
477 t_PRINT = r'print'
478 </pre>
479 </blockquote>
480
481 those rules will be triggered for identifiers that include those words as a prefix such as "forget" or "printed". This is probably not
482 what you want.
483
484 <H3><a name="ply_nn7"></a>4.4 Token values</H3>
485
486
487 When tokens are returned by lex, they have a value that is stored in the <tt>value</tt> attribute. Normally, the value is the text
488 that was matched. However, the value can be assigned to any Python object. For instance, when lexing identifiers, you may
489 want to return both the identifier name and information from some sort of symbol table. To do this, you might write a rule like this:
490
491 <blockquote>
492 <pre>
493 def t_ID(t):
494 ...
495 # Look up symbol table information and return a tuple
496 t.value = (t.value, symbol_lookup(t.value))
497 ...
498 return t
499 </pre>
500 </blockquote>
501
502 It is important to note that storing data in other attribute names is <em>not</em> recommended. The <tt>yacc.py</tt> module only exposes the
503 contents of the <tt>value</tt> attribute. Thus, accessing other attributes may be unnecessarily awkward. If you
504 need to store multiple values on a token, assign a tuple, dictionary, or instance to <tt>value</tt>.
505
506 <H3><a name="ply_nn8"></a>4.5 Discarded tokens</H3>
507
508
509 To discard a token, such as a comment, simply define a token rule that returns no value. For example:
510
511 <blockquote>
512 <pre>
513 def t_COMMENT(t):
514 r'\#.*'
515 pass
516 # No return value. Token discarded
517 </pre>
518 </blockquote>
519
520 Alternatively, you can include the prefix "ignore_" in the token declaration to force a token to be ignored. For example:
521
522 <blockquote>
523 <pre>
524 t_ignore_COMMENT = r'\#.*'
525 </pre>
526 </blockquote>
527
528 Be advised that if you are ignoring many different kinds of text, you may still want to use functions since these provide more precise
529 control over the order in which regular expressions are matched (i.e., functions are matched in order of specification whereas strings are
530 sorted by regular expression length).
531
532 <H3><a name="ply_nn9"></a>4.6 Line numbers and positional information</H3>
533
534
535 <p>By default, <tt>lex.py</tt> knows nothing about line numbers. This is because <tt>lex.py</tt> doesn't know anything
536 about what constitutes a "line" of input (e.g., the newline character or even if the input is textual data).
537 To update this information, you need to write a special rule. In the example, the <tt>t_newline()</tt> rule shows how to do this.
538
539 <blockquote>
540 <pre>
541 # Define a rule so we can track line numbers
542 def t_newline(t):
543 r'\n+'
544 t.lexer.lineno += len(t.value)
545 </pre>
546 </blockquote>
547 Within the rule, the <tt>lineno</tt> attribute of the underlying lexer <tt>t.lexer</tt> is updated.
548 After the line number is updated, the token is simply discarded since nothing is returned.
549
550 <p>
551 <tt>lex.py</tt> does not perform any kind of automatic column tracking. However, it does record positional
552 information related to each token in the <tt>lexpos</tt> attribute. Using this, it is usually possible to compute
553 column information as a separate step. For instance, just count backwards until you reach a newline.
554
555 <blockquote>
556 <pre>
557 # Compute column.
558 # input is the input text string
559 # token is a token instance
560 def find_column(input,token):
561 last_cr = input.rfind('\n',0,token.lexpos)
562 # rfind() returns -1 when there is no preceding newline, so the
563 # subtraction below gives a 1-based column on every line
564 column = token.lexpos - last_cr
565 return column
566 </pre>
567 </blockquote>
568
569 Since column information is often only useful in the context of error handling, calculating the column
570 position can be performed when needed as opposed to doing it for each token.
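
<p>
For example, a minimal sketch of an error handler that only computes the column for the offending
character might use the <tt>find_column()</tt> function above like this:

<blockquote>
<pre>
def t_error(t):
    col = find_column(t.lexer.lexdata, t)   # lexdata is the full input string
    print "Illegal character '%s' at column %d" % (t.value[0], col)
    t.lexer.skip(1)
</pre>
</blockquote>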
571
572 <H3><a name="ply_nn10"></a>4.7 Ignored characters</H3>
573
574
575 <p>
576 The special <tt>t_ignore</tt> rule is reserved by <tt>lex.py</tt> for characters
577 that should be completely ignored in the input stream.
578 Usually this is used to skip over whitespace and other non-essential characters.
579 Although it is possible to define a regular expression rule for whitespace in a manner
580 similar to <tt>t_newline()</tt>, the use of <tt>t_ignore</tt> provides substantially better
581 lexing performance because it is handled as a special case and is checked in a much
582 more efficient manner than the normal regular expression rules.
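
<p>
For example, both of the following discard spaces and tabs, but the string form is the faster
choice (the function-based variant is shown only for comparison; you would use one or the other):

<blockquote>
<pre>
# Preferred: handled as a special case by the lexer
t_ignore = ' \t'

# Equivalent, but slower, function-based alternative
def t_WHITESPACE(t):
    r'[ \t]+'
    pass
</pre>
</blockquote>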
583
584 <H3><a name="ply_nn11"></a>4.8 Literal characters</H3>
585
586
587 <p>
588 Literal characters can be specified by defining a variable <tt>literals</tt> in your lexing module. For example:
589
590 <blockquote>
591 <pre>
592 literals = [ '+','-','*','/' ]
593 </pre>
594 </blockquote>
595
596 or alternatively
597
598 <blockquote>
599 <pre>
600 literals = "+-*/"
601 </pre>
602 </blockquote>
603
604 A literal character is simply a single character that is returned "as is" when encountered by the lexer. Literals are checked
605 after all of the defined regular expression rules. Thus, if a rule starts with one of the literal characters, it will always
606 take precedence.
607 <p>
608 When a literal token is returned, both its <tt>type</tt> and <tt>value</tt> attributes are set to the character itself. For example, <tt>'+'</tt>.
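
<p>
A minimal sketch that combines <tt>literals</tt> with a normal token rule might look like this:

<blockquote>
<pre>
import ply.lex as lex

tokens   = ('NUMBER',)
literals = "+-*/"

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

t_ignore = ' \t'

def t_error(t):
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input("3 + 4")
# The second token returned has type '+' and value '+'
</pre>
</blockquote>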
609
610 <H3><a name="ply_nn12"></a>4.9 Error handling</H3>
611
612
613 <p>
614 Finally, the <tt>t_error()</tt>
615 function is used to handle lexing errors that occur when illegal
616 characters are detected. In this case, the <tt>t.value</tt> attribute contains the
617 rest of the input string that has not been tokenized. In the example, the error function
618 was defined as follows:
619
620 <blockquote>
621 <pre>
622 # Error handling rule
623 def t_error(t):
624 print "Illegal character '%s'" % t.value[0]
625 t.lexer.skip(1)
626 </pre>
627 </blockquote>
628
629 In this case, we simply print the offending character and skip ahead one character by calling <tt>t.lexer.skip(1)</tt>.
630
631 <H3><a name="ply_nn13"></a>4.10 Building and using the lexer</H3>
632
633
634 <p>
635 To build the lexer, the function <tt>lex.lex()</tt> is used. This function
636 uses Python reflection (or introspection) to read the the regular expression rules
637 out of the calling context and build the lexer. Once the lexer has been built, two methods can
638 be used to control the lexer.
639
640 <ul>
641 <li><tt>lexer.input(data)</tt>. Reset the lexer and store a new input string.
642 <li><tt>lexer.token()</tt>. Return the next token. Returns a special <tt>LexToken</tt> instance on success or
643 None if the end of the input text has been reached.
644 </ul>
645
646 The preferred way to use PLY is to invoke the above methods directly on the lexer object returned by the
647 <tt>lex()</tt> function. The legacy interface to PLY involves module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt>.
648 For example:
649
650 <blockquote>
651 <pre>
652 lex.lex()
653 lex.input(sometext)
654 while 1:
655 tok = lex.token()
656 if not tok: break
657 print tok
658 </pre>
659 </blockquote>
660
661 <p>
662 In this example, the module-level functions <tt>lex.input()</tt> and <tt>lex.token()</tt> are bound to the <tt>input()</tt>
663 and <tt>token()</tt> methods of the last lexer created by the lex module. This interface may go away at some point so
664 it's probably best not to use it.
665
666 <H3><a name="ply_nn14"></a>4.11 The @TOKEN decorator</H3>
667
668
669 In some applications, you may want to build tokens from a series of
670 more complex regular expression rules. For example:
671
672 <blockquote>
673 <pre>
674 digit = r'([0-9])'
675 nondigit = r'([_A-Za-z])'
676 identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'
677
678 def t_ID(t):
679 # want docstring to be identifier above. ?????
680 ...
681 </pre>
682 </blockquote>
683
684 In this case, we want the regular expression rule for <tt>ID</tt> to be one of the variables above. However, there is no
685 way to directly specify this using a normal documentation string. To solve this problem, you can use the <tt>@TOKEN</tt>
686 decorator. For example:
687
688 <blockquote>
689 <pre>
690 from ply.lex import TOKEN
691
692 @TOKEN(identifier)
693 def t_ID(t):
694 ...
695 </pre>
696 </blockquote>
697
698 This will attach <tt>identifier</tt> to the docstring for <tt>t_ID()</tt> allowing <tt>lex.py</tt> to work normally. An alternative
699 approach this problem is to set the docstring directly like this:
700
701 <blockquote>
702 <pre>
703 def t_ID(t):
704 ...
705
706 t_ID.__doc__ = identifier
707 </pre>
708 </blockquote>
709
710 <b>NOTE:</b> Use of <tt>@TOKEN</tt> requires Python-2.4 or newer. If you're concerned about backwards compatibility with older
711 versions of Python, use the alternative approach of setting the docstring directly.
712
713 <H3><a name="ply_nn15"></a>4.12 Optimized mode</H3>
714
715
716 For improved performance, it may be desirable to use Python's
717 optimized mode (e.g., running Python with the <tt>-O</tt>
718 option). However, doing so causes Python to ignore documentation
719 strings. This presents special problems for <tt>lex.py</tt>. To
720 handle this case, you can create your lexer using
721 the <tt>optimize</tt> option as follows:
722
723 <blockquote>
724 <pre>
725 lexer = lex.lex(optimize=1)
726 </pre>
727 </blockquote>
728
729 Next, run Python in its normal operating mode. When you do
730 this, <tt>lex.py</tt> will write a file called <tt>lextab.py</tt> to
731 the current directory. This file contains all of the regular
732 expression rules and tables used during lexing. On subsequent
733 executions,
734 <tt>lextab.py</tt> will simply be imported to build the lexer. This
735 approach substantially improves the startup time of the lexer and it
736 works in Python's optimized mode.
737
738 <p>
739 To change the name of the lexer-generated file, use the <tt>lextab</tt> keyword argument. For example:
740
741 <blockquote>
742 <pre>
743 lexer = lex.lex(optimize=1,lextab="footab")
744 </pre>
745 </blockquote>
746
747 When running in optimized mode, it is important to note that lex disables most error checking. Thus, this is really only recommended
748 if you're sure everything is working correctly and you're ready to start releasing production code.
749
750 <H3><a name="ply_nn16"></a>4.13 Debugging</H3>
751
752
753 For the purpose of debugging, you can run <tt>lex()</tt> in a debugging mode as follows:
754
755 <blockquote>
756 <pre>
757 lexer = lex.lex(debug=1)
758 </pre>
759 </blockquote>
760
761 <p>
762 This will produce various sorts of debugging information including all of the added rules,
763 the master regular expressions used by the lexer, and tokens generated during lexing.
764 </p>
765
766 <p>
767 In addition, <tt>lex.py</tt> comes with a simple main function which
768 will either tokenize input read from standard input or from a file specified
769 on the command line. To use it, simply put this in your lexer:
770 </p>
771
772 <blockquote>
773 <pre>
774 if __name__ == '__main__':
775 lex.runmain()
776 </pre>
777 </blockquote>
778
779 Please refer to the "Debugging" section near the end for some more advanced details
780 of debugging.
781
782 <H3><a name="ply_nn17"></a>4.14 Alternative specification of lexers</H3>
783
784
785 As shown in the example, lexers are specified all within one Python module. If you want to
786 put token rules in a different module from the one in which you invoke <tt>lex()</tt>, use the
787 <tt>module</tt> keyword argument.
788
789 <p>
790 For example, you might have a dedicated module that just contains
791 the token rules:
792
793 <blockquote>
794 <pre>
795 # module: tokrules.py
796 # This module just contains the lexing rules
797
798 # List of token names. This is always required
799 tokens = (
800 'NUMBER',
801 'PLUS',
802 'MINUS',
803 'TIMES',
804 'DIVIDE',
805 'LPAREN',
806 'RPAREN',
807 )
808
809 # Regular expression rules for simple tokens
810 t_PLUS = r'\+'
811 t_MINUS = r'-'
812 t_TIMES = r'\*'
813 t_DIVIDE = r'/'
814 t_LPAREN = r'\('
815 t_RPAREN = r'\)'
816
817 # A regular expression rule with some action code
818 def t_NUMBER(t):
819 r'\d+'
820 t.value = int(t.value)
821 return t
822
823 # Define a rule so we can track line numbers
824 def t_newline(t):
825 r'\n+'
826 t.lexer.lineno += len(t.value)
827
828 # A string containing ignored characters (spaces and tabs)
829 t_ignore = ' \t'
830
831 # Error handling rule
832 def t_error(t):
833 print "Illegal character '%s'" % t.value[0]
834 t.lexer.skip(1)
835 </pre>
836 </blockquote>
837
838 Now, if you wanted to build a tokenizer from these rules from within a different module, you would do the following (shown for Python interactive mode):
839
840 <blockquote>
841 <pre>
842 >>> import tokrules
843 >>> <b>lexer = lex.lex(module=tokrules)</b>
844 >>> lexer.input("3 + 4")
845 >>> lexer.token()
846 LexToken(NUMBER,3,1,0)
847 >>> lexer.token()
848 LexToken(PLUS,'+',1,2)
849 >>> lexer.token()
850 LexToken(NUMBER,4,1,4)
851 >>> lexer.token()
852 None
853 >>>
854 </pre>
855 </blockquote>
856
857 The <tt>module</tt> option can also be used to define lexers from instances of a class. For example:
858
859 <blockquote>
860 <pre>
861 import ply.lex as lex
862
863 class MyLexer:
864 # List of token names. This is always required
865 tokens = (
866 'NUMBER',
867 'PLUS',
868 'MINUS',
869 'TIMES',
870 'DIVIDE',
871 'LPAREN',
872 'RPAREN',
873 )
874
875 # Regular expression rules for simple tokens
876 t_PLUS = r'\+'
877 t_MINUS = r'-'
878 t_TIMES = r'\*'
879 t_DIVIDE = r'/'
880 t_LPAREN = r'\('
881 t_RPAREN = r'\)'
882
883 # A regular expression rule with some action code
884 # Note addition of self parameter since we're in a class
885 def t_NUMBER(self,t):
886 r'\d+'
887 t.value = int(t.value)
888 return t
889
890 # Define a rule so we can track line numbers
891 def t_newline(self,t):
892 r'\n+'
893 t.lexer.lineno += len(t.value)
894
895 # A string containing ignored characters (spaces and tabs)
896 t_ignore = ' \t'
897
898 # Error handling rule
899 def t_error(self,t):
900 print "Illegal character '%s'" % t.value[0]
901 t.lexer.skip(1)
902
903 <b># Build the lexer
904 def build(self,**kwargs):
905 self.lexer = lex.lex(module=self, **kwargs)</b>
906
907 # Test it out
908 def test(self,data):
909 self.lexer.input(data)
910 while True:
911 tok = self.lexer.token()
912 if not tok: break
913 print tok
914
915 # Build the lexer and try it out
916 m = MyLexer()
917 m.build() # Build the lexer
918 m.test("3 + 4") # Test it
919 </pre>
920 </blockquote>
921
922
923 When building a lexer from a class, <em>you should construct the lexer from
924 an instance of the class</em>, not the class object itself. This is because
925 PLY only works properly if the lexer actions are defined as bound methods.
926
927 <p>
928 When using the <tt>module</tt> option to <tt>lex()</tt>, PLY collects symbols
929 from the underlying object using the <tt>dir()</tt> function. There is no
930 direct access to the <tt>__dict__</tt> attribute of the object supplied as a
931 module value.
932
933 <P>
934 Finally, if you want to keep things nicely encapsulated, but don't want to use a
935 full-fledged class definition, lexers can be defined using closures. For example:
936
937 <blockquote>
938 <pre>
939 import ply.lex as lex
940
941 # List of token names. This is always required
942 tokens = (
943 'NUMBER',
944 'PLUS',
945 'MINUS',
946 'TIMES',
947 'DIVIDE',
948 'LPAREN',
949 'RPAREN',
950 )
951
952 def MyLexer():
953 # Regular expression rules for simple tokens
954 t_PLUS = r'\+'
955 t_MINUS = r'-'
956 t_TIMES = r'\*'
957 t_DIVIDE = r'/'
958 t_LPAREN = r'\('
959 t_RPAREN = r'\)'
960
961 # A regular expression rule with some action code
962 def t_NUMBER(t):
963 r'\d+'
964 t.value = int(t.value)
965 return t
966
967 # Define a rule so we can track line numbers
968 def t_newline(t):
969 r'\n+'
970 t.lexer.lineno += len(t.value)
971
972 # A string containing ignored characters (spaces and tabs)
973 t_ignore = ' \t'
974
975 # Error handling rule
976 def t_error(t):
977 print "Illegal character '%s'" % t.value[0]
978 t.lexer.skip(1)
979
980 # Build the lexer from my environment and return it
981 return lex.lex()
982 </pre>
983 </blockquote>
984
985
986 <H3><a name="ply_nn18"></a>4.15 Maintaining state</H3>
987
988
989 In your lexer, you may want to maintain a variety of state
990 information. This might include mode settings, symbol tables, and
991 other details. As an example, suppose that you wanted to keep
992 track of how many NUMBER tokens had been encountered.
993
994 <p>
995 One way to do this is to keep a set of global variables in the module
996 where you created the lexer. For example:
997
998 <blockquote>
999 <pre>
1000 num_count = 0
1001 def t_NUMBER(t):
1002 r'\d+'
1003 global num_count
1004 num_count += 1
1005 t.value = int(t.value)
1006 return t
1007 </pre>
1008 </blockquote>
1009
1010 If you don't like the use of a global variable, another place to store
1011 information is inside the Lexer object created by <tt>lex()</tt>.
1012 To do this, you can use the <tt>lexer</tt> attribute of tokens passed to
1013 the various rules. For example:
1014
1015 <blockquote>
1016 <pre>
1017 def t_NUMBER(t):
1018 r'\d+'
1019 t.lexer.num_count += 1 # Note use of lexer attribute
1020 t.value = int(t.value)
1021 return t
1022
1023 lexer = lex.lex()
1024 lexer.num_count = 0 # Set the initial count
1025 </pre>
1026 </blockquote>
1027
1028 This latter approach has the advantage of being simple and working
1029 correctly in applications where multiple instantiations of a given
1030 lexer exist in the same application. However, this might also feel
1031 like a gross violation of encapsulation to OO purists.
1032 Just to put your mind at some ease, all
1033 internal attributes of the lexer (with the exception of <tt>lineno</tt>) have names that are prefixed
1034 by <tt>lex</tt> (e.g., <tt>lexdata</tt>,<tt>lexpos</tt>, etc.). Thus,
1035 it is perfectly safe to store attributes in the lexer that
1036 don't have names starting with that prefix or a name that conflicts with one of the
1037 predefined methods (e.g., <tt>input()</tt>, <tt>token()</tt>, etc.).
1038
1039 <p>
1040 If you don't like assigning values on the lexer object, you can define your lexer as a class as
1041 shown in the previous section:
1042
1043 <blockquote>
1044 <pre>
1045 class MyLexer:
1046 ...
1047 def t_NUMBER(self,t):
1048 r'\d+'
1049 self.num_count += 1
1050 t.value = int(t.value)
1051 return t
1052
1053 def build(self, **kwargs):
1054 self.lexer = lex.lex(object=self,**kwargs)
1055
1056 def __init__(self):
1057 self.num_count = 0
1058 </pre>
1059 </blockquote>
1060
1061 The class approach may be the easiest to manage if your application is
1062 going to be creating multiple instances of the same lexer and you need
1063 to manage a lot of state.
1064
1065 <p>
1066 State can also be managed through closures. For example, in Python 3:
1067
1068 <blockquote>
1069 <pre>
1070 def MyLexer():
1071 num_count = 0
1072 ...
1073 def t_NUMBER(t):
1074 r'\d+'
1075 nonlocal num_count
1076 num_count += 1
1077 t.value = int(t.value)
1078 return t
1079 ...
1080 </pre>
1081 </blockquote>
1082
1083 <H3><a name="ply_nn19"></a>4.16 Lexer cloning</H3>
1084
1085
1086 <p>
1087 If necessary, a lexer object can be duplicated by invoking its <tt>clone()</tt> method. For example:
1088
1089 <blockquote>
1090 <pre>
1091 lexer = lex.lex()
1092 ...
1093 newlexer = lexer.clone()
1094 </pre>
1095 </blockquote>
1096
1097 When a lexer is cloned, the copy is exactly identical to the original lexer
1098 including any input text and internal state. However, the clone allows a
1099 different set of input text to be supplied which may be processed separately.
1100 This may be useful in situations when you are writing a parser/compiler that
1101 involves recursive or reentrant processing. For instance, if you
1102 needed to scan ahead in the input for some reason, you could create a
1103 clone and use it to look ahead. Or, if you were implementing some kind of preprocessor,
1104 cloned lexers could be used to handle different input files.
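
<p>
As a rough sketch, look-ahead with a clone might be done like this:

<blockquote>
<pre>
lexer.input("3 + 4 * 10")

peek = lexer.clone()          # Copies the input text and the current position
tok1 = peek.token()           # Advances only the clone
tok2 = peek.token()

tok = lexer.token()           # The original lexer is still at the start
</pre>
</blockquote>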
1105
1106 <p>
1107 Creating a clone is different than calling <tt>lex.lex()</tt> in that
1108 PLY doesn't regenerate any of the internal tables or regular expressions. As a result, creating a clone is much cheaper than building a brand new lexer.
1109
1110 <p>
1111 Special considerations need to be made when cloning lexers that also
1112 maintain their own internal state using classes or closures. Namely,
1113 you need to be aware that the newly created lexers will share all of
1114 this state with the original lexer. For example, if you defined a
1115 lexer as a class and did this:
1116
1117 <blockquote>
1118 <pre>
1119 m = MyLexer()
1120 a = lex.lex(object=m) # Create a lexer
1121
1122 b = a.clone() # Clone the lexer
1123 </pre>
1124 </blockquote>
1125
1126 Then both <tt>a</tt> and <tt>b</tt> are going to be bound to the same
1127 object <tt>m</tt> and any changes to <tt>m</tt> will be reflected in both lexers. It's
1128 important to emphasize that <tt>clone()</tt> is only meant to create a new lexer
1129 that reuses the regular expressions and environment of another lexer. If you
1130 need to make a totally new copy of a lexer, then call <tt>lex()</tt> again.
1131
1132 <H3><a name="ply_nn20"></a>4.17 Internal lexer state</H3>
1133
1134
1135 A Lexer object <tt>lexer</tt> has a number of internal attributes that may be useful in certain
1136 situations.
1137
1138 <p>
1139 <tt>lexer.lexpos</tt>
1140 <blockquote>
1141 This attribute is an integer that contains the current position within the input text. If you modify
1142 the value, it will change the result of the next call to <tt>token()</tt>. Within token rule functions, this points
1143 to the first character <em>after</em> the matched text. If the value is modified within a rule, the next returned token will be
1144 matched at the new position.
1145 </blockquote>
1146
1147 <p>
1148 <tt>lexer.lineno</tt>
1149 <blockquote>
1150 The current value of the line number attribute stored in the lexer. PLY only specifies that the attribute
1151 exists---it never sets, updates, or performs any processing with it. If you want to track line numbers,
1152 you will need to add code yourself (see the section on line numbers and positional information).
1153 </blockquote>
1154
1155 <p>
1156 <tt>lexer.lexdata</tt>
1157 <blockquote>
1158 The current input text stored in the lexer. This is the string passed with the <tt>input()</tt> method. It
1159 would probably be a bad idea to modify this unless you really know what you're doing.
1160 </blockquote>
1161
1162 <P>
1163 <tt>lexer.lexmatch</tt>
1164 <blockquote>
1165 This is the raw <tt>Match</tt> object returned by the Python <tt>re.match()</tt> function (used internally by PLY) for the
1166 current token. If you have written a regular expression that contains named groups, you can use this to retrieve those values.
1167 Note: This attribute is only updated when tokens are defined and processed by functions.
1168 </blockquote>
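
<p>
For instance, a hypothetical rule that captures named groups could pull them back out of
<tt>lexmatch</tt> like this (the <tt>ASSIGN</tt> token is only illustrative):

<blockquote>
<pre>
def t_ASSIGN(t):
    r'(?P<name>[a-zA-Z_]\w*)\s*=\s*(?P<expr>[^\n]+)'
    m = t.lexer.lexmatch                  # re Match object for this token
    t.value = (m.group('name'), m.group('expr'))
    return t
</pre>
</blockquote>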
1169
1170 <H3><a name="ply_nn21"></a>4.18 Conditional lexing and start conditions</H3>
1171
1172
1173 In advanced parsing applications, it may be useful to have different
1174 lexing states. For instance, you may want the occurrence of a certain
1175 token or syntactic construct to trigger a different kind of lexing.
1176 PLY supports a feature that allows the underlying lexer to be put into
1177 a series of different states. Each state can have its own tokens,
1178 lexing rules, and so forth. The implementation is based largely on
1179 the "start condition" feature of GNU flex. Details of this can be found
1180 at <a
1181 href="http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html">http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html.</a>.
1182
1183 <p>
1184 To define a new lexing state, it must first be declared. This is done by including a "states" declaration in your
1185 lex file. For example:
1186
1187 <blockquote>
1188 <pre>
1189 states = (
1190 ('foo','exclusive'),
1191 ('bar','inclusive'),
1192 )
1193 </pre>
1194 </blockquote>
1195
1196 This declaration declares two states, <tt>'foo'</tt>
1197 and <tt>'bar'</tt>. States may be of two types; <tt>'exclusive'</tt>
1198 and <tt>'inclusive'</tt>. An exclusive state completely overrides the
1199 default behavior of the lexer. That is, lex will only return tokens
1200 and apply rules defined specifically for that state. An inclusive
1201 state adds additional tokens and rules to the default set of rules.
1202 Thus, lex will return both the tokens defined by default in addition
1203 to those defined for the inclusive state.
1204
1205 <p>
1206 Once a state has been declared, tokens and rules are declared by including the
1207 state name in token/rule declaration. For example:
1208
1209 <blockquote>
1210 <pre>
1211 t_foo_NUMBER = r'\d+' # Token 'NUMBER' in state 'foo'
1212 t_bar_ID = r'[a-zA-Z_][a-zA-Z0-9_]*' # Token 'ID' in state 'bar'
1213
1214 def t_foo_newline(t):
1215 r'\n'
1216 t.lexer.lineno += 1
1217 </pre>
1218 </blockquote>
1219
1220 A token can be declared in multiple states by including multiple state names in the declaration. For example:
1221
1222 <blockquote>
1223 <pre>
1224 t_foo_bar_NUMBER = r'\d+' # Defines token 'NUMBER' in both state 'foo' and 'bar'
1225 </pre>
1226 </blockquote>
1227
1228 Alternatively, a token can be declared in all states by using 'ANY' in the name.
1229
1230 <blockquote>
1231 <pre>
1232 t_ANY_NUMBER = r'\d+' # Defines a token 'NUMBER' in all states
1233 </pre>
1234 </blockquote>
1235
1236 If no state name is supplied, as is normally the case, the token is associated with a special state <tt>'INITIAL'</tt>. For example,
1237 these two declarations are identical:
1238
1239 <blockquote>
1240 <pre>
1241 t_NUMBER = r'\d+'
1242 t_INITIAL_NUMBER = r'\d+'
1243 </pre>
1244 </blockquote>
1245
1246 <p>
1247 States are also associated with the special <tt>t_ignore</tt> and <tt>t_error()</tt> declarations. For example, if a state treats
1248 these differently, you can declare:
1249
1250 <blockquote>
1251 <pre>
1252 t_foo_ignore = " \t\n" # Ignored characters for state 'foo'
1253
1254 def t_bar_error(t): # Special error handler for state 'bar'
1255 pass
1256 </pre>
1257 </blockquote>
1258
1259 By default, lexing operates in the <tt>'INITIAL'</tt> state. This state includes all of the normally defined tokens.
1260 For users who aren't using different states, this fact is completely transparent. If, during lexing or parsing, you want to change
1261 the lexing state, use the <tt>begin()</tt> method. For example:
1262
1263 <blockquote>
1264 <pre>
1265 def t_begin_foo(t):
1266 r'start_foo'
1267 t.lexer.begin('foo') # Starts 'foo' state
1268 </pre>
1269 </blockquote>
1270
1271 To get out of a state, you use <tt>begin()</tt> to switch back to the initial state. For example:
1272
1273 <blockquote>
1274 <pre>
1275 def t_foo_end(t):
1276 r'end_foo'
1277 t.lexer.begin('INITIAL') # Back to the initial state
1278 </pre>
1279 </blockquote>
1280
1281 The management of states can also be done with a stack. For example:
1282
1283 <blockquote>
1284 <pre>
1285 def t_begin_foo(t):
1286 r'start_foo'
1287 t.lexer.push_state('foo') # Starts 'foo' state
1288
1289 def t_foo_end(t):
1290 r'end_foo'
1291 t.lexer.pop_state() # Back to the previous state
1292 </pre>
1293 </blockquote>
1294
1295 <p>
1296 The use of a stack would be useful in situations where there are many ways of entering a new lexing state and you merely want to go back
1297 to the previous state afterwards.
1298
1299 <P>
1300 An example might help clarify. Suppose you were writing a parser and you wanted to grab sections of arbitrary C code enclosed by
1301 curly braces. That is, whenever you encounter a starting brace '{', you want to read all of the enclosed code up to the ending brace '}'
1302 and return it as a string. Doing this with a normal regular expression rule is nearly (if not actually) impossible. This is because braces can
1303 be nested and can be included in comments and strings. Thus, simply matching up to the first matching '}' character isn't good enough. Here is how
1304 you might use lexer states to do this:
1305
1306 <blockquote>
1307 <pre>
1308 # Declare the state
1309 states = (
1310 ('ccode','exclusive'),
1311 )
1312
1313 # Match the first {. Enter ccode state.
1314 def t_ccode(t):
1315 r'\{'
1316 t.lexer.code_start = t.lexer.lexpos # Record the starting position
1317 t.lexer.level = 1 # Initial brace level
1318 t.lexer.begin('ccode') # Enter 'ccode' state
1319
1320 # Rules for the ccode state
1321 def t_ccode_lbrace(t):
1322 r'\{'
1323 t.lexer.level +=1
1324
1325 def t_ccode_rbrace(t):
1326 r'\}'
1327 t.lexer.level -=1
1328
1329 # If closing brace, return the code fragment
1330 if t.lexer.level == 0:
1331 t.value = t.lexer.lexdata[t.lexer.code_start:t.lexer.lexpos+1]
1332 t.type = "CCODE"
1333 t.lexer.lineno += t.value.count('\n')
1334 t.lexer.begin('INITIAL')
1335 return t
1336
1337 # C or C++ comment (ignore)
1338 def t_ccode_comment(t):
1339 r'(/\*(.|\n)*?\*/)|(//.*)'
1340 pass
1341
1342 # C string
1343 def t_ccode_string(t):
1344 r'\"([^\\\n]|(\\.))*?\"'
1345
1346 # C character literal
1347 def t_ccode_char(t):
1348 r'\'([^\\\n]|(\\.))*?\''
1349
1350 # Any sequence of non-whitespace characters (not braces, strings)
1351 def t_ccode_nonspace(t):
1352 r'[^\s\{\}\'\"]+'
1353
1354 # Ignored characters (whitespace)
1355 t_ccode_ignore = " \t\n"
1356
1357 # For bad characters, we just skip over it
1358 def t_ccode_error(t):
1359 t.lexer.skip(1)
1360 </pre>
1361 </blockquote>
1362
1363 In this example, the occurrence of the first '{' causes the lexer to record the starting position and enter a new state <tt>'ccode'</tt>. A collection of rules then match
1364 various parts of the input that follow (comments, strings, etc.). All of these rules merely discard the token (by not returning a value).
1365 However, if the closing right brace is encountered, the rule <tt>t_ccode_rbrace</tt> collects all of the code (using the earlier recorded starting
1366 position), stores it, and returns a token 'CCODE' containing all of that text. When returning the token, the lexing state is restored back to its
1367 initial state.
1368
1369 <H3><a name="ply_nn21"></a>4.19 Miscellaneous Issues</H3>
1370
1371
1372 <P>
1373 <li>The lexer requires input to be supplied as a single input string. Since most machines have more than enough memory, this
1374 rarely presents a performance concern. However, it means that the lexer currently can't be used with streaming data
1375 such as open files or sockets. This limitation is primarily a side-effect of using the <tt>re</tt> module.
1376
1377 <p>
1378 <li>The lexer should work properly with Unicode strings, both for the token/pattern matching
1379 rules and for the input text.
1380
1381 <p>
1382 <li>If you need to supply optional flags to the re.compile() function, use the reflags option to lex. For example:
1383
1384 <blockquote>
1385 <pre>
1386 lex.lex(reflags=re.UNICODE)
1387 </pre>
1388 </blockquote>
1389
1390 <p>
1391 <li>Since the lexer is written entirely in Python, its performance is
1392 largely determined by that of the Python <tt>re</tt> module. Although
1393 the lexer has been written to be as efficient as possible, it's not
1394 blazingly fast when used on very large input files. If
1395 performance is a concern, you might consider upgrading to the most
1396 recent version of Python, creating a hand-written lexer, or offloading
1397 the lexer into a C extension module.
1398
1399 <p>
1400 If you are going to create a hand-written lexer and you plan to use it with <tt>yacc.py</tt>,
1401 it only needs to conform to the following requirements:
1402
1403 <ul>
1404 <li>It must provide a <tt>token()</tt> method that returns the next token or <tt>None</tt> if no more
1405 tokens are available.
1406 <li>The <tt>token()</tt> method must return an object <tt>tok</tt> that has <tt>type</tt> and <tt>value</tt> attributes.
1407 </ul>
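
<p>
A bare-bones sketch of such a hand-written lexer (the class and token names are purely
illustrative) might look like this:

<blockquote>
<pre>
class Tok(object):
    def __init__(self, type, value, lineno=0, lexpos=0):
        self.type = type          # Must match a name in the parser's tokens list
        self.value = value
        self.lineno = lineno      # Optional, but useful for error reporting
        self.lexpos = lexpos

class HandLexer(object):
    def __init__(self, toks):
        self.toks = iter(toks)
    def token(self):
        # Return the next token, or None when there are no more
        return next(self.toks, None)

# A yacc-built parser could then be driven with:
#     parser.parse(lexer=HandLexer([Tok('NUMBER', 3)]))
</pre>
</blockquote>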
1408
1409 <H2><a name="ply_nn22"></a>5. Parsing basics</H2>
1410
1411
1412 <tt>yacc.py</tt> is used to parse language syntax. Before showing an
1413 example, there are a few important bits of background that must be
1414 mentioned. First, <em>syntax</em> is usually specified in terms of a BNF grammar.
1415 For example, if you wanted to parse
1416 simple arithmetic expressions, you might first write an unambiguous
1417 grammar specification like this:
1418
1419 <blockquote>
1420 <pre>
1421 expression : expression + term
1422 | expression - term
1423 | term
1424
1425 term : term * factor
1426 | term / factor
1427 | factor
1428
1429 factor : NUMBER
1430 | ( expression )
1431 </pre>
1432 </blockquote>
1433
1434 In the grammar, symbols such as <tt>NUMBER</tt>, <tt>+</tt>, <tt>-</tt>, <tt>*</tt>, and <tt>/</tt> are known
1435 as <em>terminals</em> and correspond to raw input tokens. Identifiers such as <tt>term</tt> and <tt>factor</tt> refer to
1436 grammar rules comprised of a collection of terminals and other rules. These identifiers are known as <em>non-terminals</em>.
1437 <P>
1438
1439 The semantic behavior of a language is often specified using a
1440 technique known as syntax directed translation. In syntax directed
1441 translation, attributes are attached to each symbol in a given grammar
1442 rule along with an action. Whenever a particular grammar rule is
1443 recognized, the action describes what to do. For example, given the
1444 expression grammar above, you might write the specification for a
1445 simple calculator like this:
1446
1447 <blockquote>
1448 <pre>
1449 Grammar Action
1450 -------------------------------- --------------------------------------------
1451 expression0 : expression1 + term expression0.val = expression1.val + term.val
1452 | expression1 - term expression0.val = expression1.val - term.val
1453 | term expression0.val = term.val
1454
1455 term0 : term1 * factor term0.val = term1.val * factor.val
1456 | term1 / factor term0.val = term1.val / factor.val
1457 | factor term0.val = factor.val
1458
1459 factor : NUMBER factor.val = int(NUMBER.lexval)
1460 | ( expression ) factor.val = expression.val
1461 </pre>
1462 </blockquote>
1463
1464 A good way to think about syntax directed translation is to
1465 view each symbol in the grammar as a kind of object. Associated
1466 with each symbol is a value representing its "state" (for example, the
1467 <tt>val</tt> attribute above). Semantic
1468 actions are then expressed as a collection of functions or methods
1469 that operate on the symbols and associated values.
1470
1471 <p>
1472 Yacc uses a parsing technique known as LR-parsing or shift-reduce parsing. LR parsing is a
1473 bottom up technique that tries to recognize the right-hand-side of various grammar rules.
1474 Whenever a valid right-hand-side is found in the input, the appropriate action code is triggered and the
1475 grammar symbols are replaced by the grammar symbol on the left-hand-side.
1476
1477 <p>
1478 LR parsing is commonly implemented by shifting grammar symbols onto a
1479 stack and looking at the stack and the next input token for patterns that
1480 match one of the grammar rules.
1481 The details of the algorithm can be found in a compiler textbook, but the
1482 following example illustrates the steps that are performed if you
1483 wanted to parse the expression
1484 <tt>3 + 5 * (10 - 20)</tt> using the grammar defined above. In the example,
1485 the special symbol <tt>$</tt> represents the end of input.
1486
1487
1488 <blockquote>
1489 <pre>
1490 Step Symbol Stack Input Tokens Action
1491 ---- --------------------- --------------------- -------------------------------
1492 1 3 + 5 * ( 10 - 20 )$ Shift 3
1493 2 3 + 5 * ( 10 - 20 )$ Reduce factor : NUMBER
1494 3 factor + 5 * ( 10 - 20 )$ Reduce term : factor
1495 4 term + 5 * ( 10 - 20 )$ Reduce expr : term
1496 5 expr + 5 * ( 10 - 20 )$ Shift +
1497 6 expr + 5 * ( 10 - 20 )$ Shift 5
1498 7 expr + 5 * ( 10 - 20 )$ Reduce factor : NUMBER
1499 8 expr + factor * ( 10 - 20 )$ Reduce term : factor
1500 9 expr + term * ( 10 - 20 )$ Shift *
1501 10 expr + term * ( 10 - 20 )$ Shift (
1502 11 expr + term * ( 10 - 20 )$ Shift 10
1503 12 expr + term * ( 10 - 20 )$ Reduce factor : NUMBER
1504 13 expr + term * ( factor - 20 )$ Reduce term : factor
1505 14 expr + term * ( term - 20 )$ Reduce expr : term
1506 15 expr + term * ( expr - 20 )$ Shift -
1507 16 expr + term * ( expr - 20 )$ Shift 20
1508 17 expr + term * ( expr - 20 )$ Reduce factor : NUMBER
1509 18 expr + term * ( expr - factor )$ Reduce term : factor
1510 19 expr + term * ( expr - term )$ Reduce expr : expr - term
1511 20 expr + term * ( expr )$ Shift )
1512 21 expr + term * ( expr ) $ Reduce factor : (expr)
1513 22 expr + term * factor $ Reduce term : term * factor
1514 23 expr + term $ Reduce expr : expr + term
1515 24 expr $ Reduce expr
1516 25 $ Success!
1517 </pre>
1518 </blockquote>
1519
1520 When parsing the expression, an underlying state machine and the
1521 current input token determine what happens next. If the next token
1522 looks like part of a valid grammar rule (based on other items on the
1523 stack), it is generally shifted onto the stack. If the top of the
1524 stack contains a valid right-hand-side of a grammar rule, it is
1525 usually "reduced" and the symbols replaced with the symbol on the
1526 left-hand-side. When this reduction occurs, the appropriate action is
1527 triggered (if defined). If the input token can't be shifted and the
1528 top of stack doesn't match any grammar rules, a syntax error has
1529 occurred and the parser must take some kind of recovery step (or bail
1530 out). A parse is only successful if the parser reaches a state where
1531 the symbol stack is empty and there are no more input tokens.
1532
1533 <p>
1534 It is important to note that the underlying implementation is built
1535 around a large finite-state machine that is encoded in a collection of
1536 tables. The construction of these tables is non-trivial and
1537 beyond the scope of this discussion. However, subtle details of this
1538 process explain why, in the example above, the parser chooses to shift
1539 a token onto the stack in step 9 rather than reducing the
1540 rule <tt>expr : expr + term</tt>.
1541
1542 <H2><a name="ply_nn23"></a>6. Yacc</H2>
1543
1544
1545 The <tt>ply.yacc</tt> module implements the parsing component of PLY.
1546 The name "yacc" stands for "Yet Another Compiler Compiler" and is
1547 borrowed from the Unix tool of the same name.
1548
1549 <H3><a name="ply_nn24"></a>6.1 An example</H3>
1550
1551
1552 Suppose you wanted to make a grammar for simple arithmetic expressions as previously described. Here is
1553 how you would do it with <tt>yacc.py</tt>:
1554
1555 <blockquote>
1556 <pre>
# Yacc example

import ply.yacc as yacc

# Get the token map from the lexer.  This is required.
from calclex import tokens

def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

def p_expression_minus(p):
    'expression : expression MINUS term'
    p[0] = p[1] - p[3]

def p_expression_term(p):
    'expression : term'
    p[0] = p[1]

def p_term_times(p):
    'term : term TIMES factor'
    p[0] = p[1] * p[3]

def p_term_div(p):
    'term : term DIVIDE factor'
    p[0] = p[1] / p[3]

def p_term_factor(p):
    'term : factor'
    p[0] = p[1]

def p_factor_num(p):
    'factor : NUMBER'
    p[0] = p[1]

def p_factor_expr(p):
    'factor : LPAREN expression RPAREN'
    p[0] = p[2]

# Error rule for syntax errors
def p_error(p):
    print "Syntax error in input!"

# Build the parser
parser = yacc.yacc()

while True:
    try:
        s = raw_input('calc > ')
    except EOFError:
        break
    if not s: continue
    result = parser.parse(s)
    print result
1611 </pre>
1612 </blockquote>
1613
1614 In this example, each grammar rule is defined by a Python function
1615 where the docstring to that function contains the appropriate
1616 context-free grammar specification. The statements that make up the
1617 function body implement the semantic actions of the rule. Each function
1618 accepts a single argument <tt>p</tt> that is a sequence containing the
1619 values of each grammar symbol in the corresponding rule. The values
1620 of <tt>p[i]</tt> are mapped to grammar symbols as shown here:
1621
1622 <blockquote>
1623 <pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    #   ^            ^        ^    ^
    #  p[0]         p[1]     p[2] p[3]

    p[0] = p[1] + p[3]
1630 </pre>
1631 </blockquote>
1632
1633 <p>
1634 For tokens, the "value" of the corresponding <tt>p[i]</tt> is the
1635 <em>same</em> as the <tt>p.value</tt> attribute assigned in the lexer
1636 module. For non-terminals, the value is determined by whatever is
1637 placed in <tt>p[0]</tt> when rules are reduced. This value can be
anything at all. However, it is probably most common for the value to be
1639 a simple Python type, a tuple, or an instance. In this example, we
1640 are relying on the fact that the <tt>NUMBER</tt> token stores an
1641 integer value in its value field. All of the other rules simply
1642 perform various types of integer operations and propagate the result.
1643 </p>
1644
1645 <p>
Note: The use of negative indices has a special meaning in
yacc---specifically, <tt>p[-1]</tt> does not have the same value
1648 as <tt>p[3]</tt> in this example. Please see the section on "Embedded
1649 Actions" for further details.
1650 </p>
1651
1652 <p>
1653 The first rule defined in the yacc specification determines the
1654 starting grammar symbol (in this case, a rule for <tt>expression</tt>
1655 appears first). Whenever the starting rule is reduced by the parser
1656 and no more input is available, parsing stops and the final value is
1657 returned (this value will be whatever the top-most rule placed
1658 in <tt>p[0]</tt>). Note: an alternative starting symbol can be
1659 specified using the <tt>start</tt> keyword argument to
1660 <tt>yacc()</tt>.
1661
1662 <p>The <tt>p_error(p)</tt> rule is defined to catch syntax errors.
1663 See the error handling section below for more detail.
1664
1665 <p>
1666 To build the parser, call the <tt>yacc.yacc()</tt> function. This
1667 function looks at the module and attempts to construct all of the LR
1668 parsing tables for the grammar you have specified. The first
1669 time <tt>yacc.yacc()</tt> is invoked, you will get a message such as
1670 this:
1671
1672 <blockquote>
1673 <pre>
1674 $ python calcparse.py
1675 Generating LALR tables
1676 calc >
1677 </pre>
1678 </blockquote>
1679
1680 Since table construction is relatively expensive (especially for large
1681 grammars), the resulting parsing table is written to the current
1682 directory in a file called <tt>parsetab.py</tt>. In addition, a
1683 debugging file called <tt>parser.out</tt> is created. On subsequent
1684 executions, <tt>yacc</tt> will reload the table from
1685 <tt>parsetab.py</tt> unless it has detected a change in the underlying
1686 grammar (in which case the tables and <tt>parsetab.py</tt> file are
1687 regenerated). Note: The names of parser output files can be changed
1688 if necessary. See the <a href="reference.html">PLY Reference</a> for details.
1689
1690 <p>
1691 If any errors are detected in your grammar specification, <tt>yacc.py</tt> will produce
1692 diagnostic messages and possibly raise an exception. Some of the errors that can be detected include:
1693
1694 <ul>
<li>Duplicated function names (if more than one rule function has the same name in the grammar file).
1696 <li>Shift/reduce and reduce/reduce conflicts generated by ambiguous grammars.
1697 <li>Badly specified grammar rules.
1698 <li>Infinite recursion (rules that can never terminate).
1699 <li>Unused rules and tokens
1700 <li>Undefined rules and tokens
1701 </ul>
1702
1703 The next few sections discuss grammar specification in more detail.
1704
1705 <p>
1706 The final part of the example shows how to actually run the parser
1707 created by
<tt>yacc()</tt>. To run the parser, you simply call the <tt>parse()</tt>
method with a string of input text. This will run all
of the grammar rules and return the result of the entire parse. The
result returned is the value assigned to <tt>p[0]</tt> in the starting
grammar rule.
1713
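<p>
For instance, a quick usage sketch, assuming the lexer and grammar above are in place
(the input string is purely illustrative), might look like this:

<blockquote>
<pre>
# A minimal usage sketch: parse a string and print the computed value
result = parser.parse("2 + 3 * (4 - 1)")
print result        # prints 11, the value propagated through p[0]
</pre>
</blockquote>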
1714 <H3><a name="ply_nn25"></a>6.2 Combining Grammar Rule Functions</H3>
1715
1716
1717 When grammar rules are similar, they can be combined into a single function.
1718 For example, consider the two rules in our earlier example:
1719
1720 <blockquote>
1721 <pre>
def p_expression_plus(p):
    'expression : expression PLUS term'
    p[0] = p[1] + p[3]

def p_expression_minus(p):
    'expression : expression MINUS term'
    p[0] = p[1] - p[3]
1729 </pre>
1730 </blockquote>
1731
1732 Instead of writing two functions, you might write a single function like this:
1733
1734 <blockquote>
1735 <pre>
def p_expression(p):
    '''expression : expression PLUS term
                  | expression MINUS term'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
1743 </pre>
1744 </blockquote>
1745
1746 In general, the doc string for any given function can contain multiple grammar rules. So, it would
1747 have also been legal (although possibly confusing) to write this:
1748
1749 <blockquote>
1750 <pre>
def p_binary_operators(p):
    '''expression : expression PLUS term
                  | expression MINUS term
       term       : term TIMES factor
                  | term DIVIDE factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
1764 </pre>
1765 </blockquote>
1766
1767 When combining grammar rules into a single function, it is usually a good idea for all of the rules to have
1768 a similar structure (e.g., the same number of terms). Otherwise, the corresponding action code may be more
1769 complicated than necessary. However, it is possible to handle simple cases using len(). For example:
1770
1771 <blockquote>
1772 <pre>
def p_expressions(p):
    '''expression : expression MINUS expression
                  | MINUS expression'''
    if (len(p) == 4):
        p[0] = p[1] - p[3]
    elif (len(p) == 3):
        p[0] = -p[2]
1780 </pre>
1781 </blockquote>
1782
1783 If parsing performance is a concern, you should resist the urge to put
1784 too much conditional processing into a single grammar rule as shown in
1785 these examples. When you add checks to see which grammar rule is
1786 being handled, you are actually duplicating the work that the parser
1787 has already performed (i.e., the parser already knows exactly what rule it
1788 matched). You can eliminate this overhead by using a
1789 separate <tt>p_rule()</tt> function for each grammar rule.
1790
1791 <H3><a name="ply_nn26"></a>6.3 Character Literals</H3>
1792
1793
1794 If desired, a grammar may contain tokens defined as single character literals. For example:
1795
1796 <blockquote>
1797 <pre>
def p_binary_operators(p):
    '''expression : expression '+' term
                  | expression '-' term
       term       : term '*' factor
                  | term '/' factor'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    elif p[2] == '/':
        p[0] = p[1] / p[3]
1811 </pre>
1812 </blockquote>
1813
1814 A character literal must be enclosed in quotes such as <tt>'+'</tt>. In addition, if literals are used, they must be declared in the
1815 corresponding <tt>lex</tt> file through the use of a special <tt>literals</tt> declaration.
1816
1817 <blockquote>
1818 <pre>
1819 # Literals. Should be placed in module given to lex()
1820 literals = ['+','-','*','/' ]
1821 </pre>
1822 </blockquote>
1823
1824 <b>Character literals are limited to a single character</b>. Thus, it is not legal to specify literals such as <tt>'&lt;='</tt> or <tt>'=='</tt>. For this, use
1825 the normal lexing rules (e.g., define a rule such as <tt>t_EQ = r'=='</tt>).
1826
1827 <H3><a name="ply_nn26"></a>6.4 Empty Productions</H3>
1828
1829
1830 <tt>yacc.py</tt> can handle empty productions by defining a rule like this:
1831
1832 <blockquote>
1833 <pre>
def p_empty(p):
    'empty :'
    pass
1837 </pre>
1838 </blockquote>
1839
1840 Now to use the empty production, simply use 'empty' as a symbol. For example:
1841
1842 <blockquote>
1843 <pre>
def p_optitem(p):
    '''optitem : item
               | empty'''
    ...
1848 </pre>
1849 </blockquote>
1850
1851 Note: You can write empty rules anywhere by simply specifying an empty
1852 right hand side. However, I personally find that writing an "empty"
1853 rule and using "empty" to denote an empty production is easier to read
1854 and more clearly states your intentions.
1855
1856 <H3><a name="ply_nn28"></a>6.5 Changing the starting symbol</H3>
1857
1858
1859 Normally, the first rule found in a yacc specification defines the starting grammar rule (top level rule). To change this, simply
1860 supply a <tt>start</tt> specifier in your file. For example:
1861
1862 <blockquote>
1863 <pre>
start = 'foo'

def p_bar(p):
    'bar : A B'

# This is the starting rule due to the start specifier above
def p_foo(p):
    'foo : bar X'
...
1873 </pre>
1874 </blockquote>
1875
1876 The use of a <tt>start</tt> specifier may be useful during debugging
1877 since you can use it to have yacc build a subset of a larger grammar.
1878 For this purpose, it is also possible to specify a starting symbol as
1879 an argument to <tt>yacc()</tt>. For example:
1880
1881 <blockquote>
1882 <pre>
1883 yacc.yacc(start='foo')
1884 </pre>
1885 </blockquote>
1886
1887 <H3><a name="ply_nn27"></a>6.6 Dealing With Ambiguous Grammars</H3>
1888
1889
1890 The expression grammar given in the earlier example has been written
1891 in a special format to eliminate ambiguity. However, in many
1892 situations, it is extremely difficult or awkward to write grammars in
1893 this format. A much more natural way to express the grammar is in a
1894 more compact form like this:
1895
1896 <blockquote>
1897 <pre>
1898 expression : expression PLUS expression
1899 | expression MINUS expression
1900 | expression TIMES expression
1901 | expression DIVIDE expression
1902 | LPAREN expression RPAREN
1903 | NUMBER
1904 </pre>
1905 </blockquote>
1906
1907 Unfortunately, this grammar specification is ambiguous. For example,
1908 if you are parsing the string "3 * 4 + 5", there is no way to tell how
1909 the operators are supposed to be grouped. For example, does the
1910 expression mean "(3 * 4) + 5" or is it "3 * (4+5)"?
1911
1912 <p>
When an ambiguous grammar is given to <tt>yacc.py</tt>, it will print
messages about "shift/reduce conflicts" or "reduce/reduce conflicts".
A shift/reduce conflict is caused when the parser generator can't
decide whether to reduce a rule or to shift a symbol on the
1917 parsing stack. For example, consider the string "3 * 4 + 5" and the
1918 internal parsing stack:
1919
1920 <blockquote>
1921 <pre>
Step Symbol Stack           Input Tokens            Action
---- --------------------- ----------------------- -------------------------------
1    $                      3 * 4 + 5$              Shift 3
2    $ 3                    * 4 + 5$                Reduce expression : NUMBER
3    $ expr                 * 4 + 5$                Shift *
4    $ expr *               4 + 5$                  Shift 4
5    $ expr * 4             + 5$                    Reduce expression : NUMBER
6    $ expr * expr          + 5$                    SHIFT/REDUCE CONFLICT ????
1930 </pre>
1931 </blockquote>
1932
1933 In this case, when the parser reaches step 6, it has two options. One
1934 is to reduce the rule <tt>expr : expr * expr</tt> on the stack. The
other option is to shift the token <tt>+</tt> onto the stack. Both
options are perfectly legal according to the rules of the
context-free grammar.
1938
1939 <p>
1940 By default, all shift/reduce conflicts are resolved in favor of
1941 shifting. Therefore, in the above example, the parser will always
1942 shift the <tt>+</tt> instead of reducing. Although this strategy
1943 works in many cases (for example, the case of
1944 "if-then" versus "if-then-else"), it is not enough for arithmetic expressions. In fact,
1945 in the above example, the decision to shift <tt>+</tt> is completely
1946 wrong---we should have reduced <tt>expr * expr</tt> since
1947 multiplication has higher mathematical precedence than addition.
1948
1949 <p>To resolve ambiguity, especially in expression
1950 grammars, <tt>yacc.py</tt> allows individual tokens to be assigned a
1951 precedence level and associativity. This is done by adding a variable
1952 <tt>precedence</tt> to the grammar file like this:
1953
1954 <blockquote>
1955 <pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)
1960 </pre>
1961 </blockquote>
1962
1963 This declaration specifies that <tt>PLUS</tt>/<tt>MINUS</tt> have the
1964 same precedence level and are left-associative and that
1965 <tt>TIMES</tt>/<tt>DIVIDE</tt> have the same precedence and are
1966 left-associative. Within the <tt>precedence</tt> declaration, tokens
1967 are ordered from lowest to highest precedence. Thus, this declaration
1968 specifies that <tt>TIMES</tt>/<tt>DIVIDE</tt> have higher precedence
1969 than <tt>PLUS</tt>/<tt>MINUS</tt> (since they appear later in the
1970 precedence specification).
1971
1972 <p>
1973 The precedence specification works by associating a numerical
1974 precedence level value and associativity direction to the listed
1975 tokens. For example, in the above example you get:
1976
1977 <blockquote>
1978 <pre>
1979 PLUS : level = 1, assoc = 'left'
1980 MINUS : level = 1, assoc = 'left'
1981 TIMES : level = 2, assoc = 'left'
1982 DIVIDE : level = 2, assoc = 'left'
1983 </pre>
1984 </blockquote>
1985
1986 These values are then used to attach a numerical precedence value and
1987 associativity direction to each grammar rule. <em>This is always
1988 determined by looking at the precedence of the right-most terminal
1989 symbol.</em> For example:
1990
1991 <blockquote>
1992 <pre>
1993 expression : expression PLUS expression # level = 1, left
1994 | expression MINUS expression # level = 1, left
1995 | expression TIMES expression # level = 2, left
1996 | expression DIVIDE expression # level = 2, left
1997 | LPAREN expression RPAREN # level = None (not specified)
1998 | NUMBER # level = None (not specified)
1999 </pre>
2000 </blockquote>
2001
2002 When shift/reduce conflicts are encountered, the parser generator resolves the conflict by
2003 looking at the precedence rules and associativity specifiers.
2004
2005 <p>
2006 <ol>
2007 <li>If the current token has higher precedence than the rule on the stack, it is shifted.
2008 <li>If the grammar rule on the stack has higher precedence, the rule is reduced.
2009 <li>If the current token and the grammar rule have the same precedence, the
2010 rule is reduced for left associativity, whereas the token is shifted for right associativity.
2011 <li>If nothing is known about the precedence, shift/reduce conflicts are resolved in
2012 favor of shifting (the default).
2013 </ol>
2014
2015 For example, if "expression PLUS expression" has been parsed and the
2016 next token is "TIMES", the action is going to be a shift because
2017 "TIMES" has a higher precedence level than "PLUS". On the other hand,
2018 if "expression TIMES expression" has been parsed and the next token is
2019 "PLUS", the action is going to be reduce because "PLUS" has a lower
2020 precedence than "TIMES."
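
<p>
As an illustrative sketch of how the pieces fit together, the ambiguous
grammar above can be written directly once a <tt>precedence</tt> table is
supplied (the token names are the same ones used throughout this document):

<blockquote>
<pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
)

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''
    if p[2] == '+':
        p[0] = p[1] + p[3]
    elif p[2] == '-':
        p[0] = p[1] - p[3]
    elif p[2] == '*':
        p[0] = p[1] * p[3]
    else:
        p[0] = p[1] / p[3]

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = p[1]
</pre>
</blockquote>

With this declaration in place, an input such as "3 * 4 + 5" groups as
"(3 * 4) + 5" and the conflicts are resolved silently.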
2021
2022 <p>
2023 When shift/reduce conflicts are resolved using the first three
2024 techniques (with the help of precedence rules), <tt>yacc.py</tt> will
2025 report no errors or conflicts in the grammar (although it will print
2026 some information in the <tt>parser.out</tt> debugging file).
2027
2028 <p>
2029 One problem with the precedence specifier technique is that it is
2030 sometimes necessary to change the precedence of an operator in certain
2031 contexts. For example, consider a unary-minus operator in "3 + 4 *
2032 -5". Mathematically, the unary minus is normally given a very high
2033 precedence--being evaluated before the multiply. However, in our
2034 precedence specifier, MINUS has a lower precedence than TIMES. To
2035 deal with this, precedence rules can be given for so-called "fictitious tokens"
2036 like this:
2037
2038 <blockquote>
2039 <pre>
precedence = (
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
2045 </pre>
2046 </blockquote>
2047
2048 Now, in the grammar file, we can write our unary minus rule like this:
2049
2050 <blockquote>
2051 <pre>
def p_expr_uminus(p):
    'expression : MINUS expression %prec UMINUS'
    p[0] = -p[2]
2055 </pre>
2056 </blockquote>
2057
2058 In this case, <tt>%prec UMINUS</tt> overrides the default rule precedence--setting it to that
2059 of UMINUS in the precedence specifier.
2060
2061 <p>
2062 At first, the use of UMINUS in this example may appear very confusing.
UMINUS is not an input token or a grammar rule. Instead, you should
2064 think of it as the name of a special marker in the precedence table. When you use the <tt>%prec</tt> qualifier, you're simply
2065 telling yacc that you want the precedence of the expression to be the same as for this special marker instead of the usual precedence.
2066
2067 <p>
2068 It is also possible to specify non-associativity in the <tt>precedence</tt> table. This would
2069 be used when you <em>don't</em> want operations to chain together. For example, suppose
2070 you wanted to support comparison operators like <tt>&lt;</tt> and <tt>&gt;</tt> but you didn't want to allow
2071 combinations like <tt>a &lt; b &lt; c</tt>. To do this, simply specify a rule like this:
2072
2073 <blockquote>
2074 <pre>
precedence = (
    ('nonassoc', 'LESSTHAN', 'GREATERTHAN'),  # Nonassociative operators
    ('left', 'PLUS', 'MINUS'),
    ('left', 'TIMES', 'DIVIDE'),
    ('right', 'UMINUS'),            # Unary minus operator
)
2081 </pre>
2082 </blockquote>
2083
2084 <p>
2085 If you do this, the occurrence of input text such as <tt> a &lt; b &lt; c</tt> will result in a syntax error. However, simple
2086 expressions such as <tt>a &lt; b</tt> will still be fine.
2087
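<p>
For example, a sketch of a grammar rule that uses these nonassociative
comparison tokens might look like the following (the rule name and the
boolean result are illustrative only):

<blockquote>
<pre>
def p_expression_compare(p):
    '''expression : expression LESSTHAN expression
                  | expression GREATERTHAN expression'''
    # Because LESSTHAN/GREATERTHAN are declared 'nonassoc', input such as
    # "a &lt; b &lt; c" produces a syntax error instead of silently chaining.
    if p[2] == '&lt;':
        p[0] = p[1] &lt; p[3]
    else:
        p[0] = p[1] &gt; p[3]
</pre>
</blockquote>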
2088 <p>
2089 Reduce/reduce conflicts are caused when there are multiple grammar
2090 rules that can be applied to a given set of symbols. This kind of
2091 conflict is almost always bad and is always resolved by picking the
2092 rule that appears first in the grammar file. Reduce/reduce conflicts
2093 are almost always caused when different sets of grammar rules somehow
2094 generate the same set of symbols. For example:
2095
2096 <blockquote>
2097 <pre>
2098 assignment : ID EQUALS NUMBER
2099 | ID EQUALS expression
2100
2101 expression : expression PLUS expression
2102 | expression MINUS expression
2103 | expression TIMES expression
2104 | expression DIVIDE expression
2105 | LPAREN expression RPAREN
2106 | NUMBER
2107 </pre>
2108 </blockquote>
2109
2110 In this case, a reduce/reduce conflict exists between these two rules:
2111
2112 <blockquote>
2113 <pre>
2114 assignment : ID EQUALS NUMBER
2115 expression : NUMBER
2116 </pre>
2117 </blockquote>
2118
2119 For example, if you wrote "a = 5", the parser can't figure out if this
2120 is supposed to be reduced as <tt>assignment : ID EQUALS NUMBER</tt> or
2121 whether it's supposed to reduce the 5 as an expression and then reduce
2122 the rule <tt>assignment : ID EQUALS expression</tt>.
2123
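<p>
One way to eliminate this particular conflict, sketched below, is to drop the
redundant rule so that a number can only be reduced through
<tt>expression</tt> (the action body shown is just an illustration):

<blockquote>
<pre>
def p_assignment(p):
    'assignment : ID EQUALS expression'
    # NUMBER now reaches an assignment only via the expression rules,
    # so the two reductions no longer compete.
    p[0] = ('assign', p[1], p[3])
</pre>
</blockquote>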
2124 <p>
2125 It should be noted that reduce/reduce conflicts are notoriously
difficult to spot simply by looking at the input grammar. When a
2127 reduce/reduce conflict occurs, <tt>yacc()</tt> will try to help by
2128 printing a warning message such as this:
2129
2130 <blockquote>
2131 <pre>
2132 WARNING: 1 reduce/reduce conflict
2133 WARNING: reduce/reduce conflict in state 15 resolved using rule (assignment -> ID EQUALS NUMBER)
2134 WARNING: rejected rule (expression -> NUMBER)
2135 </pre>
2136 </blockquote>
2137
2138 This message identifies the two rules that are in conflict. However,
2139 it may not tell you how the parser arrived at such a state. To try
2140 and figure it out, you'll probably have to look at your grammar and
2141 the contents of the
2142 <tt>parser.out</tt> debugging file with an appropriately high level of
2143 caffeination.
2144
2145 <H3><a name="ply_nn28"></a>6.7 The parser.out file</H3>
2146
2147
2148 Tracking down shift/reduce and reduce/reduce conflicts is one of the finer pleasures of using an LR
2149 parsing algorithm. To assist in debugging, <tt>yacc.py</tt> creates a debugging file called
2150 'parser.out' when it generates the parsing table. The contents of this file look like the following:
2151
2152 <blockquote>
2153 <pre>
2154 Unused terminals:
2155
2156
2157 Grammar
2158
2159 Rule 1 expression -> expression PLUS expression
2160 Rule 2 expression -> expression MINUS expression
2161 Rule 3 expression -> expression TIMES expression
2162 Rule 4 expression -> expression DIVIDE expression
2163 Rule 5 expression -> NUMBER
2164 Rule 6 expression -> LPAREN expression RPAREN
2165
2166 Terminals, with rules where they appear
2167
2168 TIMES : 3
2169 error :
2170 MINUS : 2
2171 RPAREN : 6
2172 LPAREN : 6
2173 DIVIDE : 4
2174 PLUS : 1
2175 NUMBER : 5
2176
2177 Nonterminals, with rules where they appear
2178
2179 expression : 1 1 2 2 3 3 4 4 6 0
2180
2181
2182 Parsing method: LALR
2183
2184
2185 state 0
2186
2187 S' -> . expression
2188 expression -> . expression PLUS expression
2189 expression -> . expression MINUS expression
2190 expression -> . expression TIMES expression
2191 expression -> . expression DIVIDE expression
2192 expression -> . NUMBER
2193 expression -> . LPAREN expression RPAREN
2194
2195 NUMBER shift and go to state 3
2196 LPAREN shift and go to state 2
2197
2198
2199 state 1
2200
2201 S' -> expression .
2202 expression -> expression . PLUS expression
2203 expression -> expression . MINUS expression
2204 expression -> expression . TIMES expression
2205 expression -> expression . DIVIDE expression
2206
2207 PLUS shift and go to state 6
2208 MINUS shift and go to state 5
2209 TIMES shift and go to state 4
2210 DIVIDE shift and go to state 7
2211
2212
2213 state 2
2214
2215 expression -> LPAREN . expression RPAREN
2216 expression -> . expression PLUS expression
2217 expression -> . expression MINUS expression
2218 expression -> . expression TIMES expression
2219 expression -> . expression DIVIDE expression
2220 expression -> . NUMBER
2221 expression -> . LPAREN expression RPAREN
2222
2223 NUMBER shift and go to state 3
2224 LPAREN shift and go to state 2
2225
2226
2227 state 3
2228
2229 expression -> NUMBER .
2230
2231 $ reduce using rule 5
2232 PLUS reduce using rule 5
2233 MINUS reduce using rule 5
2234 TIMES reduce using rule 5
2235 DIVIDE reduce using rule 5
2236 RPAREN reduce using rule 5
2237
2238
2239 state 4
2240
2241 expression -> expression TIMES . expression
2242 expression -> . expression PLUS expression
2243 expression -> . expression MINUS expression
2244 expression -> . expression TIMES expression
2245 expression -> . expression DIVIDE expression
2246 expression -> . NUMBER
2247 expression -> . LPAREN expression RPAREN
2248
2249 NUMBER shift and go to state 3
2250 LPAREN shift and go to state 2
2251
2252
2253 state 5
2254
2255 expression -> expression MINUS . expression
2256 expression -> . expression PLUS expression
2257 expression -> . expression MINUS expression
2258 expression -> . expression TIMES expression
2259 expression -> . expression DIVIDE expression
2260 expression -> . NUMBER
2261 expression -> . LPAREN expression RPAREN
2262
2263 NUMBER shift and go to state 3
2264 LPAREN shift and go to state 2
2265
2266
2267 state 6
2268
2269 expression -> expression PLUS . expression
2270 expression -> . expression PLUS expression
2271 expression -> . expression MINUS expression
2272 expression -> . expression TIMES expression
2273 expression -> . expression DIVIDE expression
2274 expression -> . NUMBER
2275 expression -> . LPAREN expression RPAREN
2276
2277 NUMBER shift and go to state 3
2278 LPAREN shift and go to state 2
2279
2280
2281 state 7
2282
2283 expression -> expression DIVIDE . expression
2284 expression -> . expression PLUS expression
2285 expression -> . expression MINUS expression
2286 expression -> . expression TIMES expression
2287 expression -> . expression DIVIDE expression
2288 expression -> . NUMBER
2289 expression -> . LPAREN expression RPAREN
2290
2291 NUMBER shift and go to state 3
2292 LPAREN shift and go to state 2
2293
2294
2295 state 8
2296
2297 expression -> LPAREN expression . RPAREN
2298 expression -> expression . PLUS expression
2299 expression -> expression . MINUS expression
2300 expression -> expression . TIMES expression
2301 expression -> expression . DIVIDE expression
2302
2303 RPAREN shift and go to state 13
2304 PLUS shift and go to state 6
2305 MINUS shift and go to state 5
2306 TIMES shift and go to state 4
2307 DIVIDE shift and go to state 7
2308
2309
2310 state 9
2311
2312 expression -> expression TIMES expression .
2313 expression -> expression . PLUS expression
2314 expression -> expression . MINUS expression
2315 expression -> expression . TIMES expression
2316 expression -> expression . DIVIDE expression
2317
2318 $ reduce using rule 3
2319 PLUS reduce using rule 3
2320 MINUS reduce using rule 3
2321 TIMES reduce using rule 3
2322 DIVIDE reduce using rule 3
2323 RPAREN reduce using rule 3
2324
2325 ! PLUS [ shift and go to state 6 ]
2326 ! MINUS [ shift and go to state 5 ]
2327 ! TIMES [ shift and go to state 4 ]
2328 ! DIVIDE [ shift and go to state 7 ]
2329
2330 state 10
2331
2332 expression -> expression MINUS expression .
2333 expression -> expression . PLUS expression
2334 expression -> expression . MINUS expression
2335 expression -> expression . TIMES expression
2336 expression -> expression . DIVIDE expression
2337
2338 $ reduce using rule 2
2339 PLUS reduce using rule 2
2340 MINUS reduce using rule 2
2341 RPAREN reduce using rule 2
2342 TIMES shift and go to state 4
2343 DIVIDE shift and go to state 7
2344
2345 ! TIMES [ reduce using rule 2 ]
2346 ! DIVIDE [ reduce using rule 2 ]
2347 ! PLUS [ shift and go to state 6 ]
2348 ! MINUS [ shift and go to state 5 ]
2349
2350 state 11
2351
2352 expression -> expression PLUS expression .
2353 expression -> expression . PLUS expression
2354 expression -> expression . MINUS expression
2355 expression -> expression . TIMES expression
2356 expression -> expression . DIVIDE expression
2357
2358 $ reduce using rule 1
2359 PLUS reduce using rule 1
2360 MINUS reduce using rule 1
2361 RPAREN reduce using rule 1
2362 TIMES shift and go to state 4
2363 DIVIDE shift and go to state 7
2364
2365 ! TIMES [ reduce using rule 1 ]
2366 ! DIVIDE [ reduce using rule 1 ]
2367 ! PLUS [ shift and go to state 6 ]
2368 ! MINUS [ shift and go to state 5 ]
2369
2370 state 12
2371
2372 expression -> expression DIVIDE expression .
2373 expression -> expression . PLUS expression
2374 expression -> expression . MINUS expression
2375 expression -> expression . TIMES expression
2376 expression -> expression . DIVIDE expression
2377
2378 $ reduce using rule 4
2379 PLUS reduce using rule 4
2380 MINUS reduce using rule 4
2381 TIMES reduce using rule 4
2382 DIVIDE reduce using rule 4
2383 RPAREN reduce using rule 4
2384
2385 ! PLUS [ shift and go to state 6 ]
2386 ! MINUS [ shift and go to state 5 ]
2387 ! TIMES [ shift and go to state 4 ]
2388 ! DIVIDE [ shift and go to state 7 ]
2389
2390 state 13
2391
2392 expression -> LPAREN expression RPAREN .
2393
2394 $ reduce using rule 6
2395 PLUS reduce using rule 6
2396 MINUS reduce using rule 6
2397 TIMES reduce using rule 6
2398 DIVIDE reduce using rule 6
2399 RPAREN reduce using rule 6
2400 </pre>
2401 </blockquote>
2402
2403 The different states that appear in this file are a representation of
2404 every possible sequence of valid input tokens allowed by the grammar.
2405 When receiving input tokens, the parser is building up a stack and
2406 looking for matching rules. Each state keeps track of the grammar
2407 rules that might be in the process of being matched at that point. Within each
2408 rule, the "." character indicates the current location of the parse
2409 within that rule. In addition, the actions for each valid input token
2410 are listed. When a shift/reduce or reduce/reduce conflict arises,
2411 rules <em>not</em> selected are prefixed with an !. For example:
2412
2413 <blockquote>
2414 <pre>
2415 ! TIMES [ reduce using rule 2 ]
2416 ! DIVIDE [ reduce using rule 2 ]
2417 ! PLUS [ shift and go to state 6 ]
2418 ! MINUS [ shift and go to state 5 ]
2419 </pre>
2420 </blockquote>
2421
2422 By looking at these rules (and with a little practice), you can usually track down the source
2423 of most parsing conflicts. It should also be stressed that not all shift-reduce conflicts are
2424 bad. However, the only way to be sure that they are resolved correctly is to look at <tt>parser.out</tt>.
2425
2426 <H3><a name="ply_nn29"></a>6.8 Syntax Error Handling</H3>
2427
2428
2429 If you are creating a parser for production use, the handling of
2430 syntax errors is important. As a general rule, you don't want a
2431 parser to simply throw up its hands and stop at the first sign of
2432 trouble. Instead, you want it to report the error, recover if possible, and
2433 continue parsing so that all of the errors in the input get reported
2434 to the user at once. This is the standard behavior found in compilers
2435 for languages such as C, C++, and Java.
2436
2437 In PLY, when a syntax error occurs during parsing, the error is immediately
2438 detected (i.e., the parser does not read any more tokens beyond the
2439 source of the error). However, at this point, the parser enters a
2440 recovery mode that can be used to try and continue further parsing.
2441 As a general rule, error recovery in LR parsers is a delicate
2442 topic that involves ancient rituals and black-magic. The recovery mechanism
provided by <tt>yacc.py</tt> is comparable to that of Unix yacc, so you may want
to consult a book like O'Reilly's "Lex and Yacc" for some of the finer details.
2445
2446 <p>
2447 When a syntax error occurs, <tt>yacc.py</tt> performs the following steps:
2448
2449 <ol>
2450 <li>On the first occurrence of an error, the user-defined <tt>p_error()</tt> function
2451 is called with the offending token as an argument. However, if the syntax error is due to
2452 reaching the end-of-file, <tt>p_error()</tt> is called with an argument of <tt>None</tt>.
2453 Afterwards, the parser enters
2454 an "error-recovery" mode in which it will not make future calls to <tt>p_error()</tt> until it
2455 has successfully shifted at least 3 tokens onto the parsing stack.
2456
2457 <p>
2458 <li>If no recovery action is taken in <tt>p_error()</tt>, the offending lookahead token is replaced
2459 with a special <tt>error</tt> token.
2460
2461 <p>
2462 <li>If the offending lookahead token is already set to <tt>error</tt>, the top item of the parsing stack is
2463 deleted.
2464
2465 <p>
2466 <li>If the entire parsing stack is unwound, the parser enters a restart state and attempts to start
2467 parsing from its initial state.
2468
2469 <p>
2470 <li>If a grammar rule accepts <tt>error</tt> as a token, it will be
2471 shifted onto the parsing stack.
2472
2473 <p>
2474 <li>If the top item of the parsing stack is <tt>error</tt>, lookahead tokens will be discarded until the
2475 parser can successfully shift a new symbol or reduce a rule involving <tt>error</tt>.
2476 </ol>
2477
2478 <H4><a name="ply_nn30"></a>6.8.1 Recovery and resynchronization with error rules</H4>
2479
2480
2481 The most well-behaved approach for handling syntax errors is to write grammar rules that include the <tt>error</tt>
2482 token. For example, suppose your language had a grammar rule for a print statement like this:
2483
2484 <blockquote>
2485 <pre>
def p_statement_print(p):
    'statement : PRINT expr SEMI'
    ...
2489 </pre>
2490 </blockquote>
2491
2492 To account for the possibility of a bad expression, you might write an additional grammar rule like this:
2493
2494 <blockquote>
2495 <pre>
def p_statement_print_error(p):
    'statement : PRINT error SEMI'
    print "Syntax error in print statement. Bad expression"
2499
2500 </pre>
2501 </blockquote>
2502
2503 In this case, the <tt>error</tt> token will match any sequence of
2504 tokens that might appear up to the first semicolon that is
2505 encountered. Once the semicolon is reached, the rule will be
2506 invoked and the <tt>error</tt> token will go away.
2507
2508 <p>
2509 This type of recovery is sometimes known as parser resynchronization.
2510 The <tt>error</tt> token acts as a wildcard for any bad input text and
2511 the token immediately following <tt>error</tt> acts as a
2512 synchronization token.
2513
2514 <p>
2515 It is important to note that the <tt>error</tt> token usually does not appear as the last token
2516 on the right in an error rule. For example:
2517
2518 <blockquote>
2519 <pre>
def p_statement_print_error(p):
    'statement : PRINT error'
    print "Syntax error in print statement. Bad expression"
2523 </pre>
2524 </blockquote>
2525
2526 This is because the first bad token encountered will cause the rule to
2527 be reduced--which may make it difficult to recover if more bad tokens
2528 immediately follow.
2529
2530 <H4><a name="ply_nn31"></a>6.8.2 Panic mode recovery</H4>
2531
2532
2533 An alternative error recovery scheme is to enter a panic mode recovery in which tokens are
2534 discarded to a point where the parser might be able to recover in some sensible manner.
2535
2536 <p>
2537 Panic mode recovery is implemented entirely in the <tt>p_error()</tt> function. For example, this
2538 function starts discarding tokens until it reaches a closing '}'. Then, it restarts the
2539 parser in its initial state.
2540
2541 <blockquote>
2542 <pre>
def p_error(p):
    print "Whoa. You are seriously hosed."
    # Read ahead looking for a closing '}'
    while 1:
        tok = yacc.token()             # Get the next token
        if not tok or tok.type == 'RBRACE': break
    yacc.restart()
2550 </pre>
2551 </blockquote>
2552
2553 <p>
The following function simply discards the bad token and tells the parser that the error was OK:
2555
2556 <blockquote>
2557 <pre>
def p_error(p):
    print "Syntax error at token", p.type
    # Just discard the token and tell the parser it's okay.
    yacc.errok()
2562 </pre>
2563 </blockquote>
2564
2565 <P>
2566 Within the <tt>p_error()</tt> function, three functions are available to control the behavior
2567 of the parser:
2568 <p>
2569 <ul>
2570 <li><tt>yacc.errok()</tt>. This resets the parser state so it doesn't think it's in error-recovery
2571 mode. This will prevent an <tt>error</tt> token from being generated and will reset the internal
2572 error counters so that the next syntax error will call <tt>p_error()</tt> again.
2573
2574 <p>
2575 <li><tt>yacc.token()</tt>. This returns the next token on the input stream.
2576
2577 <p>
2578 <li><tt>yacc.restart()</tt>. This discards the entire parsing stack and resets the parser
2579 to its initial state.
2580 </ul>
2581
2582 Note: these functions are only available when invoking <tt>p_error()</tt> and are not available
2583 at any other time.
2584
2585 <p>
2586 To supply the next lookahead token to the parser, <tt>p_error()</tt> can return a token. This might be
2587 useful if trying to synchronize on special characters. For example:
2588
2589 <blockquote>
2590 <pre>
def p_error(p):
    # Read ahead looking for a terminating ";"
    while 1:
        tok = yacc.token()             # Get the next token
        if not tok or tok.type == 'SEMI': break
    yacc.errok()

    # Return SEMI to the parser as the next lookahead token
    return tok
2600 </pre>
2601 </blockquote>
2602
2603 <H4><a name="ply_nn35"></a>6.8.3 Signaling an error from a production</H4>
2604
2605
2606 If necessary, a production rule can manually force the parser to enter error recovery. This
2607 is done by raising the <tt>SyntaxError</tt> exception like this:
2608
2609 <blockquote>
2610 <pre>
def p_production(p):
    'production : some production ...'
    raise SyntaxError
2614 </pre>
2615 </blockquote>
2616
2617 The effect of raising <tt>SyntaxError</tt> is the same as if the last symbol shifted onto the
2618 parsing stack was actually a syntax error. Thus, when you do this, the last symbol shifted is popped off
2619 of the parsing stack and the current lookahead token is set to an <tt>error</tt> token. The parser
2620 then enters error-recovery mode where it tries to reduce rules that can accept <tt>error</tt> tokens.
2621 The steps that follow from this point are exactly the same as if a syntax error were detected and
2622 <tt>p_error()</tt> were called.
2623
2624 <P>
2625 One important aspect of manually setting an error is that the <tt>p_error()</tt> function will <b>NOT</b> be
2626 called in this case. If you need to issue an error message, make sure you do it in the production that
2627 raises <tt>SyntaxError</tt>.
2628
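<p>
For instance, a sketch of a production that performs its own check, reports the
problem, and then forces error recovery might look like this (the rule, the
<tt>reserved_names</tt> set, and the message are all hypothetical):

<blockquote>
<pre>
reserved_names = set(['if', 'while', 'for'])    # hypothetical reserved words

def p_declaration(p):
    'declaration : TYPE ID SEMI'
    if p[2] in reserved_names:
        # p_error() will NOT be called, so report the problem here
        print "Illegal declaration name '%s'" % p[2]
        raise SyntaxError
    p[0] = ('decl', p[1], p[2])
</pre>
</blockquote>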
2629 <P>
2630 Note: This feature of PLY is meant to mimic the behavior of the YYERROR macro in yacc.
2631
2632
2633 <H4><a name="ply_nn32"></a>6.8.4 General comments on error handling</H4>
2634
2635
2636 For normal types of languages, error recovery with error rules and resynchronization characters is probably the most reliable
2637 technique. This is because you can instrument the grammar to catch errors at selected places where it is relatively easy
2638 to recover and continue parsing. Panic mode recovery is really only useful in certain specialized applications where you might want
2639 to discard huge portions of the input text to find a valid restart point.
2640
2641 <H3><a name="ply_nn33"></a>6.9 Line Number and Position Tracking</H3>
2642
2643
2644 Position tracking is often a tricky problem when writing compilers.
2645 By default, PLY tracks the line number and position of all tokens.
2646 This information is available using the following functions:
2647
2648 <ul>
2649 <li><tt>p.lineno(num)</tt>. Return the line number for symbol <em>num</em>
2650 <li><tt>p.lexpos(num)</tt>. Return the lexing position for symbol <em>num</em>
2651 </ul>
2652
2653 For example:
2654
2655 <blockquote>
2656 <pre>
def p_expression(p):
    'expression : expression PLUS expression'
    line = p.lineno(2)        # line number of the PLUS token
    index = p.lexpos(2)       # Position of the PLUS token
2661 </pre>
2662 </blockquote>
2663
2664 As an optional feature, <tt>yacc.py</tt> can automatically track line
2665 numbers and positions for all of the grammar symbols as well.
2666 However, this extra tracking requires extra processing and can
2667 significantly slow down parsing. Therefore, it must be enabled by
2668 passing the
2669 <tt>tracking=True</tt> option to <tt>yacc.parse()</tt>. For example:
2670
2671 <blockquote>
2672 <pre>
2673 yacc.parse(data,tracking=True)
2674 </pre>
2675 </blockquote>
2676
2677 Once enabled, the <tt>lineno()</tt> and <tt>lexpos()</tt> methods work
2678 for all grammar symbols. In addition, two additional methods can be
2679 used:
2680
2681 <ul>
2682 <li><tt>p.linespan(num)</tt>. Return a tuple (startline,endline) with the starting and ending line number for symbol <em>num</em>.
2683 <li><tt>p.lexspan(num)</tt>. Return a tuple (start,end) with the starting and ending positions for symbol <em>num</em>.
2684 </ul>
2685
2686 For example:
2687
2688 <blockquote>
2689 <pre>
def p_expression(p):
    'expression : expression PLUS expression'
    p.lineno(1)        # Line number of the left expression
    p.lineno(2)        # line number of the PLUS operator
    p.lineno(3)        # line number of the right expression
    ...
    start,end = p.linespan(3)    # Start,end lines of the right expression
    starti,endi = p.lexspan(3)   # Start,end positions of right expression
2698
2699 </pre>
2700 </blockquote>
2701
2702 Note: The <tt>lexspan()</tt> function only returns the range of values up to the start of the last grammar symbol.
2703
2704 <p>
2705 Although it may be convenient for PLY to track position information on
2706 all grammar symbols, this is often unnecessary. For example, if you
2707 are merely using line number information in an error message, you can
2708 often just key off of a specific token in the grammar rule. For
2709 example:
2710
2711 <blockquote>
2712 <pre>
def p_bad_func(p):
    'funccall : fname LPAREN error RPAREN'
    # Line number reported from LPAREN token
    print "Bad function call at line", p.lineno(2)
2717 </pre>
2718 </blockquote>
2719
2720 <p>
2721 Similarly, you may get better parsing performance if you only
2722 selectively propagate line number information where it's needed using
2723 the <tt>p.set_lineno()</tt> method. For example:
2724
2725 <blockquote>
2726 <pre>
def p_fname(p):
    'fname : ID'
    p[0] = p[1]
    p.set_lineno(0,p.lineno(1))
2731 </pre>
2732 </blockquote>
2733
2734 PLY doesn't retain line number information from rules that have already been
2735 parsed. If you are building an abstract syntax tree and need to have line numbers,
2736 you should make sure that the line numbers appear in the tree itself.
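
<p>
As a sketch of this idea, the line number can simply be copied into the node
when the rule is reduced (the <tt>Node</tt> class is the generic one shown in
the next section; the <tt>lineno</tt> attribute is just an illustration):

<blockquote>
<pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression'''
    p[0] = Node("binop", [p[1], p[3]], p[2])
    p[0].lineno = p.lineno(2)      # remember the operator's line in the tree
</pre>
</blockquote>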
2737
2738 <H3><a name="ply_nn34"></a>6.10 AST Construction</H3>
2739
2740
2741 <tt>yacc.py</tt> provides no special functions for constructing an
2742 abstract syntax tree. However, such construction is easy enough to do
2743 on your own.
2744
2745 <p>A minimal way to construct a tree is to simply create and
2746 propagate a tuple or list in each grammar rule function. There
2747 are many possible ways to do this, but one example would be something
2748 like this:
2749
2750 <blockquote>
2751 <pre>
def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = ('binary-expression',p[2],p[1],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = ('group-expression',p[2])

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = ('number-expression',p[1])
2767 </pre>
2768 </blockquote>
2769
2770 <p>
Another approach is to create a set of data structures for different
2772 kinds of abstract syntax tree nodes and assign nodes to <tt>p[0]</tt>
2773 in each rule. For example:
2774
2775 <blockquote>
2776 <pre>
class Expr: pass

class BinOp(Expr):
    def __init__(self,left,op,right):
        self.type = "binop"
        self.left = left
        self.right = right
        self.op = op

class Number(Expr):
    def __init__(self,value):
        self.type = "number"
        self.value = value

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = BinOp(p[1],p[2],p[3])

def p_expression_group(p):
    'expression : LPAREN expression RPAREN'
    p[0] = p[2]

def p_expression_number(p):
    'expression : NUMBER'
    p[0] = Number(p[1])
2806 </pre>
2807 </blockquote>
2808
2809 The advantage to this approach is that it may make it easier to attach more complicated
2810 semantics, type checking, code generation, and other features to the node classes.
2811
2812 <p>
2813 To simplify tree traversal, it may make sense to pick a very generic
2814 tree structure for your parse tree nodes. For example:
2815
2816 <blockquote>
2817 <pre>
class Node:
    def __init__(self,type,children=None,leaf=None):
        self.type = type
        if children:
            self.children = children
        else:
            self.children = [ ]
        self.leaf = leaf

def p_expression_binop(p):
    '''expression : expression PLUS expression
                  | expression MINUS expression
                  | expression TIMES expression
                  | expression DIVIDE expression'''

    p[0] = Node("binop", [p[1],p[3]], p[2])
2834 </pre>
2835 </blockquote>
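
<p>
One benefit of a uniform node type like this is that the whole tree can be
processed with a single recursive function. A minimal traversal sketch
(the function name and output format are illustrative):

<blockquote>
<pre>
def print_tree(node, indent=0):
    # Leaves produced by rules that return plain values (e.g. numbers)
    if not isinstance(node, Node):
        print " "*indent + repr(node)
        return
    print " "*indent + node.type
    for child in node.children:
        print_tree(child, indent+2)
</pre>
</blockquote>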
2836
2837 <H3><a name="ply_nn35"></a>6.11 Embedded Actions</H3>
2838
2839
2840 The parsing technique used by yacc only allows actions to be executed at the end of a rule. For example,
2841 suppose you have a rule like this:
2842
2843 <blockquote>
2844 <pre>
def p_foo(p):
    "foo : A B C D"
    print "Parsed a foo", p[1],p[2],p[3],p[4]
2848 </pre>
2849 </blockquote>
2850
2851 <p>
2852 In this case, the supplied action code only executes after all of the
2853 symbols <tt>A</tt>, <tt>B</tt>, <tt>C</tt>, and <tt>D</tt> have been
2854 parsed. Sometimes, however, it is useful to execute small code
2855 fragments during intermediate stages of parsing. For example, suppose
2856 you wanted to perform some action immediately after <tt>A</tt> has
2857 been parsed. To do this, write an empty rule like this:
2858
2859 <blockquote>
2860 <pre>
def p_foo(p):
    "foo : A seen_A B C D"
    print "Parsed a foo", p[1],p[3],p[4],p[5]
    print "seen_A returned", p[2]

def p_seen_A(p):
    "seen_A :"
    print "Saw an A = ", p[-1]   # Access grammar symbol to left
    p[0] = some_value            # Assign value to seen_A
2870
2871 </pre>
2872 </blockquote>
2873
2874 <p>
2875 In this example, the empty <tt>seen_A</tt> rule executes immediately
2876 after <tt>A</tt> is shifted onto the parsing stack. Within this
2877 rule, <tt>p[-1]</tt> refers to the symbol on the stack that appears
2878 immediately to the left of the <tt>seen_A</tt> symbol. In this case,
2879 it would be the value of <tt>A</tt> in the <tt>foo</tt> rule
2880 immediately above. Like other rules, a value can be returned from an
embedded action by simply assigning it to <tt>p[0]</tt>.
2882
2883 <p>
2884 The use of embedded actions can sometimes introduce extra shift/reduce conflicts. For example,
2885 this grammar has no conflicts:
2886
2887 <blockquote>
2888 <pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B C X"
2898 </pre>
2899 </blockquote>
2900
2901 However, if you insert an embedded action into one of the rules like this,
2902
2903 <blockquote>
2904 <pre>
def p_foo(p):
    """foo : abcd
           | abcx"""

def p_abcd(p):
    "abcd : A B C D"

def p_abcx(p):
    "abcx : A B seen_AB C X"

def p_seen_AB(p):
    "seen_AB :"
2917 </pre>
2918 </blockquote>
2919
2920 an extra shift-reduce conflict will be introduced. This conflict is
2921 caused by the fact that the same symbol <tt>C</tt> appears next in
2922 both the <tt>abcd</tt> and <tt>abcx</tt> rules. The parser can either
2923 shift the symbol (<tt>abcd</tt> rule) or reduce the empty
2924 rule <tt>seen_AB</tt> (<tt>abcx</tt> rule).
2925
2926 <p>
2927 A common use of embedded rules is to control other aspects of parsing
2928 such as scoping of local variables. For example, if you were parsing C code, you might
2929 write code like this:
2930
2931 <blockquote>
2932 <pre>
def p_statements_block(p):
    "statements : LBRACE new_scope statements RBRACE"
    # Action code
    ...
    pop_scope()        # Return to previous scope

def p_new_scope(p):
    "new_scope :"
    # Create a new scope for local variables
    s = new_scope()
    push_scope(s)
    ...
2945 </pre>
2946 </blockquote>
2947
2948 In this case, the embedded action <tt>new_scope</tt> executes
2949 immediately after a <tt>LBRACE</tt> (<tt>{</tt>) symbol is parsed.
2950 This might adjust internal symbol tables and other aspects of the
parser. Upon completion of the <tt>p_statements_block</tt> rule, code
2952 might undo the operations performed in the embedded action
2953 (e.g., <tt>pop_scope()</tt>).
2954
2955 <H3><a name="ply_nn36"></a>6.12 Miscellaneous Yacc Notes</H3>
2956
2957
2958 <ul>
2959 <li>The default parsing method is LALR. To use SLR instead, run yacc() as follows:
2960
2961 <blockquote>
2962 <pre>
2963 yacc.yacc(method="SLR")
2964 </pre>
2965 </blockquote>
2966 Note: LALR table generation takes approximately twice as long as SLR table generation. There is no
2967 difference in actual parsing performance---the same code is used in both cases. LALR is preferred when working
2968 with more complicated grammars since it is more powerful.
2969
2970 <p>
2971
2972 <li>By default, <tt>yacc.py</tt> relies on <tt>lex.py</tt> for tokenizing. However, an alternative tokenizer
2973 can be supplied as follows:
2974
2975 <blockquote>
2976 <pre>
2977 yacc.parse(lexer=x)
2978 </pre>
2979 </blockquote>
In this case, <tt>x</tt> must be a Lexer object that minimally has an <tt>x.token()</tt> method for retrieving the next
2981 token. If an input string is given to <tt>yacc.parse()</tt>, the lexer must also have an <tt>x.input()</tt> method.
2982
2983 <p>
<li>By default, yacc generates tables in debugging mode (which produces the parser.out file and other output).
2985 To disable this, use
2986
2987 <blockquote>
2988 <pre>
2989 yacc.yacc(debug=0)
2990 </pre>
2991 </blockquote>
2992
2993 <p>
2994 <li>To change the name of the <tt>parsetab.py</tt> file, use:
2995
2996 <blockquote>
2997 <pre>
2998 yacc.yacc(tabmodule="foo")
2999 </pre>
3000 </blockquote>
3001
3002 <p>
3003 <li>To change the directory in which the <tt>parsetab.py</tt> file (and other output files) are written, use:
3004 <blockquote>
3005 <pre>
3006 yacc.yacc(tabmodule="foo",outputdir="somedirectory")
3007 </pre>
3008 </blockquote>
3009
3010 <p>
3011 <li>To prevent yacc from generating any kind of parser table file, use:
3012 <blockquote>
3013 <pre>
3014 yacc.yacc(write_tables=0)
3015 </pre>
3016 </blockquote>
3017
3018 Note: If you disable table generation, yacc() will regenerate the parsing tables
each time it runs (which may take a while depending on how large your grammar is).
3020
3021 <P>
3022 <li>To print copious amounts of debugging during parsing, use:
3023
3024 <blockquote>
3025 <pre>
3026 yacc.parse(debug=1)
3027 </pre>
3028 </blockquote>
3029
3030 <p>
3031 <li>The <tt>yacc.yacc()</tt> function really returns a parser object. If you want to support multiple
3032 parsers in the same application, do this:
3033
3034 <blockquote>
3035 <pre>
3036 p = yacc.yacc()
3037 ...
3038 p.parse()
3039 </pre>
3040 </blockquote>
3041
3042 Note: The function <tt>yacc.parse()</tt> is bound to the last parser that was generated.
3043
3044 <p>
3045 <li>Since the generation of the LALR tables is relatively expensive, previously generated tables are
3046 cached and reused if possible. The decision to regenerate the tables is determined by taking an MD5
3047 checksum of all grammar rules and precedence rules. Only in the event of a mismatch are the tables regenerated.
3048
3049 <p>
It should be noted that table generation is reasonably efficient, even for grammars that involve around 100 rules
3051 and several hundred states. For more complex languages such as C, table generation may take 30-60 seconds on a slow
3052 machine. Please be patient.
3053
3054 <p>
3055 <li>Since LR parsing is driven by tables, the performance of the parser is largely independent of the
3056 size of the grammar. The biggest bottlenecks will be the lexer and the complexity of the code in your grammar rules.
3057 </ul>
3058
3059 <H2><a name="ply_nn37"></a>7. Multiple Parsers and Lexers</H2>
3060
3061
3062 In advanced parsing applications, you may want to have multiple
3063 parsers and lexers.
3064
3065 <p>
As a general rule, this isn't a problem. However, to make it work,
3067 you need to carefully make sure everything gets hooked up correctly.
3068 First, make sure you save the objects returned by <tt>lex()</tt> and
3069 <tt>yacc()</tt>. For example:
3070
3071 <blockquote>
3072 <pre>
3073 lexer = lex.lex() # Return lexer object
3074 parser = yacc.yacc() # Return parser object
3075 </pre>
3076 </blockquote>
3077
3078 Next, when parsing, make sure you give the <tt>parse()</tt> function a reference to the lexer it
3079 should be using. For example:
3080
3081 <blockquote>
3082 <pre>
3083 parser.parse(text,lexer=lexer)
3084 </pre>
3085 </blockquote>
3086
3087 If you forget to do this, the parser will use the last lexer
3088 created--which is not always what you want.
3089
3090 <p>
3091 Within lexer and parser rule functions, these objects are also
3092 available. In the lexer, the "lexer" attribute of a token refers to
3093 the lexer object that triggered the rule. For example:
3094
3095 <blockquote>
3096 <pre>
def t_NUMBER(t):
    r'\d+'
    ...
    print t.lexer           # Show lexer object
3101 </pre>
3102 </blockquote>
3103
3104 In the parser, the "lexer" and "parser" attributes refer to the lexer
3105 and parser objects respectively.
3106
3107 <blockquote>
3108 <pre>
def p_expr_plus(p):
    'expr : expr PLUS expr'
    ...
    print p.parser          # Show parser object
    print p.lexer           # Show lexer object
3114 </pre>
3115 </blockquote>
3116
3117 If necessary, arbitrary attributes can be attached to the lexer or parser object.
3118 For example, if you wanted to have different parsing modes, you could attach a mode
3119 attribute to the parser object and look at it later.
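
<p>
A small sketch of this idea follows; the <tt>mode</tt> attribute and its values
are purely illustrative:

<blockquote>
<pre>
def p_statement_expr(p):
    'statement : expression SEMI'
    if p.parser.mode == "interactive":
        print p[1]                   # echo results only in interactive mode
    p[0] = p[1]

parser = yacc.yacc()
parser.mode = "interactive"          # attach an arbitrary attribute
</pre>
</blockquote>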
3120
3121 <H2><a name="ply_nn38"></a>8. Using Python's Optimized Mode</H2>
3122
3123
3124 Because PLY uses information from doc-strings, parsing and lexing
3125 information must be gathered while running the Python interpreter in
3126 normal mode (i.e., not with the -O or -OO options). However, if you
3127 specify optimized mode like this:
3128
3129 <blockquote>
3130 <pre>
3131 lex.lex(optimize=1)
3132 yacc.yacc(optimize=1)
3133 </pre>
3134 </blockquote>
3135
3136 then PLY can later be used when Python runs in optimized mode. To make this work,
3137 make sure you first run Python in normal mode. Once the lexing and parsing tables
3138 have been generated the first time, run Python in optimized mode. PLY will use
3139 the tables without the need for doc strings.
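
<p>
For example, assuming the parser lives in a file such as <tt>calcparse.py</tt>
(the file name is illustrative), the workflow might look like this:

<blockquote>
<pre>
$ python calcparse.py        # first run in normal mode generates the tables
$ python -O calcparse.py     # later runs can use optimized mode and reuse them
</pre>
</blockquote>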
3140
3141 <p>
3142 Beware: running PLY in optimized mode disables a lot of error
3143 checking. You should only do this when your project has stabilized
3144 and you don't need to do any debugging. One of the purposes of
3145 optimized mode is to substantially decrease the startup time of
3146 your compiler (by assuming that everything is already properly
3147 specified and works).
3148
3149 <H2><a name="ply_nn44"></a>9. Advanced Debugging</H2>
3150
3151
3152 <p>
3153 Debugging a compiler is typically not an easy task. PLY provides some
advanced diagnostic capabilities through the use of Python's
3155 <tt>logging</tt> module. The next two sections describe this:
3156
3157 <H3><a name="ply_nn45"></a>9.1 Debugging the lex() and yacc() commands</H3>
3158
3159
3160 <p>
3161 Both the <tt>lex()</tt> and <tt>yacc()</tt> commands have a debugging
3162 mode that can be enabled using the <tt>debug</tt> flag. For example:
3163
3164 <blockquote>
3165 <pre>
3166 lex.lex(debug=True)
3167 yacc.yacc(debug=True)
3168 </pre>
3169 </blockquote>
3170
3171 Normally, the output produced by debugging is routed to either
3172 standard error or, in the case of <tt>yacc()</tt>, to a file
3173 <tt>parser.out</tt>. This output can be more carefully controlled
3174 by supplying a logging object. Here is an example that adds
3175 information about where different debugging messages are coming from:
3176
3177 <blockquote>
3178 <pre>
# Set up a logging object
import logging
logging.basicConfig(
    level = logging.DEBUG,
    filename = "parselog.txt",
    filemode = "w",
    format = "%(filename)10s:%(lineno)4d:%(message)s"
)
log = logging.getLogger()

lex.lex(debug=True,debuglog=log)
yacc.yacc(debug=True,debuglog=log)
3191 </pre>
3192 </blockquote>
3193
3194 If you supply a custom logger, the amount of debugging
3195 information produced can be controlled by setting the logging level.
3196 Typically, debugging messages are either issued at the <tt>DEBUG</tt>,
3197 <tt>INFO</tt>, or <tt>WARNING</tt> levels.
3198
3199 <p>
3200 PLY's error messages and warnings are also produced using the logging
3201 interface. This can be controlled by passing a logging object
3202 using the <tt>errorlog</tt> parameter.
3203
3204 <blockquote>
3205 <pre>
3206 lex.lex(errorlog=log)
3207 yacc.yacc(errorlog=log)
3208 </pre>
3209 </blockquote>
3210
3211 If you want to completely silence warnings, you can either pass in a
3212 logging object with an appropriate filter level or use the <tt>NullLogger</tt>
3213 object defined in either <tt>lex</tt> or <tt>yacc</tt>. For example:
3214
3215 <blockquote>
3216 <pre>
3217 yacc.yacc(errorlog=yacc.NullLogger())
3218 </pre>
3219 </blockquote>
3220
3221 <H3><a name="ply_nn46"></a>9.2 Run-time Debugging</H3>
3222
3223
3224 <p>
3225 To enable run-time debugging of a parser, use the <tt>debug</tt> option to parse. This
3226 option can either be an integer (which simply turns debugging on or off) or an instance
3227 of a logger object. For example:
3228
3229 <blockquote>
3230 <pre>
3231 log = logging.getLogger()
3232 parser.parse(input,debug=log)
3233 </pre>
3234 </blockquote>
3235
3236 If a logging object is passed, you can use its filtering level to control how much
3237 output gets generated. The <tt>INFO</tt> level is used to produce information
3238 about rule reductions. The <tt>DEBUG</tt> level will show information about the
3239 parsing stack, token shifts, and other details. The <tt>ERROR</tt> level shows information
3240 related to parsing errors.
3241
3242 <p>
3243 For very complicated problems, you should pass in a logging object that
3244 redirects to a file where you can more easily inspect the output after
3245 execution.
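
<p>
A minimal sketch of that setup (the file name and logging level are illustrative):

<blockquote>
<pre>
import logging
logging.basicConfig(
    level    = logging.INFO,          # INFO: rule reductions only
    filename = "parsedebug.txt",      # inspect this file after the run
    filemode = "w"
)
parser.parse(data, debug=logging.getLogger())
</pre>
</blockquote>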
3246
3247 <H2><a name="ply_nn39"></a>10. Where to go from here?</H2>
3248
3249
3250 The <tt>examples</tt> directory of the PLY distribution contains several simple examples. Please consult a
compilers textbook for the theory and underlying implementation details of LR parsing.
3252
3253 </body>
3254 </html>