PPI::Tokenizer(3)     User Contributed Perl Documentation    PPI::Tokenizer(3)

NAME
       PPI::Tokenizer - The Perl Document Tokenizer

SYNOPSIS
	 # Create a tokenizer for a file, array or string
	 $Tokenizer = PPI::Tokenizer->new( 'filename.pl' );
	 $Tokenizer = PPI::Tokenizer->new( \@lines	 );
	 $Tokenizer = PPI::Tokenizer->new( \$source	 );

	 # Return all the tokens for the document
	 my $tokens = $Tokenizer->all_tokens;

	 # Or we can use it as an iterator
	 while ( my $Token = $Tokenizer->get_token ) {
	       print "Found token '$Token'\n";
	 }

         # If we REALLY need to manually nudge the cursor, you
         # can do that too (the lexer needs this ability to do rollbacks)
	 $is_incremented = $Tokenizer->increment_cursor;
	 $is_decremented = $Tokenizer->decrement_cursor;

DESCRIPTION
       PPI::Tokenizer is the class that provides Tokenizer objects for use in
       breaking strings of Perl source code into Tokens.

       By the time you are reading this, you probably need to know a little
       about the difference between how perl parses Perl "code" and how PPI
       parses Perl "documents".

       "perl" itself (the interpreter) uses a heavily modified lex
       specification to specify its parsing logic, maintains several types of
       state as it goes, and incrementally tokenizes, lexes AND EXECUTES at
       the same time.

       In fact, it is provably impossible to use perl's parsing method
       without simultaneously executing code; a formal mathematical proof
       demonstrating this has been published.

       This is where the truism "Only perl can parse Perl" comes from.

       PPI uses a completely different approach by abandoning the (impossible)
       ability to parse Perl the same way that the interpreter does, and
       instead parsing the source as a document, using a document structure
       independently derived from the Perl documentation and approximating
       the perl interpreter's interpretation as closely as possible.

       It was touch and go for a long time whether we could get it close
       enough, but in the end it turned out that it could be done.

       In this approach, the tokenizer "PPI::Tokenizer" is implemented
       separately from the lexer PPI::Lexer.

       The job of "PPI::Tokenizer" is to take pure source as a string and
       break it up into a stream/set of tokens; it contains most of the
       "black magic" used in PPI. By comparison, the lexer implements a
       relatively straightforward tree structure, and has an implementation
       that is uncomplicated (compared to the insanity in the tokenizer, at
       least).

       The Tokenizer uses an immense amount of heuristics, guessing and cruft,
       supported by a very VERY flexible internal API, but fortunately it was
       possible to largely encapsulate the black magic, so there is not a lot
       that gets exposed to people using the "PPI::Tokenizer" itself.

METHODS
       Despite the incredible complexity, the Tokenizer itself only exposes a
       relatively small number of methods, with most of the complexity
       implemented in private methods.

   new $file | \@lines | \$source
       The main "new" constructor creates a new Tokenizer object. These
       objects have no configuration parameters, and can only be used once, to
       tokenize a single perl source file.

       It takes as argument either the name of a file containing source
       code, a reference to a scalar containing source code, or a reference
       to an ARRAY containing newline-terminated lines of source code.

       Returns a new "PPI::Tokenizer" object on success, or throws a
       PPI::Exception exception on error.
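       Because failures are signalled by a thrown PPI::Exception rather than
       a return value, a constructor call can be wrapped in "eval" if you
       need to recover. A minimal sketch:

         # Catch a construction failure; the thrown exception ends up in $@
         my $Tokenizer = eval { PPI::Tokenizer->new( 'filename.pl' ) };
         die "Failed to create tokenizer: $@" unless $Tokenizer;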

   get_token
       When using the PPI::Tokenizer object as an iterator, the "get_token"
       method is the primary method that is used. It increments the cursor and
       returns the next Token in the output array.

       The actual parsing of the file is done only as-needed, and a line at a
       time. When "get_token" hits the end of the token array, it will cause
       the parser to pull in the next line and parse it, continuing as needed
       until there are more tokens on the output array that get_token can then
       return.

       This means that a number of Tokenizer objects can be created, and they
       won't consume significant CPU until you actually begin to pull tokens
       from them.

       Returns a PPI::Token object on success, 0 if the Tokenizer has reached
       the end of the file, or "undef" on error.
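       The distinction between the 0 and "undef" return values matters when
       you want to tell a clean end-of-file apart from an error. A small
       sketch:

         # Iterate to exhaustion, then check why the loop stopped
         my $Token;
         while ( defined( $Token = $Tokenizer->get_token ) and $Token ) {
             print "Found a ", ref($Token), " token\n";
         }
         die "Tokenizer error" unless defined $Token; # undef means error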

   all_tokens
       When not being used as an iterator, the "all_tokens" method tells the
       Tokenizer to parse the entire file and return all of the tokens in a
       single ARRAY reference.

       It should be noted that "all_tokens" does NOT interfere with the use of
       the Tokenizer object as an iterator (does not modify the token cursor)
       and use of the two different mechanisms can be mixed safely.

       Returns a reference to an ARRAY of PPI::Token objects on success or
       throws an exception on error.
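       For example, a small token census over a document (a sketch):

         # Count how many tokens of each class the document contains
         my $tokens = $Tokenizer->all_tokens;
         my %count;
         $count{ ref $_ }++ foreach @$tokens;
         print "$count{$_} x $_\n" foreach sort keys %count;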

   increment_cursor
       Although exposed as a public method, "increment_cursor" is implemented
       for expert use only, when writing lexers or other components that work
       directly on token streams.

       It manually increments the token cursor forward through the file, in
       effect "skipping" the next token.

       Returns true if the cursor is incremented, 0 if already at the end of
       the file, or "undef" on error.

   decrement_cursor
       Although exposed as a public method, "decrement_cursor" is implemented
       for expert use only, when writing lexers or other components that work
       directly on token streams.

       It manually decrements the token cursor backwards through the file, in
       effect "rolling back" the token stream. And indeed that is what it is
       primarily intended for: a component consuming the token stream can use
       it to implement some sort of "roll back" feature.

       Returns true if the cursor is decremented, 0 if already at the
       beginning of the file, or "undef" on error.
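       As a sketch of the rollback pattern (the class check here is purely
       illustrative):

         # Peek at the next token, and push it back if it is not
         # what we were looking for
         my $Token = $Tokenizer->get_token;
         if ( $Token and not $Token->isa('PPI::Token::Word') ) {
             $Tokenizer->decrement_cursor;
         }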

NOTES
   How the Tokenizer Works
       Understanding the Tokenizer is not for the faint-hearted. It is by far
       the most complex and twisty piece of Perl I've ever written that is
       actually still built properly and isn't a terrible spaghetti-like mess.
       In fact, you probably want to skip this section.

       But if you really want to understand, well then here goes.

   Source Input and Clean Up
       The Tokenizer starts by taking source in a variety of forms, sucking it
       all in and merging it into one big string, and then does its own
       internal line split, using a "universal line separator" which allows
       the Tokenizer to take source from any platform (and even supports a few
       known types of broken newlines caused by mixed mac/pc/*nix editor
       screw-ups).

       The resulting array of lines is used to feed the tokenizer, and is also
       accessed directly by the heredoc-logic to do the line-oriented part of
       here-doc support.
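       Illustratively (this is a sketch, not necessarily PPI's exact code),
       the clean-up amounts to something like:

         # Normalise DOS (\015\012), Mac (\015) and broken \015\015\012
         # newlines to \n, then split while keeping the separators
         $source =~ s/(?:\015{1,2}\012|\015|\012)/\n/g;
         my @lines = split /(?<=\n)/, $source;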

   Doing Things the Old Fashioned Way
       Due to the complexity of perl, and after 2 previously aborted parser
       attempts, in the end the tokenizer was fashioned around a line-buffered
       character-by-character method.

       That is, the Tokenizer pulls and holds a line at a time into a line
       buffer, and then iterates a cursor along it. At each cursor position, a
       method is called in whatever token class we are currently in, which
       will examine the character at the current position, and handle it.

       As the handler methods in the various token classes are called, they
       build up an output token array for the source code.

       Various parts of the Tokenizer use look-ahead, variable-distance
       look-behind (currently at most three significant tokens), or both,
       and various other heuristic guesses.

       I've been told it is officially termed a "backtracking parser with
       infinite lookaheads".

   State Variables
       Aside from the current line and the character cursor, the Tokenizer
       maintains a number of different state variables.

       Current Class
           The Tokenizer maintains the current token class at all times. Much
           of the time this is just going to be the "Whitespace" class, which
           is what the base of a document is. As the tokenizer executes the
           various character handlers, the class changes a lot as it moves
           along. In fact, in some instances, the character handler may not
           handle the character directly itself, but rather change the
           "current class" and then hand off to the character handler for the
           new class.

	   Because of this, and some other things I'll deal with later, the
	   number of times the character handlers are called does not in fact
	   have a direct relationship to the number of actual characters in
	   the document.

       Current Zone
	   Rather than create a class stack to allow for infinitely nested
	   layers of classes, the Tokenizer recognises just a single layer.

	   To put it a different way, in various parts of the file, the
	   Tokenizer will recognise different "base" or "substrate" classes.
	   When a Token such as a comment or a number is finalised by the
	   tokenizer, it "falls back" to the base state.

	   This allows proper tokenization of special areas such as __DATA__
	   and __END__ blocks, which also contain things like comments and
	   POD, without allowing the creation of any significant Tokens inside
	   these areas.

	   For the main part of a document we use PPI::Token::Whitespace for
	   this, with the idea being that code is "floating in a sea of
	   whitespace".

       Current Token
	   The final main state variable is the "current token". This is the
	   Token that is currently being built by the Tokenizer. For certain
	   types, it can be manipulated and morphed and change class quite a
	   bit while being assembled, as the Tokenizer's understanding of the
	   token content changes.

	   When the Tokenizer is confident that it has seen the end of the
	   Token, it will be "finalized", which adds it to the output token
	   array and resets the current class to that of the zone that we are
	   currently in.

	   I should also note at this point that the "current token" variable
	   is optional. The Tokenizer is capable of knowing what class it is
	   currently set to, without actually having accumulated any
	   characters in the Token.
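       A small demonstration of the zone behaviour (a sketch; check the
       exact token classes produced against your PPI version):

         # Content after __END__ is tokenized in a different zone, so
         # it never produces significant code tokens
         my $code = "print 1;\n__END__\nthis is not code\n";
         my $t    = PPI::Tokenizer->new( \$code );
         print ref($_), "\n" foreach @{ $t->all_tokens };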

   Making It Faster
       As I'm sure you can imagine, calling several different methods for each
       character and running regexes and other complex heuristics made the
       first fully working version of the tokenizer extremely slow.

       During testing, I created a metric to measure parsing speed called
       LPGC, or "lines per gigacycle". A gigacycle is simply a billion CPU
       cycles on a typical single-core CPU, and so a Tokenizer running at
       "1000 lines per gigacycle" should tokenize around 1200 lines of code
       per second when running on a 1200 MHz processor.
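       That is:

         1200 MHz = 1.2 gigacycles/second
         1000 lines/gigacycle x 1.2 gigacycles/second = 1200 lines/second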

       The first working version of the tokenizer ran at only 350 LPGC, so
       tokenizing a typical large module such as ExtUtils::MakeMaker took
       10-15 seconds. This sluggishness made it impractical for many uses.

       So in the current parser, there are multiple layers of optimisation
       very carefully built into the basic design. This has brought the
       tokenizer up to a more reasonable 1000 LPGC, at the expense of making
       the code quite a bit twistier.

   Making It Faster - Whole Line Classification
       The first step in the optimisation process was to add a new handler to
       enable several of the more basic classes (whitespace, comments) to be
       parsed a line at a time. At the start of each line, a special optional
       handler (only supported by a few classes) is called to check and see
       if the entire line can be parsed in one go.

       This is used mainly to handle things like POD, comments, empty lines,
       and a few other minor special cases.
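       A sketch of the idea (hypothetical, not PPI's actual handler code):

         # Called once at the start of each line; if the entire line is
         # a comment, consume it as a single token and move on
         if ( $line =~ /^\s*#/ ) {
             # emit one PPI::Token::Comment covering the whole line
         }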

   Making It Faster - Inlining
       The second stage of the optimisation involved inlining a small number
       of critical methods that were repeated an extremely high number of
       times. Profiling suggested that there were about 1,000,000 individual
       method calls per gigacycle, and cutting these by two thirds yielded a
       significant speed improvement, on the order of about 50%.

       You may notice that many methods in the "PPI::Tokenizer" code look very
       nested and longhand. This is primarily due to this inlining.

       At around this time, some statistics code that existed in the early
       versions of the parser was also removed, as it was determined that it
       was consuming around 15% of the CPU for the entire parser, while making
       the core more complicated.

       A judgment call was made that with the difficulties likely to be
       encountered with future planned enhancements, and given the relatively
       high cost involved, the statistics features would be removed from the
       Tokenizer.

   Making It Faster - Quote Engine
       Once inlining had reached diminishing returns, it became obvious from
       the profiling results that a huge amount of time was being spent
       stepping a char at a time through long, simple and "syntactically
       boring" code such as comments and strings.

       The existing regex engine was expanded to also encompass quotes and
       other quote-like things, and a special abstract base class was added
       that provided a number of specialised parsing methods that would "scan
       ahead", looking out ahead to find the end of a string, and updating the
       cursor to leave it in a valid position for the next call.
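       The effect is something like the following (illustrative only):

         # Find the end of a single-quoted string in one pass, honouring
         # backslash escapes, instead of stepping char by char
         if ( substr( $source, $pos ) =~ /\A('(?:\\.|[^'\\])*')/s ) {
             $pos += length($1);   # jump the cursor past the string
         }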

       This is also the point at which the number of character handler calls
       began to greatly differ from the number of characters. But it has been
       done in a way that allows the parser to retain the power of the
       original version at the critical points, while skipping through the
       "boring bits" as needed for additional speed.

       The addition of this feature allowed the tokenizer to exceed 1000 LPGC
       for the first time.

   Making It Faster - The "Complete" Mechanism
       As it became evident that great speed increases were available by using
       this "skipping ahead" mechanism, a new handler method was added that
       explicitly handles the parsing of an entire token, where the structure
       of the token is relatively simple. Tokens such as symbols fit this
       case, as once we are passed the initial sigil and word char, we know
       that we can skip ahead and "complete" the rest of the token much more
       easily.
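       A sketch of the idea for symbols (hypothetical, simplified):

         # After seeing the sigil and first word character, grab the
         # rest of the symbol in a single regex step
         if ( substr( $line, $cursor ) =~ /\A([\$\@\%][\w:]+)/ ) {
             # emit one PPI::Token::Symbol for $1 and skip ahead
             $cursor += length($1);
         }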

       A number of these have been added for most or possibly all of the
       common cases, with most of these "complete" handlers implemented using
       regular expressions.

       In fact, so many have been added that at this point, you could arguably
       reclassify the tokenizer as a "hybrid regex, char-by-char heuristic
       tokenizer". More tokens are now consumed in "complete" methods in a
       typical program than are handled by the normal char-by-char methods.

       Many of these "complete" handlers were implemented during the writing
       of the Lexer, and this has allowed the full parser to maintain around
       1000 LPGC despite the increasing weight of the Lexer.

   Making It Faster - Porting To C (In Progress)
       While it would be extraordinarily difficult to port all of the
       Tokenizer to C, work has started on a PPI::XS "accelerator" package
       which acts as a separate and automatically-detected add-on to the main
       PPI package.

       PPI::XS implements faster versions of a variety of functions scattered
       over the entire PPI codebase, from the Tokenizer Core, Quote Engine,
       and various other places, and implements them identically in XS/C.

       In particular, the skip-ahead methods from the Quote Engine would
       appear to be extremely amenable to being done in C, and a number of
       other functions could be cherry-picked one at a time and implemented in
       C.

       Each method is heavily tested to ensure that the functionality is
       identical, and a versioning mechanism is included to ensure that if a
       function gets out of sync, PPI::XS will degrade gracefully and just not
       replace that single method.
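       Since PPI::XS is detected automatically, installing it is all that is
       needed to use it. To check whether it is available, a sketch:

         # PPI picks up PPI::XS automatically when it is installed
         my $have_xs = eval { require PPI::XS; 1 } ? 'yes' : 'no';
         print "PPI::XS available: $have_xs\n";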

TO DO
       - Add an option to reset or seek the token stream...

       - Implement more Tokenizer functions in PPI::XS

SUPPORT
       See the support section in the main module.

AUTHOR
       Adam Kennedy <adamk@cpan.org>

COPYRIGHT
       Copyright 2001 - 2011 Adam Kennedy.

       This program is free software; you can redistribute it and/or modify it
       under the same terms as Perl itself.

       The full text of the license can be found in the LICENSE file included
       with this module.

perl v5.12.5			  2011-02-25		     PPI::Tokenizer(3)