jupyLR.tokenizer
/usr/local/lib/python2.7/dist-packages/jupyLR/tokenizer.py

 
Modules
       
re

 
Classes
       
__builtin__.object
    Scanner
exceptions.Exception(exceptions.BaseException)
    TokenizerException

 
class Scanner(__builtin__.object)
     Methods defined here:
__call__(self, text)
Iteratively scans through the text and yields each token
__init__(self, **tokens)
add(self, **tokens)
Each named keyword is a token type and its value is the
corresponding regular expression. Returns a function that iterates
tokens in the form (type, value) over a string.
 
Special keywords are discard_names and discard_values, which specify
lists (actually any iterable is accepted) containing token names or
values that must be discarded from the scanner output (see the usage
sketch after this class listing).

Data descriptors defined here:
__dict__
dictionary for instance variables (if defined)
__weakref__
list of weak references to the object (if defined)
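
A minimal usage sketch, assuming the keyword arguments given to __init__
are handled exactly as described for add() above. The token names, the
regular expressions and the sample input are illustrative choices, not
part of the module:

from jupyLR.tokenizer import Scanner

# Illustrative token definitions (names and patterns are assumptions).
scan = Scanner(
    number=r'\d+',
    name=r'[A-Za-z_]\w*',
    space=r'\s+',
    discard_names=['space'],   # drop whitespace tokens from the output
)

# __call__ iterates over the tokens found in the text.
for tok in scan('foo 42 bar'):
    print(tok)

Note that while add() documents tokens as (type, value) pairs,
token_line_col() below suggests each token also carries a character
offset; printing the token as-is avoids assuming a particular shape.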

 
class TokenizerException(exceptions.Exception)
    
Method resolution order:
TokenizerException
exceptions.Exception
exceptions.BaseException
__builtin__.object

Data descriptors defined here:
__weakref__
list of weak references to the object (if defined)

Methods inherited from exceptions.Exception:
__init__(...)
x.__init__(...) initializes x; see help(type(x)) for signature

Data and other attributes inherited from exceptions.Exception:
__new__ = <built-in method __new__ of type object>
T.__new__(S, ...) -> a new object with type S, a subtype of T

Methods inherited from exceptions.BaseException:
__delattr__(...)
x.__delattr__('name') <==> del x.name
__getattribute__(...)
x.__getattribute__('name') <==> x.name
__getitem__(...)
x.__getitem__(y) <==> x[y]
__getslice__(...)
x.__getslice__(i, j) <==> x[i:j]
 
Use of negative indices is not supported.
__reduce__(...)
__repr__(...)
x.__repr__() <==> repr(x)
__setattr__(...)
x.__setattr__('name', value) <==> x.name = value
__setstate__(...)
__str__(...)
x.__str__() <==> str(x)
__unicode__(...)

Data descriptors inherited from exceptions.BaseException:
__dict__
args
message
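
The listing does not say when TokenizerException is raised; a plausible
handling pattern, assuming the scanner raises it when the input cannot be
matched by any token pattern (an assumption, not confirmed above), would be:

from jupyLR.tokenizer import Scanner, TokenizerException

scan = Scanner(number=r'\d+', space=r'\s+', discard_names=['space'])

try:
    tokens = list(scan('12 34 ?'))   # '?' matches no pattern here
except TokenizerException as exc:
    # Assumption: unmatched input triggers TokenizerException.
    print('tokenization failed: %s' % exc)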

 
Functions
       
make_scanner(**tokens)
token_line_col(text, tok)
Converts the token's offset into a (line, column) position.
First character is at position (1, 1).
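
The conversion described here amounts to counting newlines before a
character offset. A hedged reimplementation sketch (the helper name
line_col_from_offset and the bare integer offset argument are assumptions;
the real function takes the text and a token object):

def line_col_from_offset(text, offset):
    # Line number: 1 plus the number of newlines before the offset.
    line = text.count('\n', 0, offset) + 1
    # Column: distance from the last newline, so offset 0 maps to (1, 1).
    last_nl = text.rfind('\n', 0, offset)
    return line, offset - last_nl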

 
Data
        __all__ = ['TokenizerException', 'Scanner', 'make_scanner', 'token_line_col']