Tokenize is a Julia package that serves a purpose and API similar to Python's tokenize module, but for Julia: it takes a string or buffer containing Julia source code, performs lexical analysis, and returns a stream of tokens.
Features
- Fast: it currently lexes all of the Julia source files in ~0.25 seconds (580 files, 2 million tokens)
- Round-trippable: from a stream of tokens, the original string is recoverable exactly
- The function `tokenize` is the main entry point for generating tokens
- Each `Token` records where it starts and ends, what string it contains, and what kind of token it is
- Documentation available
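Based on the description above, basic usage might look like the following sketch. It assumes `tokenize` and `untokenize` are exported by the package and that `untokenize` accepts a collection of tokens; check the package documentation for the exact API.

```julia
using Tokenize

src = "x + 1"

# Lex a string into a stream of tokens (the main entry point per the
# feature list above).
toks = collect(tokenize(src))

# Each token records its span, its text, and its kind.
for t in toks
    println(t)
end

# Round-trip property: untokenizing the token stream should recover
# the original string exactly.
@assert untokenize(toks) == src
```

If the assertion passes, the round-trip guarantee holds for this input: no characters (including whitespace) were lost during lexing.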
License
MIT License