Quick Facts
- Role: Solo project
- Timeline: Feb 2026 (ongoing)
- Focus: Lexer → Parser → Evaluator
- Tools: Python, src/ layout, Git
What It Does
Tiny-Lang reads source code, converts it into tokens (lexer), builds an abstract syntax tree (parser), then evaluates the AST to produce a result (interpreter).
Goal: Build a clean, modular interpreter that is easy to extend with new syntax.
Architecture
This project is organized as small modules that match each stage of interpretation:
Lexer
Turns raw characters into tokens (identifiers, numbers, operators, keywords).
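A lexer of this shape can be sketched with a single master regex built from named groups. The token set below (NUMBER, IDENT, OP) is hypothetical and will differ from Tiny-Lang's actual definitions; unknown characters fail early with a clear message, matching the design notes:

```python
import re

# Hypothetical token set -- the real Tiny-Lang grammar may differ.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),  # whitespace is matched but not emitted
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs; raise SyntaxError on unknown characters."""
    pos = 0
    while pos < len(source):
        match = MASTER.match(source, pos)
        if match is None:
            raise SyntaxError(f"Unknown character {source[pos]!r} at position {pos}")
        pos = match.end()
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())
```

For example, `list(tokenize("x = 1 + 2"))` produces five (kind, text) pairs, and `tokenize("@")` raises a SyntaxError.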
Parser
Consumes tokens and builds an AST via recursive descent, favoring readability over clever abstractions.
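This stage can be sketched as a recursive-descent parser over (kind, text) token pairs, with one function per precedence level (`*`/`/` binding tighter than `+`/`-`). Tuple-shaped nodes stand in here for real AST classes; the grammar is a minimal arithmetic subset, not Tiny-Lang's final grammar:

```python
# Hypothetical recursive-descent sketch; one function per precedence level.
def parse_expr(tokens, pos=0):
    """expr := term (("+" | "-") term)*"""
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos][1] in ("+", "-"):
        op = tokens[pos][1]
        right, pos = parse_term(tokens, pos + 1)
        node = ("binop", op, node, right)
    return node, pos

def parse_term(tokens, pos):
    """term := factor (("*" | "/") factor)*"""
    node, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos][1] in ("*", "/"):
        op = tokens[pos][1]
        right, pos = parse_factor(tokens, pos + 1)
        node = ("binop", op, node, right)
    return node, pos

def parse_factor(tokens, pos):
    """factor := NUMBER"""
    kind, text = tokens[pos]
    if kind == "NUMBER":
        return ("num", int(text)), pos + 1
    raise SyntaxError(f"Unexpected token {text!r}")
```

Because `parse_term` is called from inside `parse_expr`'s loop, `1 + 2 * 3` parses with the multiplication nested under the addition, which encodes precedence without any explicit precedence table.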
AST
Node structures representing expressions/statements, used by the evaluator.
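Python dataclasses keep node definitions terse and comparable by value. The two node types below are illustrative only, not Tiny-Lang's actual node set:

```python
from dataclasses import dataclass

# Illustrative AST nodes; the real node set depends on the grammar.
@dataclass
class Number:
    value: int

@dataclass
class BinOp:
    op: str
    left: object   # child expression node
    right: object  # child expression node
```

An expression like `1 + 2` would then be represented as `BinOp("+", Number(1), Number(2))`.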
Evaluator
Walks the AST and executes logic, producing values and handling errors.
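A tree-walking evaluator can be sketched as recursive dispatch on node kind. The tuple node shapes here are hypothetical stand-ins for real AST classes, and unrecognized nodes or operators fail with a descriptive error:

```python
# Sketch of a tree-walking evaluator over ("num", n) / ("binop", op, l, r) nodes.
def evaluate(node):
    kind = node[0]
    if kind == "num":
        return node[1]
    if kind == "binop":
        _, op, left, right = node
        lval, rval = evaluate(left), evaluate(right)  # evaluate children first
        if op == "+":
            return lval + rval
        if op == "-":
            return lval - rval
        if op == "*":
            return lval * rval
        if op == "/":
            return lval / rval
        raise ValueError(f"Unknown operator {op!r}")
    raise ValueError(f"Unknown node kind {kind!r}")
```

Evaluating the tree for `1 + 2 * 3` bottoms out at the `num` leaves and combines results on the way back up, yielding 7.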
Current Features
- Clean Python project setup (src/ layout, installable package)
- Lexer/tokenizer implemented for the initial core syntax
- Token definitions + basic error handling for unknown characters
In Progress
- Parser (tokens → AST)
- Grammar decisions + precedence rules
Planned
- AST node types + recursive descent parsing
- Evaluator / interpreter loop
- Variables + assignment
- Conditionals and blocks
- Functions
- REPL mode
- Test suite
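The planned REPL can be sketched as a loop over input lines. `evaluate_source` below is a placeholder for the future Tiny-Lang pipeline (lexer → parser → evaluator); taking the lines as a parameter and returning the outputs keeps the loop testable without real terminal I/O:

```python
# Hypothetical REPL loop; evaluate_source stands in for the full pipeline.
def repl(evaluate_source, lines):
    """Evaluate each non-blank source line, collecting printable results."""
    outputs = []
    for line in lines:
        if not line.strip():
            continue  # skip blank input
        try:
            outputs.append(str(evaluate_source(line)))
        except Exception as exc:
            outputs.append(f"error: {exc}")  # report, then keep the loop alive
    return outputs
```

For a quick smoke test, Python's built-in `eval` can stand in for the evaluator: `repl(eval, ["1+1", "", "2*3"])` returns `["2", "6"]`.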
How to Run
- Clone the repository.
- Create/activate a virtual environment.
- Install (editable) and run a sample program.
git clone https://github.com/lizziejperez/Tiny-Lang-Interpreter.git
cd Tiny-Lang-Interpreter
python -m venv .venv
# Windows PowerShell:
.venv\Scripts\activate
# macOS/Linux:
source .venv/bin/activate
pip install -e .
# Example run (update once the entrypoint is finalized):
python -m tinylang
Design Notes
- Modularity: each stage is independent and testable.
- Readability: simple parsing decisions over clever abstractions.
- Error handling: fail early with clear messages.
Next Steps
- Implement the Parser: Convert tokens into an AST using recursive descent parsing.
- Define Grammar Rules: Add support for variables and assignment expressions.
- Introduce an Environment Model: Build a symbol table for storing and resolving variable bindings.
- Add Automated Tests: Write unit tests for the lexer and upcoming parser components.
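The environment model in the steps above can be sketched as a chain of scopes, each holding its own bindings plus a link to its parent; lookup walks outward until the name resolves or the chain is exhausted. The class name and method signatures are illustrative:

```python
# Hypothetical environment model: nested scopes via parent links.
class Environment:
    def __init__(self, parent=None):
        self.bindings = {}    # names defined in this scope
        self.parent = parent  # enclosing scope, or None for globals

    def define(self, name, value):
        self.bindings[name] = value

    def lookup(self, name):
        if name in self.bindings:
            return self.bindings[name]
        if self.parent is not None:
            return self.parent.lookup(name)  # walk outward to enclosing scopes
        raise NameError(f"Undefined variable {name!r}")
```

Defining a name in an inner scope shadows the outer binding without mutating it, which is the behavior block-scoped variables and function locals will need.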