Compact Syntax Reference
Human-friendly compact syntax for AINL — Python-like, 66% fewer tokens, compiles to the same IR as standard opcodes.
AINL supports two equivalent syntaxes that compile to the same IR:
- Compact syntax (recommended) — Human-friendly, Python-like
- Opcode syntax — Low-level, single-character ops (`S`, `R`, `J`, `If`)
Compact is recommended for new code. It uses 66% fewer tokens than opcodes, making it cheaper for LLMs to generate and easier for humans to read.
Quick Start
```
# hello.ainl — compact syntax
adder:
  result = core.ADD 2 3
  out result
```

```
ainl validate hello.ainl --strict
ainl run hello.ainl
# Output: {"ok": true, "result": 5}
```
Syntax Reference
Graph Header
```
my_graph:                        # Basic graph
my_cron @cron "*/5 * * * *":     # Cron-scheduled
my_api @api "/webhooks/stripe":  # API endpoint
```
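A header combines with a body the same way a basic graph does. A hypothetical sketch (the schedule and health-check URL are invented for illustration, reusing the `http.GET` adapter shown below):

```
health_check @cron "0 * * * *":
  resp = http.GET "https://api.example.com/health"
  out resp
```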
Input Fields
```
processor:
  in: name email amount  # Each becomes a variable
```
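The declared fields can then be used directly as variables in later steps. A minimal sketch (graph and field names hypothetical; `core.ADD` over two input fields is assumed to behave like the literal form shown in the Quick Start):

```
summer:
  in: a b
  total = core.ADD a b
  out total
```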
Adapter Calls
```
fetcher:
  resp = http.GET "https://api.example.com"  # Result bound to resp
  cache.set "key" resp                       # No result needed (->_)
  data = core.ADD 2 3                        # Core operations
```
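Putting these call forms together, a sketch of a complete graph that fetches a URL, caches the response, and returns it (URL and cache key hypothetical):

```
status_fetcher:
  resp = http.GET "https://api.example.com/status"
  cache.set "last_status" resp
  out resp
```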
Branching
```
router:
  in: level
  if level == "high":
    out "critical"
  if level == "medium":
    out "warning"
  out "info"
```
Supported operators: `==`, `!=`, `>`, `<`, `>=`, or a bare variable (truthy check).
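The bare-variable form branches on truthiness. A minimal sketch (assuming an empty or zero value counts as falsy, which the source does not spell out):

```
gate:
  in: flag
  if flag:
    out "enabled"
  out "disabled"
```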
Declarations
```
monitor:
  config threshold:N
  state last_check:T
  state counter:N
```
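Assuming declared `config` names are readable like `in:` fields (the source does not show this directly), a hypothetical sketch comparing an input against a configured threshold:

```
threshold_monitor:
  config threshold:N
  in: value
  if value > threshold:
    out "alert"
  out "ok"
```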
Error Handling
```
validator:
  in: value
  if value == 0:
    err "value cannot be zero"
  out value
```
Labels and Calls
```
workflow:
  call process_step
  out done
```
How It Works
The compact preprocessor (`ainl_preprocess.py`) transpiles compact syntax into standard opcodes before the compiler sees it. The compiler, runtime, adapters, and emitters are unchanged.
Compact source → Preprocessor → Opcodes → Compiler → IR → Runtime
Example Transpilation
Compact input:
```
classifier:
  in: level
  if level == "high":
    out "critical"
  out "info"
```
Generated opcodes (what the compiler sees):
```
S app core noop
L_entry:
X level ctx.level
Set _cmp_c_cmp_3 (core.eq level "high")
If _cmp_c_cmp_3 ->L_c_then_1 ->L_c_cont_2
L_c_then_1:
Set _out "critical"
J _out
L_c_cont_2:
Set _out "info"
J _out
```
Token Efficiency
Real-world monitoring example (golden/04_alerting_monitor):
| Syntax  | Bytes | Tokens | Lines | vs Opcodes       |
|---------|-------|--------|-------|------------------|
| Opcodes | 2536  | 714    | 56    | baseline         |
| Compact | 880   | 246    | 24    | 66% fewer tokens |
This means:
- LLMs generating AINL use 66% fewer output tokens
- LLMs reading AINL use 66% fewer input tokens
- Source files are 65% smaller
Compatibility
- ✅ Standard `.ainl` opcode files pass through unchanged
- ✅ Both syntaxes can coexist in a project (different files)
- ✅ Works with `--strict` and non-strict modes
- ✅ All CLI commands work: `validate`, `run`, `emit`, `serve`, `compile`
- ✅ All emitters work: LangGraph, Temporal, React, OpenAPI, etc.
When to Use Opcodes
Opcodes are still useful for:
- Performance-critical inner loops (avoid preprocessor overhead)
- Generated code from tooling (emitters already produce opcodes)
- Advanced patterns not yet supported by compact syntax
- Understanding what the compiler actually processes
See Also
- Grammar & Ops Reference — Full opcode specification
- Examples — Compact syntax examples
- AGENTS.md — Quick reference for AI agents
