AINL (AI Native Lang) from sbhooley/ainativelang is a compact, graph-canonical DSL plus runtime that nudges AI agents from “smart conversationalists” (ever-growing prompt loops) toward structured workers. It compiles .ainl (and related .lang) workflow sources into a deterministic intermediate representation (IR) that the runtime executes with strict validation, repeatable control flow, tool calling, and explicit state and memory handling.
That design maps well to agents such as OpenClaw (a primary integration target, with a dedicated openclaw/bridge/ tree and quickstart material) and similar lightweight stacks (ZeroClaw, NemoClaw-style hosts, and anything that can speak MCP or HTTP). This site is the narrative layer; the repo remains the technical source of truth for code, examples, and contracts.
What lives in the repo
The monorepo ships the full toolchain: compiler (`compiler_v2.py`), runtime (`runtime/engine.py`), CLI, MCP server (`scripts/ainl_mcp_server.py` / `ainl-mcp` entrypoint), modules, adapters, and OpenClaw-oriented bridges. The project is open-core (Apache 2.0 for the public code lane) and aimed at AI-agent builders: agents can author `.ainl`, validate and run it via MCP, or embed AINL as the execution layer for multi-step, stateful tasks.
Where to dig deeper:
- Integration story — how AINL sits in agent stacks end to end.
- MCP host integrations — `ainl install-mcp --host openclaw|zeroclaw` and skill layout.
- OpenClaw integration — skill paths, `openclaw.json`, `ainl-run`.
- OpenClaw adapters — `ocl` surface and host verbs.
- Root quickstart (repo): `AI_AGENT_QUICKSTART_OPENCLAW.md`.
Core architecture: memory, context, and session continuity
AINL enforces tiered state discipline instead of stuffing everything into the model context window. The full model is in State discipline; in short:
| Tier | Role |
| --- | --- |
| Frame | Ephemeral variables for one run; discarded when the run ends. |
| Cache | Short-lived key/value with TTL (e.g. R cache get / put); scoped to a runtime instance. |
| Persistent memory | Durable, namespaced JSON records via the memory adapter (session, long_term, workflow, ops, daily_log, …). This is the main lever for continuity. |
| Coordination | Queues and mailboxes for cross-workflow or cross-agent handoff. |
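The four tiers above can be sketched in plain Python. This is a minimal illustration only, not the runtime's actual implementation; `Frame`, `Cache`, `PersistentMemory`, and the mailbox queue are hypothetical names invented for this sketch:

```python
import time
import queue


class Frame(dict):
    """Tier 1: ephemeral variables for one run; simply discarded with the run."""


class Cache:
    """Tier 2: short-lived key/value with TTL, scoped to one runtime instance."""

    def __init__(self):
        self._data = {}

    def put(self, key, value, ttl_s=60.0):
        self._data[key] = (value, time.monotonic() + ttl_s)

    def get(self, key, default=None):
        value, expires = self._data.get(key, (None, 0.0))
        return value if time.monotonic() < expires else default


class PersistentMemory:
    """Tier 3: durable records keyed (namespace, kind, id). A real adapter
    writes JSON to SQLite or another backend; a dict stands in here."""

    def __init__(self):
        self._records = {}

    def put(self, namespace, kind, rec_id, body):
        self._records[(namespace, kind, rec_id)] = body

    def get(self, namespace, kind, rec_id, default=None):
        return self._records.get((namespace, kind, rec_id), default)


# Tier 4: coordination, with a queue standing in for cross-workflow mailboxes.
mailbox = queue.Queue()
```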
Memory is externalized — it is not silently poured into prompts. Workflows call `memory.put`, `get`, `list`, `append`, `delete`, `prune` (via `R memory.*` or helper modules) when they need data. That keeps retrieval surgical: load only what the graph says to load, then let the LLM (if any) choose the next step. The goal is to avoid token explosion and lost state on restart when the graph and adapters are used consistently.
Memory adapter contract (v1 + shipped v1.1 additions)
The behavior of persistent memory is standardized in Memory contract (v1) (the live doc includes v1.1 additive behavior: optional record metadata and bounded list filters). Memory contract v1.1 RFC remains the detailed design record for that additive layer.
Namespaces scope data (e.g. session for per-interaction continuity, long_term for durable facts, workflow for checkpoints, ops / daily_log for operator-style trails).
Typical operations (as seen from graph code and tooling):
- `put` — store or overwrite JSON keyed by namespace, kind, and id (optional TTL and optional v1.1 metadata such as `tags`, `source`, `valid_at`).
- `get`, `append` (log-style), `list` — `list` supports deterministic filters including v1.1 `tags_any`, `tags_all`, created/updated windows, `source`, `valid_at`, plus `limit`/`offset` (see contract doc for the full matrix).
- `delete`, `prune` — eviction and TTL-oriented cleanup (`prune` returns counts in the contract narrative).
Determinism: records are keyed (namespace, kind, id); list ordering is stable. Vector / semantic search is out of scope for core contract semantics so behavior stays reproducible and testable.
Default backend: SQLite behind `AINL_MEMORY_DB` (see contract doc for schema and provenance). The contract is backend-agnostic — implement it for filesystem, hosted DB, or custom stores.
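As a rough sketch of the contract's keyed-record shape, here is a toy SQLite store with `put`/`get`/`list` over `(namespace, kind, id)` and stable list ordering. The table schema and function names are assumptions made for this illustration, not the shipped backend:

```python
import json
import sqlite3


def open_store(path=":memory:"):
    # Hypothetical schema; the real backend's schema lives in the contract doc.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS records (
        namespace TEXT, kind TEXT, id TEXT, body TEXT,
        PRIMARY KEY (namespace, kind, id))""")
    return db


def put(db, namespace, kind, rec_id, body):
    # Same key overwrites: records are keyed (namespace, kind, id).
    db.execute("INSERT OR REPLACE INTO records VALUES (?, ?, ?, ?)",
               (namespace, kind, rec_id, json.dumps(body)))


def get(db, namespace, kind, rec_id):
    row = db.execute(
        "SELECT body FROM records WHERE namespace=? AND kind=? AND id=?",
        (namespace, kind, rec_id)).fetchone()
    return json.loads(row[0]) if row else None


def list_records(db, namespace, kind, limit=50, offset=0):
    # Determinism: stable ordering by record id, then bounded pagination.
    rows = db.execute(
        """SELECT id, body FROM records WHERE namespace=? AND kind=?
           ORDER BY id LIMIT ? OFFSET ?""",
        (namespace, kind, limit, offset)).fetchall()
    return [(rid, json.loads(body)) for rid, body in rows]
```

The point of the sketch is the invariant, not the storage engine: identical inputs always yield the same `list` order, which is what keeps graph runs reproducible.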
Capability hint: hosts can read `memory_profile` (`v1.1-deterministic-metadata`) from the capability manifest to branch on supported memory features (v1.2.2).
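A host-side branch on that hint might look like the following sketch (the manifest dict shape here is an assumption, not the documented format):

```python
# Hypothetical manifest fragment; the real capability manifest is defined in
# the repo docs, not here.
manifest = {"memory_profile": "v1.1-deterministic-metadata"}


def supports_v11_metadata(manifest):
    """Check the advertised profile before relying on v1.1-only list filters."""
    return manifest.get("memory_profile", "").startswith("v1.1")
```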
Convenience modules: the repo’s `modules/common/` tree wraps low-level ops into reusable includes — e.g. `generic_memory` (namespace-parameterized `LWRITE` / `LLIST` with v1.1 list filters), `token_cost_memory` / `ops_memory` (shared monitor writes/lists, v1.2.3), plus optional `access_aware_memory` (`LACCESS_*`, `LACCESS_LIST_SAFE` for graph mode, v1.2.4). Strict mode allowlists `memory.PUT … PRUNE` and maps `R memory …` steps for tooling (v1.2.3). Compile-time merging keeps graphs readable. Browse on GitHub: `modules/common/`.
Release line (memory-related): v1.2.4 — access metadata helpers + runtime label resolution for included subgraphs · v1.2.3 — shared memory include modules for production monitors · v1.2.2 — Memory Contract v1.1 metadata, deterministic filters, capability hints. PyPI ainl / RUNTIME_VERSION: 1.2.4.
Session continuity in practice
- Persist checkpoints or last-run markers in `session` or `workflow` at the end of a run.
- On the next run (or a cron-triggered entrypoint), `get`/`list` reload exactly what the graph needs (for example a `last_email_check` timestamp in a daily-digest flow).
- `demo/session_budget_enforcer.lang` illustrates budget tracking, pruning old session data, and enforcing limits in one narrative arc.
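The checkpoint pattern above can be sketched with a plain JSON file standing in for the memory adapter; the path and function names are illustrative, not AINL verbs:

```python
import datetime
import json
import pathlib
import tempfile

# Hypothetical location; a real flow would go through the memory adapter.
STATE = pathlib.Path(tempfile.gettempdir()) / "ainl_session_state.json"


def save_checkpoint(key, value):
    # Persist a marker at the end of a run.
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    state[key] = value
    STATE.write_text(json.dumps(state))


def load_checkpoint(key, default=None):
    # Reload exactly what the next run needs; nothing else enters context.
    state = json.loads(STATE.read_text()) if STATE.exists() else {}
    return state.get(key, default)


# End of run: record a last-run marker for a daily-digest style flow.
save_checkpoint("last_email_check",
                datetime.datetime.now(datetime.timezone.utc).isoformat())
# Next run (e.g. cron entrypoint): pick up where the previous run stopped.
since = load_checkpoint("last_email_check")
```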
OpenClaw bridge: openclaw/bridge/ can align AINL memory with OpenClaw’s Markdown workspace (for example daily notes under ~/.openclaw/workspace/memory/), so operators and agents see a unified story across graph execution and host tooling. Operator-oriented detail: Unified monitoring guide · token/budget wrapper: Bridge token budget alert · bridge source: openclaw/bridge/README.md.
Context management contrast
- Traditional loop: history → ever-growing prompt → higher cost and drift.
- AINL-shaped loop: explicit `memory.get`/`list` → small, relevant context slices → model or pure runtime advances the graph. Auditable and restart-safe when persistence is wired.
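A toy Python contrast of the two loops (the helper is hypothetical; a real AINL graph would load the slice via `memory.get`/`list` steps rather than Python slicing):

```python
# Simulated conversation history after many turns.
history = [f"turn {i}: ..." for i in range(100)]

# Traditional loop: the entire history rides along on every model call,
# so prompt size (and cost) grows with conversation length.
naive_prompt = "\n".join(history)


def load_slice(records, last_n=3):
    """Stand-in for an explicit, bounded memory read."""
    return records[-last_n:]


# AINL-shaped loop: only the slice the graph asked for enters the prompt.
shaped_prompt = "\n".join(load_slice(history))
assert len(shaped_prompt) < len(naive_prompt)
```

Cost now scales with the slice the graph requests, not with total history length.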
Integrations: OpenClaw, ZeroClaw, and “anything with MCP”
OpenClaw
- `ocl` adapter exposes host capabilities as AINL verbs — see OpenClaw adapters.
- MCP exposes `ainl_validate`, `ainl_run`, and related tools to the host — see AINL MCP on the web and the integration doc above.
- Intelligence / monitors (token-aware startup, consolidation patterns) are described in Intelligence programs.
Onboarding sketch (dev checkout): clone the repo, `pip install -e ".[dev,web]"`, have your agent read `AI_AGENT_QUICKSTART_OPENCLAW.md`, generate `.ainl`, validate and visualize (e.g. `ainl visualize` — Graph introspection), then run via cron, HTTP runner, or MCP. For session handoff between humans and agents, the quickstart suggests leaving notes in `docs/SESSION_NOTES.md` (or equivalent) when wrapping a session.
ZeroClaw and peers
- ZeroClaw integration and the host hub document the MCP-first bootstrap.
- Any agent that can call MCP or HTTP can treat AINL as a tooling layer; the same memory tiers and adapters apply without forking the runtime.
Extensibility: storage, RAG, and “not in core v1”
- Pluggable adapters live under `adapters/` with a registry model — see Adapter registry (and the capability registry for grants).
- Vector / RAG are not built into the memory contract’s deterministic `list` semantics. In practice, teams combine `memory.list` for structured keys with an HTTP or custom adapter step that talks to Pinecone, Chroma, Weaviate, etc., then feed compact results back into the graph.
- Shipped v1.1 layer: optional metadata and bounded `list` filters (tags, time windows, pagination) are in the runtime; the RFC doc tracks rationale and edge cases for that additive design.
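That hybrid pattern can be sketched as follows. Both helpers are purely hypothetical: `list_records` fakes the contract's deterministic list, and `vector_search` stands in for a Pinecone/Chroma/Weaviate client call behind an HTTP or custom adapter step:

```python
def list_records(namespace, kind):
    """Stand-in for deterministic memory.list over structured keys."""
    store = {
        ("long_term", "fact"): [
            ("policy_a", {"text": "refunds within 30 days"}),
        ],
    }
    return store.get((namespace, kind), [])


def vector_search(query, top_k=3):
    """Placeholder for an external vector-store call; scores are invented."""
    return [{"text": "related doc snippet", "score": 0.87}]


def build_context(query):
    # Deterministic, keyed records first ...
    structured = [body["text"] for _, body in list_records("long_term", "fact")]
    # ... then semantic hits, filtered to a compact, bounded set.
    semantic = [hit["text"] for hit in vector_search(query) if hit["score"] > 0.5]
    # Only this small merged slice re-enters the graph, not the whole corpus.
    return structured + semantic
```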
Related safety and ops docs: Capability grant model · Sandbox execution profile · External orchestration guide.
How this improves agents (what to measure)
- Continuity: explicit checkpoints and prune paths reduce “forgot we already notified” classes of bugs when graphs are written to namespaces intentionally.
- Context efficiency: size economics and methodology are summarized on Benchmarks and Benchmarks hub — use them instead of hand-wavy multipliers; your mileage depends on workload shape and tokenizer alignment.
- Determinism and validation: structured diagnostics and Mermaid/IR introspection (Graph introspection) make regressions easier to bisect than free-form prompt drift.
- Multi-step and multi-agent patterns: includes/modules encode retries and timeouts; coordination tier supports handoff stories (State discipline).
Examples on GitHub: `examples/openclaw/daily_digest.lang` (and sibling strict variant) alongside `demo/session_budget_enforcer.lang`.
Getting started
- Clone `github.com/sbhooley/ainativelang` and install a dev checkout: `pip install -e ".[dev,web]"` (see Install for environment notes).
- Point your agent at `AI_AGENT_QUICKSTART_OPENCLAW.md` or wire MCP per OpenClaw integration / MCP host integrations.
- Add patterns from `modules/common/` (for example memory helpers) and start persisting `session`/`workflow` state intentionally.
In short: AINL is a path from stateless prompt chains to production-shaped, memory-aware workers with pluggable, auditable state — the structured layer many agent frameworks leave underspecified. This site is the map; the repo is the machinery.
Doc and source index (used above)
| Topic | Link |
| --- | --- |
| State tiers | /docs/architecture/STATE_DISCIPLINE |
| Memory v1 | /docs/adapters/MEMORY_CONTRACT |
| Memory v1.1 RFC | /docs/adapters/MEMORY_CONTRACT_V1_1_RFC |
| Release notes (v1.2.x memory + runtime) | /docs/RELEASE_NOTES |
| Integration narrative | /docs/INTEGRATION_STORY |
| Whitepaper (§7 state, §10 ops + memory cross-links) | /whitepaper |
| AINL · OpenClaw unified integration | /docs/ainl_openclaw_unified_integration |
| MCP host hub | /docs/getting_started/HOST_MCP_INTEGRATIONS |
| OpenClaw skill / bootstrap | /docs/OPENCLAW_INTEGRATION |
| ZeroClaw | /docs/ZEROCLAW_INTEGRATION |
| OpenClaw adapters | /docs/adapters/OPENCLAW_ADAPTERS |
| Intelligence / monitors | /docs/INTELLIGENCE_PROGRAMS |
| Unified OpenClaw monitoring | /docs/operations/UNIFIED_MONITORING_GUIDE |
| Token budget bridge | /docs/openclaw/BRIDGE_TOKEN_BUDGET_ALERT |
| Orchestration + MCP | /docs/operations/EXTERNAL_ORCHESTRATION_GUIDE |
| Capability grants | /docs/operations/CAPABILITY_GRANT_MODEL |
| Sandbox profile | /docs/operations/SANDBOX_EXECUTION_PROFILE |
| Graph / Mermaid | /docs/architecture/GRAPH_INTROSPECTION |
| Adapter registry | /docs/reference/ADAPTER_REGISTRY |
| Capability registry | /docs/reference/CAPABILITY_REGISTRY |
| Benchmarks (site) | /benchmark |
| Benchmarks (docs hub) | /docs/benchmarks |
| Install | /docs/INSTALL |
| MCP marketing | /mcp |
| Agent quickstart (repo) | AI_AGENT_QUICKSTART_OPENCLAW.md |
| OpenClaw bridge README | openclaw/bridge/README.md |
| modules/common/ | Browse on GitHub |
