# How to Use AINL with Cursor, Claude Code, or Gemini CLI (MCP Path)
AINL ships a native MCP (Model Context Protocol) server. Once it's running, any MCP-compatible AI coding tool — Cursor, Claude Code, Gemini CLI, or a custom agent — can call AINL tools directly to validate, compile, inspect, and execute AINL workflows without writing any integration code.
This guide covers the full setup: install, configuration, tool descriptions, and your first MCP tool calls.
## What the MCP server exposes

The `ainl-mcp` server is defined in `scripts/ainl_mcp_server.py`. It uses stdio transport (no HTTP server needed) and exposes five tools and two resources:

### Tools
| Tool | Side effects | What it does |
|---|---|---|
| `ainl_validate` | None | Check AINL source for syntax and semantic validity. Returns `ok`, `errors`, `warnings`. |
| `ainl_compile` | None | Compile AINL source to canonical graph IR. Returns the full IR JSON. |
| `ainl_capabilities` | None | Discover available adapters, verbs, privilege tiers, and effect metadata. |
| `ainl_security_report` | None | Generate a per-label privilege/security map for a workflow. |
| `ainl_run` | Adapter calls (restricted) | Compile, policy-validate, and execute a workflow. By default, only the `core` adapter is allowed. |
### Resources

| URI | Contents |
|---|---|
| `ainl://adapter-manifest` | Full adapter catalog with verbs, tiers, effects, and privilege levels |
| `ainl://security-profiles` | Named security profiles for common deployment scenarios |
### Security posture

The MCP server is safe by default:

- `ainl_run` is restricted to the `core` adapter. It will not make network calls, write files, or call external APIs unless you explicitly set a broader security profile via `AINL_MCP_PROFILE`.
- Callers can add further restrictions via a `policy` parameter but cannot widen beyond the server defaults.
- The `validate`, `compile`, `capabilities`, and `security_report` tools have zero side effects.
## Step 1: Install with MCP support

AINL's MCP server requires the `mcp` Python extra. From your AINL repository:

```shell
pip install -e ".[mcp]"
```

Verify the command is available:

```shell
ainl-mcp --help
```
If you see the help output, the MCP server is ready to use.
## Step 2: Connect to Cursor

Cursor reads an `mcp.json` (or `cursor.json`) configuration file from your project root or `~/.cursor/`. Add an entry for the AINL server:

```json
{
  "mcpServers": {
    "ainl": {
      "command": "ainl-mcp",
      "args": [],
      "env": {}
    }
  }
}
```
If `ainl-mcp` is not on your `$PATH` (e.g. you're using a virtualenv), use the full path:

```json
{
  "mcpServers": {
    "ainl": {
      "command": "/path/to/your/.venv/bin/ainl-mcp",
      "args": [],
      "env": {}
    }
  }
}
```
After saving, restart or reload Cursor. The AINL tools will be discoverable by the Cursor agent.
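Rather than hand-editing the JSON, you can script the registration. Below is a minimal sketch, assuming the config lives at `.cursor/mcp.json` in your project (adjust the path for your setup); `register_ainl_server` is an illustrative helper, not part of AINL:

```python
import json
from pathlib import Path

def register_ainl_server(config_path: str, command: str = "ainl-mcp") -> dict:
    """Merge an AINL MCP server entry into a Cursor-style mcp.json,
    preserving any servers already registered there."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["ainl"] = {"command": command, "args": [], "env": {}}
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    return config
```

Point `command` at your venv's `ainl-mcp` binary if it is not on `$PATH`.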
## Step 3: Connect to Claude Code

Claude Code (the `claude` CLI) follows the same MCP stdio pattern. Create or edit `~/.claude/mcp.json`:

```json
{
  "servers": {
    "ainl": {
      "command": "ainl-mcp",
      "args": [],
      "type": "stdio"
    }
  }
}
```
Or pass it directly when launching a session:

```shell
claude --mcp-config mcp.json "Validate this AINL program: L1:\n R core.ADD 2 3 ->x\n J x"
```

Claude Code will call `ainl_validate` automatically when it needs to check AINL syntax during code generation or review.
## Step 4: Connect to Gemini CLI

Gemini CLI also supports MCP stdio servers. The configuration key may vary by version — check the Gemini CLI docs for your installed version. A typical invocation:

```shell
gemini --mcp-server "ainl-mcp" "What adapters does AINL support?"
```

Or add it to your Gemini CLI config file:

```json
{
  "tools": [
    {
      "type": "mcp",
      "name": "ainl",
      "command": "ainl-mcp"
    }
  ]
}
```
The server advertises its tool descriptions over stdio when the host connects — the host discovers them automatically.
## Step 5: First tool calls

Once connected, your AI agent can call AINL tools by name. Here's what each call looks like from the MCP host's perspective:

### Validate a workflow

```json
{
  "tool": "ainl_validate",
  "params": {
    "code": "L1:\n R core.ADD 2 3 ->x\n J x",
    "strict": true
  }
}
```
Response:

```json
{
  "ok": true,
  "errors": [],
  "warnings": []
}
```
### Compile to graph IR

```json
{
  "tool": "ainl_compile",
  "params": {
    "code": "L1:\n R core.ADD 2 3 ->x\n J x"
  }
}
```
Response:

```json
{
  "ok": true,
  "ir": {
    "ir_version": "1.0",
    "nodes": [...],
    "edges": [...]
  }
}
```
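If you want to post-process the IR, e.g. to schedule or visualize the graph, a topological sort over its nodes and edges is a natural first step. The field names below (`id`, `from`, `to`) are assumptions for illustration; check the IR JSON your build actually emits:

```python
from collections import defaultdict, deque

def topo_order(ir: dict) -> list:
    """Topologically sort a compiled graph IR using Kahn's algorithm.
    Assumes nodes look like {"id": ...} and edges like
    {"from": ..., "to": ...}; the real AINL IR schema may differ."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    ids = [n["id"] for n in ir["nodes"]]
    for e in ir["edges"]:
        succ[e["from"]].append(e["to"])
        indeg[e["to"]] += 1
    queue = deque(i for i in ids if indeg[i] == 0)
    order = []
    while queue:
        nid = queue.popleft()
        order.append(nid)
        for nxt in succ[nid]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order
```

If `order` ends up shorter than `ir["nodes"]`, the IR contains a cycle.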
### Run a workflow (`core` adapter only by default)

```json
{
  "tool": "ainl_run",
  "params": {
    "code": "L1:\n R core.ADD 2 3 ->x\n J x",
    "strict": true
  }
}
```
Response:

```json
{
  "ok": true,
  "trace_id": "...",
  "label": "L1",
  "out": {"x": 5},
  "runtime_version": "1.0.0",
  "ir_version": "1.0"
}
```
### Audit a workflow's privilege footprint

```json
{
  "tool": "ainl_security_report",
  "params": {
    "code": "L1:\n R http.Post \"https://api.example.com/hook\" payload ->resp\n J resp"
  }
}
```
This returns a per-label map of which adapters and privilege tiers the workflow touches — useful before deploying.
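As a sketch of how an agent might consume such a report, the helper below walks a per-label map and returns the highest privilege tier it finds. The report shape and tier names here are assumptions for illustration, not the documented `ainl_security_report` schema:

```python
def max_tier(report: dict,
             tier_order=("read", "compute", "store", "network", "operator")) -> str:
    """Return the highest privilege tier a workflow touches, given a
    per-label report like {"L1": {"tiers": ["network"]}, ...}.
    Tiers are ranked by their position in tier_order (assumed ordering)."""
    rank = {t: i for i, t in enumerate(tier_order)}
    highest = tier_order[0]
    for entry in report.values():
        for tier in entry.get("tiers", []):
            if rank.get(tier, -1) > rank[highest]:
                highest = tier
    return highest
```

A deployment gate could then refuse any workflow whose `max_tier` exceeds what the target environment permits.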
## Controlling what the MCP server exposes

You can scope the MCP server's surface using environment variables:

| Variable | Effect |
|---|---|
| `AINL_MCP_EXPOSURE_PROFILE` | Named profile from `tooling/mcp_exposure_profiles.json` |
| `AINL_MCP_TOOLS` | Comma-separated list of tools to expose (inclusion) |
| `AINL_MCP_TOOLS_EXCLUDE` | Comma-separated list of tools to hide |
| `AINL_MCP_RESOURCES` | Comma-separated list of resource URIs to expose |
| `AINL_MCP_RESOURCES_EXCLUDE` | Comma-separated list of resource URIs to hide |
| `AINL_MCP_PROFILE` | Named security profile that sets the server-level capability grant |
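One plausible way the inclusion and exclusion lists compose is "intersect with the include list, then subtract the exclude list". That precedence is an assumption, not documented behavior; the sketch below only illustrates that reading:

```python
def resolve_exposed(all_tools, include_csv=None, exclude_csv=None):
    """Compute the exposed tool set under an assumed precedence:
    an optional inclusion list is applied first, then the exclusion
    list, mirroring AINL_MCP_TOOLS / AINL_MCP_TOOLS_EXCLUDE."""
    exposed = set(all_tools)
    if include_csv:
        exposed &= {t.strip() for t in include_csv.split(",")}
    if exclude_csv:
        exposed -= {t.strip() for t in exclude_csv.split(",")}
    return sorted(exposed)
```

Under this reading, setting both variables can never widen the surface beyond the inclusion list.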
### Named exposure profiles

| Profile | Tools exposed |
|---|---|
| `validate_only` | `ainl_validate`, `ainl_compile` |
| `inspect_only` | `validate_only` tools plus `ainl_capabilities` and `ainl_security_report` |
| `safe_workflow` | All 5 tools |
| `full` | All 5 tools |

Example — start the server in validate-only mode:

```shell
AINL_MCP_EXPOSURE_PROFILE=validate_only ainl-mcp
```
This is useful when you want your AI agent to be able to check AINL syntax but not execute anything.
For Claude Code / Cowork / Dispatch-style environments: start with inspect_only or validate_only to minimize the visible tool surface and avoid execution by default. Only enable safe_workflow after operators have reviewed the security profile, capability grants, policies, limits, and adapter exposure for that environment. AINL is a scoped tool provider and deterministic workflow layer — not the host, not a gateway, and not a control plane.
For a step-by-step end-to-end example showing validator → inspector → safe-runner roles, see the External Orchestration Guide (section: End-to-end example: validator, inspector, safe runner).
## How your agent will use AINL tools

Once connected, an MCP-enabled agent like Cursor or Claude Code will:

- Discover tools on first connection — the server sends tool names and descriptions
- Call `ainl_validate` when it generates or modifies AINL code, to check for errors
- Call `ainl_compile` to inspect the IR and understand the graph structure
- Call `ainl_security_report` to review privilege tiers before proposing a deployment
- Call `ainl_run` to test `core`-only workflows inline during development

The agent doesn't need to know the AINL grammar — it can validate and repair its own output using `ainl_validate` feedback in a loop.
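That validate-and-repair loop is easy to sketch. The `validate` and `fix` callables below stand in for an MCP `ainl_validate` call and a model rewrite step; `repair_loop` itself is illustrative, not part of AINL:

```python
from typing import Callable

def repair_loop(code: str,
                validate: Callable[[str], dict],
                fix: Callable[[str, list], str],
                max_rounds: int = 3) -> tuple[str, bool]:
    """Repeatedly validate AINL source and feed the reported errors
    back to a repair function until it passes or we run out of rounds.
    Returns (final_code, passed)."""
    for _ in range(max_rounds):
        report = validate(code)
        if report.get("ok"):
            return code, True
        code = fix(code, report.get("errors", []))
    return code, False
```

In practice `validate` wraps the MCP tool call and `fix` prompts the model with the error list.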
## Troubleshooting

**MCP SDK not installed error:**

```shell
pip install -e ".[mcp]"
```

**`ainl-mcp`: command not found:**

Your virtualenv may not be activated, or the install didn't put the script on `$PATH`. Use the full path in your MCP config:

```shell
which ainl-mcp
# /Users/you/project/.venv/bin/ainl-mcp
```

**Agent can see the tools but `ainl_run` fails with a policy error:**

The MCP server defaults to `core`-only execution. If your workflow uses `http`, `fs`, `sqlite`, or other adapters, you need to set a broader security profile:

```shell
AINL_MCP_PROFILE=sandbox_network_restricted ainl-mcp
```

Available profiles: `local_minimal`, `sandbox_compute_and_store`, `sandbox_network_restricted`, `operator_full`. See `tooling/security_profiles.json` for the full definitions.
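Before reaching for a broader profile, it can help to diff the adapters a workflow needs against what a profile grants. A minimal sketch; the adapter lists passed in are illustrative and not read from `tooling/security_profiles.json`:

```python
def profile_allows(granted_adapters, workflow_adapters):
    """Return (allowed, missing): whether every adapter the workflow
    touches is covered by the profile's grant, and which are not.
    Adapter names here are illustrative examples."""
    missing = set(workflow_adapters) - set(granted_adapters)
    return not missing, sorted(missing)
```

If `missing` is non-empty, pick the narrowest profile that covers exactly those adapters rather than jumping straight to `operator_full`.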
## What's next

- Your first AINL workflow — write, validate, and run programs from the CLI
- How to connect AINL to Claude — call Claude from an AINL workflow using the `http` adapter
- How to connect AINL to OpenAI — same pattern for OpenAI
- Full MCP server docs + end-to-end validator/inspector/runner example: External Orchestration Guide (section 9 and End-to-end example: validator, inspector, safe runner)
- Batch/worktree automation guide: Batch Automation Guide — inspect-first, audit-friendly, deterministic batch flows
