AI Native Lang

MCP

Bring AINL into your AI IDEs with MCP.

AINL ships a thin, stdio-only MCP server (ainl-mcp) so Gemini CLI, Claude Code, Codex-style SDKs, and other MCP-compatible hosts can validate, compile, run, and introspect AINL workflows as first-class tools, not just text snippets. For OpenClaw, use skills/openclaw/ plus ainl install-mcp --host openclaw (merges mcpServers.ainl into ~/.openclaw/openclaw.json). For ZeroClaw, use the skill plus ainl install-mcp --host zeroclaw (merges into ~/.zeroclaw/mcp.json). For Hermes Agent, run ainl install-mcp --host hermes (writes a YAML mcp_servers.ainl entry in ~/.hermes/config.yaml; deterministic ainl_run plus --emit hermes-skill). Start from Easy install, MCP host integrations, OpenClaw integration, ZeroClaw integration, or Hermes integration (one-pager).

Workflow-level tools · Vendor-neutral surface · Safe-default profiles · Exposure profiles

Overview

A workflow-level, vendor-neutral surface.

The MCP server lives in scripts/ainl_mcp_server.py with CLI entrypoint ainl-mcp. It's a thin layer over the existing compiler and runtime — not a replacement for the runner service and not an agent host or MCP gateway.

What it does

Exposes AINL-specific tools like ainl_validate, ainl_compile, ainl_run, ainl_capabilities, and ainl_security_report to MCP hosts via stdio, with safe defaults on adapters and limits.

What it's not

It does not turn AINL into an agent host, orchestration fabric, or sandbox. It is an integration boundary so existing MCP tools can call AINL safely within their own policies.

How it runs

Install it with pip install -e ".[mcp]", then launch ainl-mcp. Hosts like Gemini CLI and Claude Code speak stdio MCP v1 and treat AINL as just another tool with strong capability metadata.
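For hosts that read an mcpServers map (the shape ainl install-mcp merges for OpenClaw, per the intro above), a registration entry might look like the following sketch. The exact file location and schema depend on the host, and the profile name "default" is an assumption:

```json
{
  "mcpServers": {
    "ainl": {
      "command": "ainl-mcp",
      "args": [],
      "env": {
        "AINL_MCP_PROFILE": "default"
      }
    }
  }
}
```

The host spawns the command and speaks MCP over the process's stdin/stdout, which is why no port or URL appears in the entry.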

Surface

Tools and resources exposed over MCP.

The server exposes a small, focused set of tools and resources so MCP hosts can validate and run workflows, plus introspect adapters and security profiles.

| MCP Tool | Purpose |
| --- | --- |
| ainl_validate | Validate an AINL program for syntax, graph, and capability issues. |
| ainl_compile | Compile AINL source to canonical graph IR without executing. |
| ainl_run | Compile (if needed) and execute a workflow with safe-default limits. |
| ainl_capabilities | Return adapter and capability metadata similar to the /capabilities HTTP endpoint. |
| ainl_security_report | Summarize security profiles and adapter privilege information for a host. |

| MCP Resource | Description |
| --- | --- |
| ainl://adapter-manifest | Full adapter manifest with verbs, effect defaults, privilege tiers, and safety flags. |
| ainl://security-profiles | Named security profiles, mirroring tooling/security_profiles.json, for common deployment scenarios. |
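Once registered, a host invokes these tools through the standard MCP tools/call request (JSON-RPC 2.0 over stdio). A sketch of such a call follows; the "source" argument name is an assumption about ainl_validate's input schema, and the workflow snippet is a placeholder, not real AINL syntax:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ainl_validate",
    "arguments": {
      "source": "..."
    }
  }
}
```

The host handles the transport and result framing; from the tool's point of view it simply receives the arguments object.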

Exposure & grants

Capability grants and exposure profiles for MCP.

Like the HTTP runner, the MCP server applies a capability grant and safe-default limits at startup, then lets hosts further narrow which tools and resources are visible.

Server-level MCP grant

AINL_MCP_PROFILE tells the MCP server which named security profile to load as its base grant, using the same capability grant model as the runner. Callers can only tighten this grant, never widen it.
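The "tighten, never widen" rule can be modeled as set intersection: the effective grant is always a subset of the base grant loaded from the profile. This is an illustrative sketch, not the actual AINL grant implementation, and the capability names are invented:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Grant:
    """A capability grant modeled as an immutable set of allowed capabilities."""
    capabilities: frozenset

    def narrowed(self, requested: set) -> "Grant":
        """Return a grant containing only capabilities the caller requested
        AND the base grant already allows. Requests outside the base grant
        are dropped, so callers can tighten the grant but never widen it."""
        return Grant(self.capabilities & frozenset(requested))


# Base grant loaded from the named security profile (names invented):
base = Grant(frozenset({"fs.read", "http.get"}))

# A caller asking for an extra privilege still ends up with a subset:
effective = base.narrowed({"fs.read", "http.post"})
```

Because narrowing is an intersection, composing it any number of times can only shrink the effective set, which matches the guarantee stated above.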

Exposure profiles

AINL_MCP_EXPOSURE_PROFILE pulls a profile from tooling/mcp_exposure_profiles.json to pre-configure which tools and resources are exposed. Use this when running AINL behind a gateway or centralized MCP manager.
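The on-disk structure of tooling/mcp_exposure_profiles.json is not documented here; a plausible shape, with the profile key "minimal" entirely invented (the tool and resource names come from the tables above), might be:

```json
{
  "minimal": {
    "tools": ["ainl_validate", "ainl_compile"],
    "resources": ["ainl://security-profiles"]
  }
}
```

A gateway would then set AINL_MCP_EXPOSURE_PROFILE=minimal so every spawned server exposes the same narrowed surface.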

Fine-grained env controls

AINL_MCP_TOOLS, AINL_MCP_TOOLS_EXCLUDE, AINL_MCP_RESOURCES, and AINL_MCP_RESOURCES_EXCLUDE let you add or remove specific tools and resources by name or URI. Inclusion lists take precedence over exclusion.
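The precedence rule can be sketched as a small filter: when an inclusion list is set it wins outright, otherwise the exclusion list is subtracted. This is an illustrative model, and the real server may parse these variables differently (for example, the comma-separated format here is an assumption):

```python
import os


def exposed_tools(all_tools, env=None):
    """Compute which MCP tools are visible given the fine-grained
    AINL_MCP_TOOLS / AINL_MCP_TOOLS_EXCLUDE variables (sketch)."""
    env = os.environ if env is None else env
    include = env.get("AINL_MCP_TOOLS", "")
    exclude = env.get("AINL_MCP_TOOLS_EXCLUDE", "")
    if include.strip():
        # An inclusion list takes precedence over any exclusion list.
        wanted = {t.strip() for t in include.split(",") if t.strip()}
        return [t for t in all_tools if t in wanted]
    dropped = {t.strip() for t in exclude.split(",") if t.strip()}
    return [t for t in all_tools if t not in dropped]


tools = ["ainl_validate", "ainl_compile", "ainl_run"]
# With only the exclusion variable set, ainl_run is hidden:
print(exposed_tools(tools, {"AINL_MCP_TOOLS_EXCLUDE": "ainl_run"}))
```

The same pattern would apply to AINL_MCP_RESOURCES and AINL_MCP_RESOURCES_EXCLUDE, keyed by resource URI instead of tool name.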

For concrete examples and recommended profiles, see the External Orchestration Guide and Capability Grant Model.

Where MCP fits in your AINL stack.

MCP sits at the edge of your AI tooling: the host (IDE, CLI, agent framework) stays in charge of user identity, secrets, and sandboxing, while AINL provides a safe, workflow-level surface for validating, compiling, and running programs. The same capability grant model and security profiles used by the HTTP runner apply here as well.

In practice, you can let engineers design and validate workflows inside their MCP-compatible tools, then hand the same programs to the runner service for high-throughput production execution — without changing the underlying AINL semantics.