AI Native Lang

How to Connect AINL to Claude (Anthropic API)

Use AINL's http adapter to call Anthropic's Claude API from a deterministic workflow. Covers auth, request construction, and handling the response.

March 18, 2026·5 min read
#ainl #claude #anthropic #http-adapter #llm-integration


AINL programs call external APIs through the http adapter. Claude is just an HTTP API — so connecting AINL to Claude means writing an AINL workflow that:

  1. Builds a request payload (headers, body)
  2. Calls http.Post against Anthropic's API endpoint
  3. Branches or returns based on the response

This guide shows you exactly how to do that, using real adapter syntax and real Anthropic API shapes.
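For reference, here is the raw HTTP request the workflow will issue, sketched with Python's standard library. The request is built but not sent; the endpoint and header names follow Anthropic's documented Messages API, and the key placeholder is yours to fill in:

```python
import json
import urllib.request

# The same payload the AINL workflow builds in its Set body step.
body = {
    "model": "claude-opus-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What is AINL?"}],
}

# Build (but do not send) the request. These three headers are the ones
# Anthropic's Messages API requires.
req = urllib.request.Request(
    "https://api.anthropic.com/v1/messages",
    data=json.dumps(body).encode(),
    headers={
        "x-api-key": "YOUR_ANTHROPIC_API_KEY",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the actual call.
```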


Prerequisites

  • AINL installed and working (ainl-validate --version passes)
  • An Anthropic API key from console.anthropic.com
  • The http adapter enabled at runtime (it is disabled by default for security; you must pass --enable-adapter http to ainl run)

How the http adapter works

The http adapter is part of AINL's canonical core. Its Post verb signature is:

R http.Post <url> <body_var> [headers_var] [timeout_s] ->resp
  • url — string URL (quoted literal or frame variable)
  • body_var — a frame variable holding a JSON-serializable body
  • headers_var — optional frame variable holding a string-to-string dict of HTTP headers
  • resp — frame variable that receives the normalized response envelope

The response envelope has these fields:

| Field | Type | Meaning |
|---|---|---|
| ok | bool | true if the HTTP call completed with a 2xx status |
| status_code | int or null | HTTP status code returned |
| error | str or null | Transport error message, if any |
| body | any | Decoded response body (JSON or string) |
| url | str | URL that was called |
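As an illustration of how calling code consumes that envelope (the field values below are invented), the usual pattern is to inspect ok before touching body:

```python
# Hypothetical response envelope; field names match the table above,
# values are made up for illustration.
resp = {
    "ok": True,
    "status_code": 200,
    "error": None,
    "body": {"id": "msg_123", "content": [{"type": "text", "text": "..."}]},
    "url": "https://api.anthropic.com/v1/messages",
}

# Branch on `ok` first, exactly as the AINL workflows below do with If resp.ok.
if resp["ok"]:
    payload = resp["body"]
else:
    payload = {"error": resp["error"], "status": resp["status_code"]}
```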

Security note: The http adapter requires an explicit allowlist. At runtime, pass --enable-adapter http --http-allow-host api.anthropic.com. Never widen the host allowlist beyond what your workflow actually needs.
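The allowlist can be pictured as a simple host comparison before any request goes out. This Python sketch is illustrative only, not the adapter's actual implementation:

```python
from urllib.parse import urlparse

def host_allowed(url: str, allow_hosts: set) -> bool:
    """Reject any request whose host is not explicitly allowlisted."""
    return urlparse(url).hostname in allow_hosts

allow = {"api.anthropic.com"}
host_allowed("https://api.anthropic.com/v1/messages", allow)   # True
host_allowed("https://evil.example.com/v1/messages", allow)    # False
```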


The workflow

Here is a complete AINL program that calls Claude's Messages API:

S core web /api
E /ask P ->L_ask ->resp

L_ask:
  Set headers {"x-api-key": "YOUR_ANTHROPIC_API_KEY", "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  J resp

Breaking it down:

  • S core web /api — declare a web service, path prefix /api
  • E /ask P ->L_ask ->resp — POST endpoint /api/ask, handled by label L_ask, returns variable resp
  • Set headers {...} — build the required Anthropic headers as a frame dict
  • Set body {...} — build the messages payload
  • R http.Post ... ->resp — POST to Claude, store the full response envelope in resp
  • J resp — return resp as JSON

Keep the API key out of source

Hard-coding credentials in AINL source is fine for local experiments, but never commit a real key to version control. The better pattern is to pass the key in at runtime via the initial frame:

ainl run ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.anthropic.com \
  --http-timeout-s 30 \
  --frame '{"ANTHROPIC_KEY": "sk-ant-..."}'

Then reference it in the workflow as a variable:

S core web /api
E /ask P ->L_ask ->resp

L_ask:
  Set headers {"x-api-key": ANTHROPIC_KEY, "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  J resp

In strict mode, bare identifiers in read positions are treated as variable references — so ANTHROPIC_KEY (no quotes) is read from the frame, not treated as a literal string.
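A toy model of that read rule, in Python (illustrative only; resolve is a hypothetical helper, not part of AINL):

```python
# Strict-mode read semantics, sketched: quoted tokens are string literals,
# bare identifiers are looked up in the frame.
frame = {"ANTHROPIC_KEY": "sk-ant-example"}

def resolve(token: str, frame: dict) -> str:
    if token.startswith('"') and token.endswith('"'):
        return token[1:-1]      # quoted literal: strip the quotes
    return frame[token]         # bare identifier: frame variable lookup

resolve('"2023-06-01"', frame)    # literal -> '2023-06-01'
resolve("ANTHROPIC_KEY", frame)   # frame variable -> 'sk-ant-example'
```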


Branching on the response

Because AINL gives you explicit control flow, you can branch on whether Claude's response succeeded:

L_ask:
  Set headers {"x-api-key": ANTHROPIC_KEY, "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  If resp.ok ->L_success ->L_error

L_success:
  J resp.body

L_error:
  Set out {"error": "claude_call_failed", "status": resp.status_code}
  J out

This is the key advantage of AINL over prompt-loop agents: the branching logic is compiled into the graph before execution — the model doesn't decide whether to check for errors, the workflow enforces it.


Validate and run

# Validate
ainl-validate ask.ainl --strict

# Run with http adapter + host allowlist + API key from env
ainl run ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.anthropic.com \
  --http-timeout-s 30 \
  --frame "{\"ANTHROPIC_KEY\": \"$ANTHROPIC_API_KEY\"}"

On success, the output is a JSON envelope whose resp.body field contains Claude's Messages API response.
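Claude's Messages API returns the assistant's reply as a list of content blocks. Here is a small Python sketch of pulling the text out of resp.body (the body values below are invented for illustration, but the shape follows Anthropic's documented response format):

```python
# Invented example of resp.body, shaped like an Anthropic Messages API
# response: the reply lives in a list of content blocks.
resp_body = {
    "id": "msg_01ABC",
    "type": "message",
    "role": "assistant",
    "model": "claude-opus-4-5",
    "content": [
        {"type": "text", "text": "AINL is a deterministic workflow language."}
    ],
    "stop_reason": "end_turn",
}

# Concatenate all text blocks to recover the assistant's answer.
answer = "".join(
    block["text"] for block in resp_body["content"] if block["type"] == "text"
)
```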


Using the runner service (HTTP API)

If you're running the AINL runner service (scripts/runtime_runner_service.py), submit the workflow via the /run endpoint:

uvicorn scripts.runtime_runner_service:app --port 8000

Then POST your workflow:

curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "code": "S core web /api\nE /ask P ->L_ask ->resp\n\nL_ask:\n  Set headers {\"x-api-key\": \"sk-ant-...\", \"anthropic-version\": \"2023-06-01\", \"content-type\": \"application/json\"}\n  Set body {\"model\": \"claude-opus-4-5\", \"max_tokens\": 1024, \"messages\": [{\"role\": \"user\", \"content\": \"What is AINL?\"}]}\n  R http.Post \"https://api.anthropic.com/v1/messages\" body headers ->resp\n  J resp",
    "strict": true,
    "allowed_adapters": ["core", "http"],
    "adapters": {
      "http": {
        "allow_hosts": ["api.anthropic.com"],
        "timeout_s": 30
      }
    }
  }'

The runner enforces its own security floor — even if you pass a broader allowlist, the server-level grant caps what adapters can do.
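One way to picture that floor: the effective allowlist is the intersection of what the request asks for and what the server operator has granted. A hypothetical sketch (not the runner's actual code):

```python
def effective_hosts(requested: set, server_grant: set) -> set:
    """The request can never widen access beyond the server-level grant."""
    return requested & server_grant

# A request asking for an extra host gets capped to the server's grant.
effective_hosts(
    {"api.anthropic.com", "evil.example.com"},
    {"api.anthropic.com"},
)
```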



AI Native Lang Team

The team behind AI Native Lang — building deterministic AI workflow infrastructure.
