
How to Connect AINL to OpenAI / ChatGPT

Use AINL's http adapter to call the OpenAI Chat Completions API from a deterministic workflow. Covers auth headers, request body, response handling, and branching.

March 18, 2026


OpenAI's Chat Completions API is an HTTP endpoint. AINL reaches it through the http adapter, the same way it reaches any other API. This guide shows you a working workflow with correct headers, a proper request body, and response branching.

If you've read How to Connect AINL to Claude, the pattern is almost identical — different headers, different endpoint, same AINL mechanics.


Prerequisites

  • AINL installed and working (ainl-validate --version passes)
  • An OpenAI API key from platform.openai.com
  • The http adapter enabled at runtime (disabled by default; requires --enable-adapter http)

The http adapter recap

The http adapter's Post verb:

R http.Post <url> <body_var> [headers_var] [timeout_s] ->resp

Response envelope fields:

| Field | Type | Meaning |
|---|---|---|
| ok | bool | true if the call completed with a 2xx status |
| status_code | int or null | HTTP status code |
| error | str or null | Transport error message, if any |
| body | any | Decoded response body (JSON or string) |
| url | str | URL that was called |
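To make the envelope concrete, here is a hypothetical successful response expressed as a Python dict. The field names come from the table above; the body value is illustrative, not a real API response.

```python
# Hypothetical http-adapter envelope for a successful call.
# Field names follow the table above; values are illustrative only.
envelope = {
    "ok": True,
    "status_code": 200,
    "error": None,
    "body": {"choices": [{"message": {"content": "AINL is ..."}}]},
    "url": "https://api.openai.com/v1/chat/completions",
}

# ok mirrors the "2xx status" condition described above.
assert envelope["ok"] == (200 <= envelope["status_code"] < 300)
```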


The workflow

S core web /api
E /ask P ->L_ask ->resp

L_ask:
  Set headers {"Authorization": "Bearer YOUR_OPENAI_KEY", "Content-Type": "application/json"}
  Set body {"model": "gpt-4o", "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.openai.com/v1/chat/completions" body headers ->resp
  J resp

Breaking it down:

  • S core web /api — web service, path prefix /api
  • E /ask P ->L_ask ->resp — POST /api/ask, runs label L_ask, returns resp
  • Set headers {...} — the two required OpenAI headers: Authorization (Bearer token) and Content-Type
  • Set body {...} — a minimal Chat Completions request body with model and messages
  • R http.Post ... ->resp — POST to the completions endpoint, response goes into resp
  • J resp — return resp as JSON output
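For comparison, the same request the workflow issues can be written as plain Python with the standard library. This is an illustrative equivalent, not AINL internals; YOUR_OPENAI_KEY is a placeholder.

```python
import json
import urllib.request

# The same HTTP call the workflow makes, built with the stdlib.
# YOUR_OPENAI_KEY is a placeholder; substitute a real key before sending.
req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "What is AINL?"}],
    }).encode(),
    headers={
        "Authorization": "Bearer YOUR_OPENAI_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment to actually send the request (requires a valid key):
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp))
```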

Pass the API key at runtime (don't hardcode it)

Instead of embedding your key in the source, pass it in via the initial frame:

ainl run ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.openai.com \
  --http-timeout-s 30 \
  --frame "{\"OPENAI_KEY\": \"$OPENAI_API_KEY\"}"

Then reference OPENAI_KEY as a variable in the workflow:

L_ask:
  Set headers {"Authorization": "Bearer " + OPENAI_KEY, "Content-Type": "application/json"}
  Set body {"model": "gpt-4o", "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.openai.com/v1/chat/completions" body headers ->resp
  J resp

In strict mode, bare unquoted identifiers like OPENAI_KEY are resolved as frame variable references, not treated as literal strings.
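A toy sketch of that resolution rule, in Python rather than AINL internals: quoted tokens stay literal, bare identifiers are looked up in the frame. The `resolve` helper and the frame contents are hypothetical.

```python
# Toy illustration of strict-mode resolution (not AINL's implementation):
# quoted tokens are literals, bare identifiers are frame lookups.
frame = {"OPENAI_KEY": "sk-test-123"}  # hypothetical initial frame

def resolve(token: str) -> str:
    if token.startswith('"') and token.endswith('"'):
        return token.strip('"')   # quoted: literal string
    return frame[token]           # bare: frame variable (KeyError if missing)

assert resolve('"Bearer "') == "Bearer "
assert resolve("OPENAI_KEY") == "sk-test-123"
```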


Branch on success or failure

L_ask:
  Set headers {"Authorization": "Bearer " + OPENAI_KEY, "Content-Type": "application/json"}
  Set body {"model": "gpt-4o", "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.openai.com/v1/chat/completions" body headers ->resp
  If resp.ok ->L_success ->L_error

L_success:
  J resp.body

L_error:
  Set out {"error": "openai_call_failed", "status": resp.status_code, "detail": resp.error}
  J out

The If resp.ok branch is compiled into the workflow graph before runtime. The model doesn't decide whether to check for errors — the control flow is enforced by the compiled graph.
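The idea of branching baked into a static graph can be sketched in Python. This is a conceptual model, not AINL's actual compiler output: the node names match the workflow above, but the graph structure is invented for illustration.

```python
# Conceptual sketch (not AINL internals): the If branch becomes a fixed
# edge in a graph, so the error path exists whether or not a model "decides"
# to take it.
graph = {
    "L_ask":     {"kind": "branch", "true": "L_success", "false": "L_error"},
    "L_success": {"kind": "return", "value": "resp.body"},
    "L_error":   {"kind": "return", "value": "out"},
}

def next_node(node: str, resp_ok: bool) -> str:
    """Walk one edge of the compiled graph based on the branch condition."""
    step = graph[node]
    return step["true"] if resp_ok else step["false"]

assert next_node("L_ask", True) == "L_success"
assert next_node("L_ask", False) == "L_error"
```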


Validate and run locally

Save the workflow as openai_ask.ainl, then:

# Validate syntax and semantics
ainl-validate openai_ask.ainl --strict

# Run it
ainl run openai_ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.openai.com \
  --http-timeout-s 30 \
  --frame "{\"OPENAI_KEY\": \"$OPENAI_API_KEY\"}"

On success, resp.body contains the OpenAI response JSON; the assistant's reply is at choices[0].message.content.
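Extracting that field from the returned JSON looks like this in Python. The response shape follows OpenAI's documented Chat Completions format; the content string is a made-up example.

```python
import json

# Parse a Chat Completions response and pull out the assistant's reply.
# The structure follows OpenAI's documented format; the content is invented.
raw = """{
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "AINL is a workflow language."}}
  ]
}"""
data = json.loads(raw)
answer = data["choices"][0]["message"]["content"]
print(answer)
```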


Using the runner service (HTTP API)

Start the runner service:

uvicorn scripts.runtime_runner_service:app --port 8000

Submit via POST /run:

curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "code": "S core web /api\nE /ask P ->L_ask ->resp\n\nL_ask:\n  Set headers {\"Authorization\": \"Bearer sk-...\", \"Content-Type\": \"application/json\"}\n  Set body {\"model\": \"gpt-4o\", \"messages\": [{\"role\": \"user\", \"content\": \"What is AINL?\"}]}\n  R http.Post \"https://api.openai.com/v1/chat/completions\" body headers ->resp\n  J resp",
    "strict": true,
    "allowed_adapters": ["core", "http"],
    "adapters": {
      "http": {
        "allow_hosts": ["api.openai.com"],
        "timeout_s": 30
      }
    }
  }'

The runner service enforces a server-level security floor — callers can only restrict further, not widen the allowed adapter surface.
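A "restrict but never widen" floor amounts to a set intersection. This is a minimal sketch of the policy, assuming a server-side floor of core and http; the names here are hypothetical, not the runner's actual config keys.

```python
# Sketch of a security floor: the effective adapter set is the intersection
# of the server's floor and the caller's request, so a caller can narrow
# the surface but never widen it. SERVER_FLOOR is an assumed deploy-time value.
SERVER_FLOOR = {"core", "http"}

def effective_adapters(requested: set[str]) -> set[str]:
    return SERVER_FLOOR & requested

assert effective_adapters({"core", "http", "fs"}) == {"core", "http"}  # fs denied
assert effective_adapters({"core"}) == {"core"}                        # narrowing ok
```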


Wiring multiple models in one workflow

You're not limited to one LLM call. Because AINL is a compiled graph, you can call OpenAI for one step and a different API for another:

S core web /api
E /summarize P ->L_summarize ->result

L_summarize:
  Set oai_headers {"Authorization": "Bearer " + OPENAI_KEY, "Content-Type": "application/json"}
  Set oai_body {"model": "gpt-4o", "messages": [{"role": "user", "content": "Summarize this in one sentence."}]}
  R http.Post "https://api.openai.com/v1/chat/completions" oai_body oai_headers ->oai_resp
  If oai_resp.ok ->L_done ->L_error

L_done:
  J oai_resp.body

L_error:
  Set result {"error": "model_call_failed"}
  J result

Each call is a node in the graph. The runtime walks the edges, which makes the flow predictable, inspectable, and auditable.



AI Native Lang Team

The team behind AI Native Lang — building deterministic AI workflow infrastructure.
