AI Native Lang

Security

Explicit capability boundaries for AI workers.

AINL moves security decisions out of the prompt and into the compiler. Every workflow declares exactly what it can touch — files, HTTP, databases, tools — and operators approve those profiles before execution ever starts. No implicit permissions. No ambient authority.

Core model

Capability-first, not ambient-authority

Traditional AI agents run with whatever permissions the host process has. AINL programs are different — every side effect is an explicit adapter call, and every adapter call maps to a declared capability.

Declared adapters

A workflow lists every adapter it may invoke at author time. The runtime refuses any call to an undeclared adapter, regardless of what the LLM suggested.
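
The refusal described above can be sketched as a dispatch-time allowlist check. The names below (WorkflowManifest, dispatch) are illustrative, not AINL's actual API:

```typescript
// Sketch: refusing undeclared adapter calls at dispatch time.
// WorkflowManifest and dispatch are hypothetical names for illustration.

interface WorkflowManifest {
  name: string;
  adapters: string[]; // declared by the author, fixed before execution
}

function dispatch(manifest: WorkflowManifest, adapter: string, op: string): string {
  // The runtime checks the manifest before any adapter code runs,
  // regardless of what the LLM suggested.
  if (!manifest.adapters.includes(adapter)) {
    throw new Error(`undeclared adapter '${adapter}': call refused`);
  }
  return `${adapter}.${op} permitted`;
}

const manifest: WorkflowManifest = { name: "fetch-report", adapters: ["core", "http"] };
dispatch(manifest, "http", "get"); // permitted: http was declared
// dispatch(manifest, "fs", "write") would throw: fs was never declared
```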

Operator-approved profiles

Operators choose a security profile — local_minimal, sandbox_compute_and_store, sandbox_network_restricted, or operator_full — that sets the adapter allowlist before any workflow runs.

No ambient authority

The AINL runtime has no implicit network, filesystem, or credential access. Everything that can touch the outside world must be explicitly granted and scoped.

Compile-time validation

The compiler validates capability declarations against known adapter signatures before producing executable IR. Type mismatches and undeclared side effects are errors, not runtime surprises.
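
This pass can be sketched as a pre-IR validation step that checks each call against a signature table. ADAPTER_SIGNATURES, Step, and validate are hypothetical names; AINL's real signature table and IR are not shown here:

```typescript
// Sketch of compile-time capability checking: every step's adapter call
// is validated against known signatures before IR is emitted.
type Signature = { params: string[]; returns: string };

// Hypothetical signature table for two adapter operations.
const ADAPTER_SIGNATURES: Record<string, Signature> = {
  "http.get": { params: ["string"], returns: "string" },
  "fs.read": { params: ["string"], returns: "string" },
};

interface Step { call: string; argTypes: string[] }

function validate(steps: Step[], declared: string[]): string[] {
  const errors: string[] = [];
  for (const step of steps) {
    const adapter = step.call.split(".")[0];
    const sig = ADAPTER_SIGNATURES[step.call];
    if (!declared.includes(adapter)) errors.push(`undeclared side effect: ${step.call}`);
    else if (!sig) errors.push(`unknown adapter operation: ${step.call}`);
    else if (sig.params.join() !== step.argTypes.join()) errors.push(`type mismatch in ${step.call}`);
  }
  return errors; // non-empty means compilation fails: errors, not runtime surprises
}
```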

Auditable execution

Every step, adapter invocation, and effect is a node in the compiled graph — inspectable, diffable, and loggable without touching model internals.

Hard runtime limits

Each profile enforces maximum steps, graph depth, adapter calls, and wall-clock time. Runaway workflows are terminated, not just throttled.
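
A budget-style enforcer captures the four limits above. The Limits and Budget shapes are illustrative, not AINL's real internals:

```typescript
// Sketch: hard limits tracked per run; exceeding any one terminates the workflow.
interface Limits { maxSteps: number; maxDepth: number; maxAdapterCalls: number; maxMillis: number }

class LimitExceeded extends Error {}

class Budget {
  steps = 0;
  adapterCalls = 0;
  private start = Date.now();
  constructor(private limits: Limits) {}

  // Called on every graph step; depth is the current graph depth.
  step(depth: number): void {
    if (++this.steps > this.limits.maxSteps) throw new LimitExceeded("step limit");
    if (depth > this.limits.maxDepth) throw new LimitExceeded("depth limit");
    if (Date.now() - this.start > this.limits.maxMillis) throw new LimitExceeded("wall-clock limit");
  }

  // Called before each adapter invocation.
  adapterCall(): void {
    if (++this.adapterCalls > this.limits.maxAdapterCalls) throw new LimitExceeded("adapter-call limit");
  }
}
```

Because the check throws rather than delaying, a runaway workflow is terminated, not throttled.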

Runtime profiles

Four built-in security profiles

Operators select the profile that matches their deployment context. Profiles are enforced by the runtime — they cannot be overridden by workflow code or LLM output.

local_minimal

Dev / dry-run

Designed for local authoring and graph debugging. No external I/O permitted. Safe for validating workflow structure without executing side effects.

Adapters
core only — all I/O adapters forbidden
Limits
500 steps · 10 depth · 0 adapter calls · 5 s

sandbox_compute_and_store

Air-gapped

Sandboxed container with local compute and storage but no outbound network. Suitable for offline environments and stateful local workflows.

Adapters
core, sqlite, fs, wasm, memory, cache
Limits
5 000 steps · 50 depth · 500 adapter calls · 30 s

sandbox_network_restricted

Controlled egress

Sandboxed container with controlled outbound HTTP. Operator configures an explicit host allowlist at the network or adapter-config layer.

Adapters
core, sqlite, fs, wasm, memory, cache, http, tools, queue
Limits
5 000 steps · 50 depth · 500 adapter calls · 30 s

operator_full

Trusted operator

Full adapter surface for trusted operator-managed deployments. Requires the operator to provide their own policy/approval engine, network egress controls, and secrets management.

Adapters
operator-defined allowlist — all adapters available by explicit choice
Limits
50 000 steps · 200 depth · 5 000 adapter calls · 300 s
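
The four profiles above can be restated as data. The Profile shape is an illustration, not AINL's actual configuration format; the adapter lists and numeric limits are taken from the profile descriptions above:

```typescript
// The four built-in profiles as a lookup table (illustrative shape).
interface Profile {
  adapters: string[] | "operator-defined";
  maxSteps: number;
  maxDepth: number;
  maxAdapterCalls: number;
  maxSeconds: number;
}

const PROFILES: Record<string, Profile> = {
  local_minimal: { adapters: ["core"], maxSteps: 500, maxDepth: 10, maxAdapterCalls: 0, maxSeconds: 5 },
  sandbox_compute_and_store: {
    adapters: ["core", "sqlite", "fs", "wasm", "memory", "cache"],
    maxSteps: 5000, maxDepth: 50, maxAdapterCalls: 500, maxSeconds: 30,
  },
  sandbox_network_restricted: {
    adapters: ["core", "sqlite", "fs", "wasm", "memory", "cache", "http", "tools", "queue"],
    maxSteps: 5000, maxDepth: 50, maxAdapterCalls: 500, maxSeconds: 30,
  },
  operator_full: { adapters: "operator-defined", maxSteps: 50000, maxDepth: 200, maxAdapterCalls: 5000, maxSeconds: 300 },
};

function adapterPermitted(profile: string, adapter: string): boolean {
  const p = PROFILES[profile];
  // operator_full defers to the operator's own allowlist; modelled here as permissive.
  return p.adapters === "operator-defined" || p.adapters.includes(adapter);
}
```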

Principle: AINL is a workflow engine, not a primary security boundary. It enforces what it can inside the process boundary. Network egress, secrets management, and container isolation are the operator's responsibility and should be layered on top.

Adapter model

Privilege tiers and effect classification

Every adapter belongs to a privilege tier. Profiles can forbid entire tiers without enumerating individual adapters.

Tier               | Examples                                 | Risk surface                                      | Available in
-------------------|------------------------------------------|---------------------------------------------------|---------------------------------
core               | arithmetic, string ops, control flow     | none — pure computation                           | all profiles
local_state        | fs, sqlite, memory, cache                | local disk / in-process state                     | sandbox_compute_and_store + above
network            | http, tools, queue                       | outbound network egress                           | sandbox_network_restricted + above
operator_sensitive | agent, svc, email, calendar, social, db  | external services, credentials, third-party APIs  | operator_full only
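
Forbidding a whole tier then reduces to one rank comparison. The mapping below follows the table above; the helper names are hypothetical:

```typescript
// Sketch: gate adapters by privilege tier instead of enumerating them per profile.
const TIER_OF: Record<string, string> = {
  core: "core",
  fs: "local_state", sqlite: "local_state", memory: "local_state", cache: "local_state",
  http: "network", tools: "network", queue: "network",
  agent: "operator_sensitive", svc: "operator_sensitive", email: "operator_sensitive",
  calendar: "operator_sensitive", social: "operator_sensitive", db: "operator_sensitive",
};

// Lowest profile rank at which each tier becomes available.
const TIER_MIN_RANK: Record<string, number> = {
  core: 0, local_state: 1, network: 2, operator_sensitive: 3,
};

const PROFILE_RANK: Record<string, number> = {
  local_minimal: 0, sandbox_compute_and_store: 1, sandbox_network_restricted: 2, operator_full: 3,
};

function tierAllows(profile: string, adapter: string): boolean {
  const tier = TIER_OF[adapter];
  // Unknown adapters are refused outright.
  return tier !== undefined && PROFILE_RANK[profile] >= TIER_MIN_RANK[tier];
}
```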

Operators

What operators control

AINL is designed to integrate cleanly into operator-controlled infrastructure. Here is what you own, and what the runtime owns.

Operator responsibility

  • Selecting and enforcing the security profile for each deployment
  • Container or process isolation (cgroups, seccomp, namespaces)
  • Network egress policy and HTTP host allowlists
  • Secrets injection — AINL never reads credentials from prompts or env vars implicitly
  • Identity and access management for who can deploy or modify workflows
  • Log aggregation, alerting, and SIEM integration
  • Dependency pinning and supply-chain controls for AINL tooling

What AINL enforces

  • Adapter allowlist — no undeclared adapter calls reach execution
  • Effect and privilege-tier restrictions per active profile
  • Hard step, depth, adapter-call, and wall-clock limits
  • Compile-time type and capability validation
  • Deterministic graph IR — same inputs produce same execution path
  • No implicit ambient access to host credentials or network
  • Structured execution trace for every run

Infrastructure

Website and hosted service security

Beyond the runtime model, ainativelang.com and any hosted AINL services are hardened at every layer.

Transport security

  • TLS 1.2+ enforced site-wide
  • HSTS with preload
  • No HTTP downgrade paths

HTTP security headers

  • Content-Security-Policy with per-request nonces
  • X-Frame-Options: DENY
  • Referrer-Policy: strict-origin-when-cross-origin
  • Permissions-Policy restricts camera, mic, geolocation

Network & perimeter

  • AWS WAF with managed rule sets (SQLi, XSS, known bad IPs)
  • AWS Shield Standard DDoS baseline
  • Rate limiting on all public API routes
  • Honeypot fields on all public forms

Input validation

  • Zod schema validation on every API route
  • Server-side re-validation independent of client
  • No raw SQL — parameterised queries or ORM only
  • File upload restrictions by type, size, and scan
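
The validation pattern above uses Zod on the real routes; as a dependency-free sketch of the same idea (server-side re-validation returning a safeParse-style result), assuming a hypothetical contact-form shape:

```typescript
// Sketch: server re-validates input regardless of client-side checks.
// ContactForm and parseContactForm are hypothetical; the real site uses Zod.
type Result<T> = { success: true; data: T } | { success: false; error: string };

interface ContactForm { email: string; message: string }

function parseContactForm(input: unknown): Result<ContactForm> {
  if (typeof input !== "object" || input === null) return { success: false, error: "not an object" };
  const o = input as Record<string, unknown>;
  if (typeof o.email !== "string" || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(o.email))
    return { success: false, error: "invalid email" };
  if (typeof o.message !== "string" || o.message.length === 0 || o.message.length > 5000)
    return { success: false, error: "invalid message" };
  // Never trust the client: this runs on the server even if the browser validated already.
  return { success: true, data: { email: o.email, message: o.message } };
}
```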

Secrets management

  • All secrets in AWS Secrets Manager — never in env vars or source
  • server-only package prevents secret leaks to client bundles
  • Secrets rotated on schedule; access logged

Supply chain

  • Automated dependency vulnerability scanning in CI
  • Pinned lockfiles; Dependabot alerts enabled
  • No unreviewed third-party scripts loaded client-side

Deployment guidance

Baseline hardening checklist

For teams deploying AINL in production environments.

Process & container

  • Run AINL runtime as a non-root user with minimal capabilities
  • Apply seccomp / AppArmor profiles where available
  • Use read-only root filesystems where workflows permit
  • Set explicit CPU, memory, and PID limits

Network

  • Block all inbound ports not required by the service
  • Enforce egress allowlists — do not allow arbitrary outbound HTTP
  • Use VPC / private networking; avoid public endpoints for internal adapters
  • Log all outbound connections for audit

Secrets & credentials

  • Never pass secrets through prompts, workflow inputs, or AINL variables
  • Inject credentials through secure environment or secret-store mounts only
  • Rotate service credentials on a defined schedule
  • Scope API keys to minimum required permissions

Pipelines & supply chain

  • Pin AINL tooling versions in CI; verify checksums
  • Review generated artifacts before deploying to production
  • Keep dependencies updated; subscribe to security advisories
  • Enforce code review on all workflow changes, including LLM-authored ones

Found a vulnerability?

We take security reports seriously. Please disclose privately so we can investigate and ship a fix before public disclosure.