Audience: Individual developers, early adopters, indie hackers
The problem
Maintaining a smart X/Twitter promoter the traditional way means: Tweepy + LangChain/LangGraph, an LLM call on every poll, retry logic scattered across Python files, and a SQLite dedupe table you hope doesn't drift. Every 45-minute cron tick costs tokens whether there's anything interesting to do or not.
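For a sense of what that hand-rolled state layer looks like, here is a minimal sketch of the SQLite dedupe table and `since_id` cursor the prose describes. The table and function names are invented for illustration; they are not part of any real bot's schema:

```python
import sqlite3

def open_state(path=":memory:"):
    """Create the cursor + dedupe tables the hand-rolled approach needs."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS cursor (k TEXT PRIMARY KEY, since_id TEXT)")
    db.execute("CREATE TABLE IF NOT EXISTS seen (tweet_id TEXT PRIMARY KEY)")
    return db

def advance_cursor(db, query, new_since_id):
    """Remember the newest tweet id seen for a query (upsert)."""
    db.execute(
        "INSERT INTO cursor (k, since_id) VALUES (?, ?) "
        "ON CONFLICT(k) DO UPDATE SET since_id = excluded.since_id",
        (query, new_since_id),
    )

def is_new(db, tweet_id):
    """True exactly once per tweet id; later calls return False."""
    try:
        db.execute("INSERT INTO seen (tweet_id) VALUES (?)", (tweet_id,))
        return True
    except sqlite3.IntegrityError:
        return False
```

Keeping this correct across retries, schema changes, and concurrent runs is exactly the drift risk the paragraph above alludes to.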
The AINL approach
The apollo-x-bot ships `ainl-x-promoter.ainl` — a single strict AINL graph that handles the full loop:
- Incremental search with a `since_id` cursor stored in SQLite (never re-processes seen tweets)
- LLM classification + fast keyword gating so most runs never hit the model at all
- Reply and rate-limit policy baked into the graph, not scattered across helpers
- Deduplication enforced at compile time — the graph can't accidentally double-post
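The keyword gate in the second bullet can be approximated in plain Python. This is a hypothetical illustration (the keyword list and function name are mine, not AINL's actual implementation):

```python
KEYWORDS = ("ainativelang", "#ainl")  # hypothetical fast-path terms

def keyword_gate(tweets):
    """Cheap lexical prefilter: only tweets that mention a keyword
    are forwarded to the (expensive) LLM classifier."""
    return [t for t in tweets if any(k in t.lower() for k in KEYWORDS)]

batch = [
    "Trying out ainativelang for my bot",
    "Unrelated lunch tweet",
    "Loving #AINL so far",
]
hits = keyword_gate(batch)
# an empty `hits` list means the run never invokes the model at all
```

Because the gate is pure string matching, a quiet poll costs one search call and zero tokens.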
```
# apollo-x-bot/ainl-x-promoter.ainl (excerpt)
S core cron "*/45 * * * *"
include "modules/common/retry.ainl" as retry

L_search:
R x.search "ainativelang OR #AINL" ->results
If results ->L_classify ->L_exit

L_classify:
R llm.classify results ->scored
If scored ->L_reply ->L_exit

L_reply:
R x.reply scored ->ack
J ack

L_exit:
Ret "ok"
```
The LLM is only invoked when there are fresh, unclassified results — not on every tick.
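Read as ordinary control flow, the excerpt above is roughly equivalent to this Python sketch. The `x_search`, `classify_with_llm`, and `post_reply` callables are stand-ins for illustration, not a real API:

```python
def run_once(x_search, classify_with_llm, post_reply):
    """One cron tick: each stage exits early when it has nothing to do,
    so the LLM stage is reached only when there are fresh results."""
    results = x_search("ainativelang OR #AINL")   # L_search
    if not results:
        return "ok"                               # -> L_exit, no LLM call
    scored = classify_with_llm(results)           # L_classify
    if not scored:
        return "ok"                               # -> L_exit, nothing worth replying to
    post_reply(scored)                            # L_reply
    return "ok"
```

The early returns are the whole point: on a quiet poll the function falls through to `"ok"` before the model is ever touched.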
Outcome
- Tokens spent per run: near zero on quiet polls; the LLM runs only on genuinely new content
- Control flow: auditable via the JSONL execution tape (`--trace-jsonl`)
- Schedule: OpenClaw cron `*/45 * * * *` or ZeroClaw equivalent (setup guide)
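A JSONL execution tape is just one JSON object per line, so auditing it needs nothing beyond the standard library. The event field names below (`node`) are invented for this sketch and may not match AINL's actual trace schema:

```python
import json

def summarize_trace(lines):
    """Count events per graph node in a JSONL tape (one JSON object per line)."""
    counts = {}
    for line in lines:
        event = json.loads(line)
        node = event.get("node", "?")
        counts[node] = counts.get(node, 0) + 1
    return counts

tape = [
    '{"node": "L_search", "ok": true}',
    '{"node": "L_exit", "ok": true}',
]
summarize_trace(tape)  # {"L_search": 1, "L_exit": 1}
```

Because each line stands alone, the tape can be tailed, grepped, or diffed across runs without a parser for the whole file.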
Try it
Full walkthrough: How I Built a Production X/Twitter Bot in 100 Lines of AINL
```sh
pip install ainativelang
git clone https://github.com/sbhooley/ainativelang.git
cd ainativelang/apollo-x-bot
ainl check ainl-x-promoter.ainl --strict
```
