[RFC] OpenCLI Channel — Event Subscription Protocol for CLI Adapters
Summary
Introduce a Channel protocol to OpenCLI, enabling adapters to expose event subscriptions alongside existing request/response commands. This allows AI agents (and any downstream consumer) to react to platform changes in real time — not just query them on demand.
Motivation
OpenCLI today is a powerful read/write interface to the web: opencli twitter post, opencli notion read, opencli gh pr list. But the interaction model is always pull — the agent asks, the platform answers.
What's missing is the reverse direction: the platform has something new → the agent gets notified → the agent acts.
Real-world scenarios that are impossible today:
- You write a Notion doc with an AI agent. You add a comment asking for changes. The agent doesn't know until you manually tell it.
- A reviewer leaves comments on your GitHub PR. The agent can't pick them up and auto-fix without a human copy-pasting.
- A GitLab MR gets a new thread. The agent that wrote the code never sees it.
These are all the same pattern: a platform event that should trigger agent action, but has no delivery path.
Design
The Unix Analogy
This is fetchmail for APIs.
fetchmail is a daemon that polls remote mailboxes, tracks what it already fetched (cursor), and delivers new messages to a local MDA. Replace "mailbox" with "GitHub Events API" and "MDA" with "agent wake", and you have the Channel protocol.
In Unix terms, Channel fills the gap that pipes can't: time (when to poll) and memory (where we left off).
Architecture
┌─ opencli-channel (standalone daemon) ─────────────────┐
│ │
│ Sources (per-adapter extensions): │
│ ├── github.ts → poll GitHub Events/Notifications │
│ ├── gitlab.ts → poll GitLab Events API │
│ ├── notion.ts → poll Notion Comments/Changes │
│ ├── twitter.ts → poll via opencli twitter timeline │
│ └── ... → community-contributed │
│ │
│ Core: │
│ ├── Scheduler → manages per-source poll intervals │
│ ├── Cursor Store → persists position per subscription│
│ ├── Dedup → idempotent event delivery │
│ └── Event Router → matches events to sinks │
│ │
│ Sinks (output plugins): │
│ ├── webhook → POST to any URL │
│ ├── stdout → JSON lines (pipe-friendly) │
│ ├── openclaw → wake API / systemEvent injection │
│ └── ... → any agent framework can add sinks │
│ │
└────────────────────────────────────────────────────────┘
Three clear boundaries:
- Sources know how to poll a specific platform API and extract events
- Core knows scheduling, state, dedup, routing — nothing about platforms or consumers
- Sinks know how to deliver events to a specific consumer — nothing about platforms
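The three boundaries can be sketched as a minimal Core loop. This is illustrative only — type and class names here are hypothetical, not proposed API — but it shows how Core can own scheduling state, dedup, and routing while staying ignorant of both platforms and consumers:

```typescript
// Minimal sketch of the Core loop: it knows cursor state, dedup, and
// routing, but nothing about platforms (Sources) or consumers (Sinks).
// All names are illustrative stand-ins for the RFC's contracts.

type Evt = { id: string; payload: unknown };

interface Source { poll(cursor: string | null): { events: Evt[]; cursor: string } }
interface Sink { deliver(events: Evt[]): void }

class Core {
  private cursor: string | null = null;
  private seen = new Set<string>();        // dedup by event id

  constructor(private source: Source, private sink: Sink) {}

  // One poll cycle: fetch since cursor, drop already-seen events,
  // advance the cursor, hand fresh events to the sink.
  tick(): number {
    const { events, cursor } = this.source.poll(this.cursor);
    this.cursor = cursor;
    const fresh = events.filter(e => !this.seen.has(e.id));
    fresh.forEach(e => this.seen.add(e.id));
    if (fresh.length > 0) this.sink.deliver(fresh);
    return fresh.length;
  }
}
```

The point of the sketch: idempotent delivery falls out of a single `seen` check in Core, so no source or sink has to implement dedup itself.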
Event Schema
Every source emits a unified event envelope:
{
"id": "evt_abc123",
"source": "github",
"type": "pull_request_review_comment.created",
"timestamp": "2026-03-24T17:30:00Z",
"subscription": "my-project-reviews",
"payload": {
"repo": "user/repo",
"pr": 42,
"author": "reviewer",
"body": "This function needs error handling",
"path": "src/handler.ts",
"line": 15
}
}

Fields:
- id — globally unique, used for dedup
- source — which adapter produced it
- type — platform-specific event type (dot-namespaced)
- timestamp — when the event occurred on the platform
- subscription — which subscription matched this event
- payload — platform-specific data, structured by the source adapter
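In TypeScript terms the envelope could be typed as follows — a sketch that mirrors the JSON example and field list above, using the `ChannelEvent` name the adapter contracts in this RFC already refer to:

```typescript
// Unified event envelope, mirroring the JSON example above.
interface ChannelEvent {
  id: string;            // globally unique, used for dedup
  source: string;        // which adapter produced it, e.g. "github"
  type: string;          // platform-specific event type, dot-namespaced
  timestamp: string;     // ISO 8601, when the event occurred on the platform
  subscription: string;  // which subscription matched this event
  payload: Record<string, unknown>;  // platform-specific, shaped by the source
}
```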
Subscription Declaration
# ~/.opencli-channel/subscriptions.yaml
subscriptions:
- name: my-project-reviews
source: gitlab
config:
project: myorg/myproject
events: [note, merge_request]
interval: 60s
sink: webhook
sink_config:
url: http://localhost:3000/wake
- name: notion-doc-comments
source: notion
config:
pages: ["page-id-1", "page-id-2"]
events: [comment.created]
interval: 30s
sink: stdout

CLI Interface
# Daemon lifecycle
opencli channel start # start daemon (foreground)
opencli channel start -d # start daemon (background)
opencli channel stop # stop daemon
opencli channel status # show running subscriptions & stats
# Subscription management
opencli channel add <name> --source github --config '...' --interval 60s --sink webhook
opencli channel remove <name>
opencli channel list # list all subscriptions
opencli channel logs [name] # tail event log
# One-shot (no daemon, for cron/debugging)
opencli channel poll <name> # poll once, print new events to stdout
opencli channel poll <name> --since <ts>  # poll from specific timestamp

Source Adapter Contract
A source adapter is a TypeScript module that exports:
interface ChannelSource {
name: string;
// Initialize with adapter-specific config
init(config: Record<string, any>): Promise<void>;
// Poll for new events since cursor
// Returns events + new cursor position
poll(cursor: string | null): Promise<{
events: ChannelEvent[];
cursor: string;
}>;
// Optional: recommended poll interval (server-driven, e.g. GitHub X-Poll-Interval)
recommendedInterval?(): number | null;
}

Sink Adapter Contract
interface ChannelSink {
name: string;
init(config: Record<string, any>): Promise<void>;
deliver(events: ChannelEvent[]): Promise<void>;
}

Cursor Store
Default: ~/.opencli-channel/cursors.json
{
"my-project-reviews": {
"cursor": "2026-03-24T17:30:00Z",
"last_poll": "2026-03-24T17:31:02Z",
"events_delivered": 142
}
}

Simple, inspectable, portable. Can be swapped for SQLite via config if needed.
Why Not Use Existing Tools?
| Tool | Gap |
|---|---|
| Hookdeck Outpost | Outbound only — delivers events from you, doesn't poll events for you |
| Trigger.dev / Inngest | Workflow engines — handles "what to do after event", not "how to get events" |
| n8n / Zapier | UI-first, not CLI-native, not designed for agent consumption |
| Telegraf / Fluentd | Same architecture, but for metrics/logs — not API events |
| NATS / Redis Streams | Transport layer only — no polling, no platform adapters |
Relationship with OpenCLI
Channel is complementary to OpenCLI's existing adapter model:
- OpenCLI adapters = agent → platform (read/write)
- Channel sources = platform → agent (events)
Channel sources can reuse OpenCLI's browser session and adapter infrastructure where applicable. For example, a Twitter channel source could internally call opencli twitter notifications to detect new mentions.
The daemon can live as:
- A subcommand of OpenCLI itself (opencli channel ...)
- A separate package that imports OpenCLI as a dependency
- Both — built-in basic support + standalone for advanced use
Option 1 (subcommand) is simplest for users and aligns with OpenCLI's "universal CLI hub" vision.
Reference Implementation
GitHub as the first source adapter:
- Uses GitHub Events API (/repos/{owner}/{repo}/events)
- Respects X-Poll-Interval header and ETag/If-None-Match for efficient polling
- Supports filtering by event type
- Natural cursor: event ID + timestamp
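The conditional-polling handshake is worth isolating as pure functions so the HTTP layer stays trivial. The header names (ETag, If-None-Match, X-Poll-Interval) are GitHub's documented ones; the function names are hypothetical:

```typescript
// GitHub's Events API supports conditional polling: echo the last ETag
// back as If-None-Match (a 304 response means nothing new), and honor
// the server-driven X-Poll-Interval header for the next poll delay.

interface PollState {
  etag: string | null;      // last seen ETag
  intervalSeconds: number;  // delay before the next poll
}

// Headers to send on the next poll request.
function requestHeaders(state: PollState): Record<string, string> {
  const headers: Record<string, string> = { Accept: "application/vnd.github+json" };
  if (state.etag) headers["If-None-Match"] = state.etag;
  return headers;
}

// Fold a response's headers into the poll state.
function nextState(
  state: PollState,
  responseHeaders: { etag?: string; "x-poll-interval"?: string },
): PollState {
  return {
    etag: responseHeaders.etag ?? state.etag,
    intervalSeconds: responseHeaders["x-poll-interval"]
      ? parseInt(responseHeaders["x-poll-interval"], 10)
      : state.intervalSeconds,
  };
}
```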
stdout as the first sink:
- JSON lines to stdout, pipe-friendly
- Zero dependencies, works everywhere
Open Questions
- Auth: Should Channel manage its own tokens, or delegate to OpenCLI's session/token management?
- Backpressure: What happens when a sink is down? Queue locally? Drop? Retry with backoff?
- Multi-tenancy: One daemon serving multiple agents/users, or one daemon per user?
- Webhook sources: Some platforms push events (GitHub Webhooks). Should Channel also accept inbound webhooks alongside polling? (This would require a listener port.)
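On the backpressure question, one candidate answer (purely illustrative — the RFC deliberately leaves this open) is retry with exponential backoff before falling back to a local queue or dropping:

```typescript
// One possible backpressure policy: retry delivery with exponential
// backoff; if retries are exhausted, the caller decides (queue or drop).
// All names are hypothetical, not part of the proposed surface.

function backoffDelays(baseMs: number, maxRetries: number, capMs: number): number[] {
  // e.g. base 1s, cap 4s, 5 retries -> 1s, 2s, 4s, 4s, 4s
  return Array.from({ length: maxRetries }, (_, i) =>
    Math.min(baseMs * 2 ** i, capMs));
}

async function deliverWithRetry<T>(
  events: T[],
  deliver: (events: T[]) => Promise<void>,
  delays: number[],
): Promise<boolean> {
  for (const delay of [0, ...delays]) {
    if (delay > 0) await new Promise(r => setTimeout(r, delay));
    try {
      await deliver(events);
      return true;  // delivered
    } catch {
      // sink is down; fall through to the next backoff step
    }
  }
  return false;     // retries exhausted; caller queues or drops
}
```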
Next Steps
- Discuss this RFC — feedback on architecture, naming, scope
- Implement core + GitHub source + stdout sink as proof of concept
- Add OpenCLI integration (subcommand or separate package)
- Community contributes additional source adapters
Inspired by fetchmail's model, Unix pipe philosophy, and the emerging need for AI agents to participate in bidirectional workflows across platforms.