Open-source orchestration for zero-human companies · v1.0.5

Run multiple AI agents. Without babysitting any of them.

Coordinate Claude, Codex, and Cursor in parallel — with retry, git isolation, and state tracking. Set a goal at 10pm. Wake up to pull requests.

10 ready-made teams · 5 AI tool adapters · 0 cloud dependencies
zsh — my-project
$ orch org deploy startup-mvp --goal "Build auth module"
  ✓ Deployed team — 5 agents
  ✓ CTO decomposed goal → 6 tasks
 
$ orch run --all --watch
  ● Running — 5 agents dispatched
  Backend A implementing OAuth2 [feature/oauth]
  Backend B JWT token service [feature/jwt]
  ✉ Backend A QA "Auth module ready for testing"
  ✓ 6 PRs merged → main · $4.20 · you've been asleep since 22:05
v1.0.5 — MIT
5 AI Tool Adapters · 10 Ready-Made Teams · 0 Cloud Dependencies · 30s To First Run
Why ORCH? Unlike parallel launchers, ORCH agents talk to each other, share context, track work through a formal state machine, and run 24/7 as a daemon.

Works with the tools you already use

Claude (Anthropic) · native
OpenCode (Multi-provider) · native
Codex (OpenAI) · native
Cursor (Cursor CLI) · native
Shell (Any CLI Tool) · universal
MIT licensed · 97 source files · 100% TypeScript · Zero runtime deps

The coordination problem

One agent is a tool.
A coordinated team is a superpower.

Without orchestration → With ORCH

Manual juggling (productivity killer)
You copy-paste output between AI tools manually. Context gets lost. Mistakes compound.
→ Automatic context flow (coordinated)
Agents message each other directly. Context flows through a shared store and prompt injection.

Agents in silos (wasted effort)
Parallel agents overwrite each other's files. Merge conflicts pile up. Work gets lost.
→ Worktree isolation (safe parallelism)
Each agent gets its own git worktree: parallel execution without file conflicts, plus auto merge-back.

Zero governance (high risk)
No review process. No state tracking. A failed run leaves broken state with no way back.
→ State machine governance (auditable)
Every task flows through todo → in_progress → review → done. Nothing gets lost.

Cloud lock-in (vendor dependency)
Most tools require cloud accounts and send your code to external servers. No control over your data.
→ Zero cloud, fully local (private)
Everything runs on your machine. State is stored in .orchestry/ as YAML, JSON, and JSONL. Git clone and run.

Constant babysitting (time sink)
You wait for each AI run to finish. No parallel execution. No automatic retries when things fail.
→ Autonomous execution (automated)
The orchestrator dispatches, monitors, retries with backoff, and merges — unattended. You review the results.

Close laptop, agents die (fragile)
Agents only run while your terminal is open. Close the lid at night — everything stops. No CI/CD integration.
→ Daemon runs 24/7 (always on)
orch serve runs headless on any server: structured JSON logs, graceful shutdown, pm2/systemd ready. Agents work while you sleep.

See for yourself

From install to agents working in 30 seconds

zsh — my-project
$ npm install -g @oxgeneral/orch
  ✓ orchestry installed
$ cd ~/my-project && orch
  ✓ initialized · Created .orchestry/
  → Launching TUI dashboard…
$ orch task add "Implement auth module" -p 1
  ✓ Created tsk_a1b — priority 1
$ orch run --all --watch
  ● Running — 3 agents active · scope isolation · auto merge-back
    backend-a: implementing auth module [feature/auth]
    qa-a: writing test suite [test/auth]
    reviewer: waiting for review tasks
$ orch task review tsk_a1b --approve
  ✓ Task approved · merging [feature/auth] → main
  # or run headless on a server:
$ orch serve --log-file orch.jsonl
  ● Watching — daemon mode · JSON logs · Ctrl+C to stop

Your agents, one dashboard

Real-time TUI — see what every agent is doing, track token costs, review results. Tasks, agents, goals — all in one terminal.

TUI Dashboard

Ready to stop babysitting your AI agents? One command to install. One command to run.

Run Your First Multi-Agent Goal

Built for
real projects, not demos

13 features · production-grade
Strategy (new)

Set goals. Agents execute.

Set a goal — your CTO agent decomposes it into tasks, assigns to departments, and tracks progress. You set direction. AI executes.

  • Agent Teams — group agents under a lead, broadcast context, coordinate work
  • Inter-agent Messaging — direct messages, broadcasts, injected into prompts at dispatch
  • Shared Context — key-value store readable by all agents, LiquidJS templates
Departments (new)

Teams with roles. Not just agent names.

CTO, Backend, QA, Reviewer — organized in teams with leads, shared task pools, and messaging. Each agent knows its role.

  • Goals & Autonomy — define goals, agents generate and execute tasks autonomously
  • Reactive Dispatch — sub-second task pickup, no polling, events trigger agents immediately
  • Smart Retries — exponential backoff, stall detection, zombie cleanup
Daemon (new)

Runs 24/7. Even while you sleep.

orch serve — headless daemon mode. Structured JSON logs for Datadog, Grafana, or jq. Deploy with pm2 or systemd. Add tasks from any terminal — daemon picks them up.

  • Watch mode — runs indefinitely, picks up new tasks on every tick
  • Once mode — process all tasks and exit with code 0/1 — perfect for CI/CD
  • Graceful shutdown — SIGINT/SIGTERM → waits for agents → saves state → releases lock
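Because the daemon emits line-delimited JSON, standard tools can slice the stream. A minimal sketch with jq over made-up sample lines — the field names here (ts, level, agent, msg) are assumptions for illustration, not ORCH's documented log schema:

```shell
# Sample log lines in the shape a structured JSONL logger might emit
# (field names are illustrative, not ORCH's documented schema).
cat > orch-sample.jsonl <<'EOF'
{"ts":"2025-01-01T22:05:01Z","level":"info","agent":"backend-a","msg":"task picked up"}
{"ts":"2025-01-01T22:07:14Z","level":"error","agent":"qa-a","msg":"test run failed"}
{"ts":"2025-01-01T22:09:30Z","level":"info","agent":"backend-a","msg":"merged to main"}
EOF

# Show only errors, one line per event.
jq -r 'select(.level == "error") | "\(.ts) \(.agent): \(.msg)"' orch-sample.jsonl
```

The same filter works against a live daemon with tail -f piped into jq.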
Safety

Nothing Ships Without Your Approval.

Every task flows through the state machine. Every agent isolated on its own branch. Every change reviewed before merge. You're the final gate.

  • State Machine — todo → in_progress → review → done
  • Worktree Isolation — each agent gets its own git worktree, parallel without conflicts
  • Auto Merge-back — agent finishes, changes merge to main, no manual git juggling
Developer Experience

Zero Infrastructure. Just npm install.

No database. No cloud. No Docker. No signup. State in .orchestry/ — YAML, JSON, JSONL. Git clone and you're running.

  • TUI Dashboard — live tasks, agent activity, token usage, keyboard-driven
  • Rework Loop — reject with feedback, the agent retries with your notes
  • Zero Infrastructure — no cloud, no DB, YAML + JSON in .orchestry/

Only in ORCH

Agent teams, not just parallel execution

Keep using Claude and Cursor — now they're organized into teams. CTO routes work, Backend builds, QA tests, Reviewer checks. They message each other, share context, and self-coordinate.

CTO (lead) · Backend · QA · Reviewer
  • Team leads — assign a lead agent who routes work and resolves conflicts
  • Direct messaging — agents send targeted instructions to specific teammates
  • Broadcasts — push announcements to the full team or scoped groups
  • Mailbox delivery — messages land in agent prompts at dispatch time
  • Shared context store — key-value pairs any agent can read or write
  • Automatic work distribution — teams claim tasks from a shared pool
Agent Teams CLI
# Create a team with a lead
$ orch team create platform --lead architect
  ✓ Created team "platform" → team_k2m

# Add a member
$ orch team join team_k2m backend
  ✓ Agent backend joined team platform

# Direct message
$ orch msg send backend "Use PostgreSQL for the new schema"
  ✓ Message sent → backend

# Broadcast to team
$ orch msg broadcast "API v2 spec is ready" --team team_k2m
  ✓ Broadcast sent to 3 agent(s)

# Share context
$ orch context set db_schema "users(id,email,role)"
  → Shared with all agents

How it works

Four steps to your first multi-agent run

01

Install

One package, no dependencies. Ready in seconds.

npm i -g @oxgeneral/orch
02

Define work

Add tasks with scopes, priorities, and agent assignments — or let ORCH auto-assign.

orch task add
03

Run

Agents execute in parallel on isolated worktrees, messaging and sharing context.

orch run --all --watch
04

Review & ship

Approve, reject with feedback, or let auto merge-back close the loop.

orch task review --approve

What people build with ORCH

Startup

Weekend MVP Sprint

Define your vision as a goal. ORCH spins up backend, frontend, QA, and reviewer agents that build your API, UI, tests, and landing page in parallel. Ship a tested MVP in 48 hours, not 3 weeks.

Agents: Backend A/B, Front-End, QA, Reviewer, Marketer
Engineering Team

Sprint Backlog Blitz

Load 18 sprint tasks with dependencies. ORCH dispatches agents across isolated worktrees, respects ordering, retries failures, and auto-merges. Engineers wake up to draft PRs, not empty boards.

Agents: Backend A/B, Front-End, QA A/B, Reviewer
Migration

JS → TypeScript at Scale

Agents convert modules in parallel, each in its own worktree. QA runs tsc --noEmit after each merge. Reviewer rejects any module that still uses the any type. Main branch stays green at every step.

Agents: CTO, Backend A/B, QA, Reviewer
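A reviewer gate of this kind can be approximated with plain grep; a rough sketch over an illustrative file (a real setup would lean on tsc or a lint rule such as @typescript-eslint/no-explicit-any rather than a text match):

```shell
# Illustrative converted module with one leftover explicit any.
cat > user-service.ts <<'EOF'
export function getUser(id: string): { id: string; email: string } {
  const cache: any = {};            // should be rejected in review
  return cache[id] ?? { id, email: "" };
}
EOF

# Reject the change if an explicit any annotation survives the conversion.
if grep -n ': any' user-service.ts; then
  echo "review: rejected (explicit any found)"
else
  echo "review: approved"
fi
```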
DevOps

Automated PR Review Pipeline

Four agents review every PR in parallel: security scanning, performance analysis, style enforcement, and test coverage. A CTO agent synthesizes a single merge verdict in under 10 minutes.

Agents: Security, Performance, Style, QA, CTO
Creative

Product Launch War Room

Ship code and content simultaneously. Engineering team closes features while Marketing team writes blog posts, social threads, and docs — all from the same goal, with agents sharing context across teams.

Agents: Backend, Front-End, Content Creator, Marketer, Growth Hacker
Security

Multi-Layer Security Scanning

Chain SAST, dependency audit, and secret detection into one pipeline. Agents correlate findings across layers, deduplicate, assign severity, and auto-fix high-priority items in isolated worktrees.

Agents: Shell (Semgrep, Trivy, Gitleaks), Bug Hunter, Reviewer
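The correlate-and-deduplicate step could look like a single jq pass over merged scanner output. The JSON shape below is invented for illustration — real Semgrep, Trivy, and Gitleaks reports each use their own schemas:

```shell
# Findings from two layers that flagged the same file and rule
# (the shape is illustrative, not any scanner's real report format).
cat > findings.json <<'EOF'
[
  {"file":"src/auth.ts","rule":"hardcoded-secret","tool":"gitleaks","severity":"high"},
  {"file":"src/auth.ts","rule":"hardcoded-secret","tool":"semgrep","severity":"high"},
  {"file":"package.json","rule":"vulnerable-dep","tool":"trivy","severity":"medium"}
]
EOF

# Deduplicate by file+rule, keeping the list of tools that agree.
jq 'group_by(.file + .rule)
    | map({file: .[0].file, rule: .[0].rule, severity: .[0].severity,
           tools: map(.tool)})' findings.json
```

Findings confirmed by more than one tool end up with multiple entries in tools — a cheap signal for prioritizing fixes.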
Open Source

Contributor PRs at Scale

25 open PRs from first-time contributors? ORCH checks CLA, runs tests, reviews logic, and posts structured feedback within hours. You focus only on PRs that are ready for a human judgment call.

Agents: Shell (gh), QA, Reviewer, CTO
QA

Test Coverage Blitz

From 40% to 80% coverage without a dedicated sprint. Agents claim uncovered modules, generate meaningful tests in parallel, and QA rejects tautological assertions. You review, not write.

Agents: Shell (c8), Backend A/B, QA A/B, Reviewer
CI/CD

Headless Agent Pipeline

Deploy orch serve on a VPS with pm2. Push tasks via CLI from your laptop. Agents execute 24/7, structured JSON logs stream to Grafana. Wake up to completed PRs, not stale terminals.

Mode: orch serve --log-file /var/log/orch.jsonl
Data

Analytics Pipeline

Drop three CSVs, get an executive report by morning. Shell agents clean data, DuckDB joins and computes KPIs, matplotlib generates charts, and a Content Creator writes the narrative with anomaly callouts.

Agents: Shell (pandas, duckdb, matplotlib), Content Creator

Frequently asked

Questions

Is ORCH free?
ORCH is open source under MIT license. You pay only for the AI APIs you already use (Claude, Codex, etc.). Example: 5 agents, 6 tasks, $4.20 in tokens. The TUI shows costs per agent in real time.
What AI models does it support?
ORCH ships with 5 adapters: Claude (Anthropic), OpenCode (multi-provider via OpenRouter — Gemini, DeepSeek, and more), Codex (OpenAI), Cursor, and a universal Shell adapter that works with any CLI tool. If it takes a prompt and returns output, ORCH can orchestrate it.
Will agents mess up my codebase?
Each agent runs in an isolated git worktree on its own branch. Changes go through a review step before merging. You approve or reject every change.
Does my code leave my machine?
Only when communicating with AI APIs (same as using any AI tool directly). ORCH itself stores everything locally in .orchestry/ — no telemetry, no cloud state, no external dependencies.
How is this different from running Claude/Cursor directly?
ORCH coordinates multiple agents simultaneously: state machine tracking, inter-agent messaging, shared context, automatic retries, worktree isolation, and merge-back. It's the orchestration layer on top of your existing AI tools.
Can I use it with my existing project?
Yes. Run orch in any git repository. It creates a .orchestry/ directory and you're ready to go. No configuration required — sensible defaults and pre-configured agent templates included.
Can I run agents 24/7 on a server?
orch serve runs the orchestrator as a headless daemon — no TUI required. Structured JSON logs to stdout for Datadog, Grafana Loki, or jq. Deploy with pm2 or systemd. Add tasks from another terminal — the daemon picks them up on the next tick. Use --once for CI/CD pipelines (exit 0 = all done, exit 1 = failures).
What happens when an agent fails?
ORCH retries with exponential backoff. If an agent stalls, the process is killed and the task is re-queued. Failed runs preserve full event logs. No state is lost.
Stop babysitting. Start shipping.

Coordinate your agents
in 30 seconds

One command to install. One command to run. First PRs in 15 minutes.

$ npm install -g @oxgeneral/orch
then run
$ orch
MIT licensed · 5 adapters · 0 cloud deps · 24/7 daemon mode