Autonomous Development for Vercel with Warp Coder
You write the issue. The agent writes the code, opens a PR, reviews it, merges it, and deploys to Vercel. This guide walks through the full setup — every step, every config field, no hand-waving.
By the end you'll have @warpmetrics/coder polling a GitHub Projects board, picking up issues, implementing them with Claude Code, and pushing production deploys to Vercel. The whole loop runs unattended. You stay in control through the board: move issues to Todo when you want work done, reply to comments when the agent asks questions, and approve deploys when you're ready.
What you'll build
The pipeline has six stages. Each one maps to a column on your GitHub Projects board:
- Implement — Agent reads the issue, clones your repo, writes code, runs tests, opens a PR
- Review — A separate AI pass reviews the diff for bugs, security issues, and style violations
- Revise — If the reviewer requests changes, the agent applies fixes and re-submits (up to 3 cycles)
- Merge — Squash-merges the PR and deletes the branch
- Deploy — Runs `vercel deploy --prod` against your project
- Release — Generates changelog entries and marks the issue as done
Every step is tracked in WarpMetrics so you can see costs, durations, success rates, and failure patterns across all your agent runs.
Prerequisites
You need five tools installed and authenticated before starting:
```bash
# Check what you already have
node --version    # 18+
git --version     # 2.30+
gh --version      # 2.24+
claude --version  # 1.0+
vercel --version  # latest
```

Install anything missing:
```bash
# Node.js — https://nodejs.org or use nvm/fnm
nvm install 22

# GitHub CLI
brew install gh           # macOS
# https://cli.github.com  # other platforms

# Claude Code (requires Anthropic API key or Claude Max subscription)
npm install -g @anthropic-ai/claude-code

# Vercel CLI
npm install -g vercel
```

Step 1 — Authenticate your tools
GitHub CLI
```bash
gh auth login
```

Select GitHub.com, HTTPS, and Login with a web browser. Then add the scopes the agent needs:
```bash
gh auth refresh --scopes project,repo,read:org
```

Verify the scopes are present:
```bash
gh auth status
```

Look for `project`, `repo`, and `read:org` in the output.
Claude Code
```bash
claude
```

Follow the interactive prompts. Once authenticated, test it:
```bash
claude -p "Say hello"
```

Vercel
```bash
vercel login
```

Then link your project so the CLI knows which Vercel project to deploy:
```bash
cd ~/your-project
vercel link
```

This creates a `.vercel/` directory with your project and org IDs. The agent needs this to deploy without prompts.
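If you're curious what linking stored, it's small. A sketch of the `.vercel/project.json` it writes (the IDs here are placeholders):

```json
{
  "projectId": "prj_xxxxxxxxxxxxxxxx",
  "orgId": "team_xxxxxxxxxxxxxxxx"
}
```

Don't commit this directory; `vercel link` adds it to `.gitignore` for you.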
Step 2 — Create a GitHub Projects board
The agent uses a GitHub Projects v2 board to track state. Each column is a stage in the pipeline.
- Go to your GitHub org (or personal profile) → Projects → New project
- Choose Board layout
- Name it something like Agent Pipeline
Add these columns
Click + on the right side of the board to add columns. Name them exactly:
| Column | What it means |
|---|---|
| Todo | Agent picks up issues from here |
| In Progress | Agent is writing code |
| In Review | PR open, AI review in progress |
| Deploy | Merged, waiting for deploy |
| Done | Deployed and released |
| Blocked | Something failed — needs your attention |
| Waiting | Agent asked a question — reply on the issue |
Find your project number
Open the board in your browser. The number is in the URL:
```
https://github.com/orgs/acme/projects/7
                                      ^
                               project number
```

You'll need this during setup.
Step 3 — Get your API keys
WarpMetrics
WarpMetrics tracks every agent run: costs, durations, outcomes, and the full state machine history.
- Sign up at warpmetrics.com
- Create a project
- Go to Settings → API Keys and create one
The key starts with `wm_live_`. Keep it handy.
GitHub Personal Access Token
The agent needs a token to read issues, push branches, open PRs, and merge code.
- Go to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens
- Click Generate new token
- Scope it to your target repositories with these permissions:
- Contents — Read and write
- Issues — Read and write
- Pull requests — Read and write
- Projects — Read and write
- Metadata — Read-only
- Copy the token
Tip: Create a second token (or a bot account) for the review step. That way the implementer and reviewer show up as different authors on the PR, which makes the history cleaner and avoids GitHub's "author can't approve their own PR" restriction.
Vercel token (for non-interactive deploys)
The agent runs deploys in a subprocess, so it can't open a browser to authenticate. Generate a token:
- Go to vercel.com/account/tokens
- Create a token with deploy permissions for your project
Step 4 — Prepare your Vercel project
Make sure you can deploy from the command line before involving the agent. If this doesn't work, the agent won't be able to deploy either.
```bash
cd ~/your-project
vercel deploy --prod
```

If that succeeds, add a deploy script to your package.json:
```json
{
  "scripts": {
    "deploy:prod": "vercel deploy --prod --yes --token=$VERCEL_TOKEN"
  }
}
```

The `--yes` flag skips interactive confirmation. The `--token` flag authenticates without a browser.
Vercel handles the build automatically. When you run `vercel deploy`, it uploads your source and runs whatever build command is configured in your Vercel project settings. You don't need a separate build step.
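The build command normally lives in your Vercel project settings, but if you'd rather pin it in the repo, a `vercel.json` can override it. A minimal sketch (values are examples for a Next.js app):

```json
{
  "buildCommand": "npm run build",
  "outputDirectory": ".next"
}
```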
Step 5 — Initialize Warp Coder
Create a working directory
Warp Coder runs from its own directory — not inside your repo. It clones repos into subdirectories as needed.
```bash
mkdir ~/warp-agent && cd ~/warp-agent
```

Run the setup wizard
```bash
npx @warpmetrics/coder init
```

The wizard walks through:
- GitHub CLI scopes — Verifies `project` and `repo` are present
- WarpMetrics API key — Paste your `wm_live_` key
- Review token — Paste a second GitHub token (or skip to use the same one)
- Repository URLs — Enter your SSH URLs, e.g. `git@github.com:acme/website.git`
- Project board — Enter the project number and org/user owner
- Column mapping — Confirms which board columns map to which states
When it finishes, you'll have:
```
~/warp-agent/
├── .warp-coder/
│   └── config.json
├── .env
└── .gitignore
```

Step 6 — Configure the Vercel deploy command
The init wizard creates the config but doesn't know about your deploy command. Open `.warp-coder/config.json` and add the `deploy` field to your repo entry:
```json
{
  "board": {
    "provider": "github",
    "project": 7,
    "owner": "acme",
    "columns": {
      "todo": "Todo",
      "inProgress": "In Progress",
      "inReview": "In Review",
      "deploy": "Deploy",
      "done": "Done",
      "blocked": "Blocked",
      "waiting": "Waiting"
    }
  },
  "claude": {
    "maxTurns": 20
  },
  "pollInterval": 30,
  "maxRevisions": 3,
  "repos": [
    {
      "url": "git@github.com:acme/website.git",
      "deploy": "npm run deploy:prod"
    }
  ]
}
```

Then add your Vercel token to `.env`:
```bash
WARP_CODER_WARPMETRICS_KEY=wm_live_...
WARP_CODER_GITHUB_TOKEN=github_pat_...
WARP_CODER_REVIEW_TOKEN=github_pat_...
VERCEL_TOKEN=...
```

Multi-repo setups
If your product spans multiple repos (e.g. a Next.js frontend and an Express API, both on Vercel), list them all:
```json
{
  "repos": [
    {
      "url": "git@github.com:acme/frontend.git",
      "deploy": "npm run deploy:prod"
    },
    {
      "url": "git@github.com:acme/api.git",
      "deploy": "npm run deploy:prod"
    }
  ]
}
```

The agent handles cross-repo changes in a single issue. It clones whichever repos it needs, creates branches, and opens separate PRs. Deploy runs in dependency order.
Step 7 — Add quality gates with hooks
Hooks run shell commands at key points in the pipeline. They're optional but recommended — they catch mistakes before they reach production.
Add a hooks key to your config:
```json
{
  "hooks": {
    "onBeforePush": "npm run lint && npm run test",
    "onBeforeMerge": "npm run test",
    "timeout": 300
  }
}
```
}If a hook fails (non-zero exit), the pipeline stops and the issue moves to Blocked.
Available hooks
| Hook | Runs when |
|---|---|
| `onBranchCreate` | After the agent creates a feature branch |
| `onBeforePush` | Before pushing the implementation |
| `onPRCreated` | After a PR is opened |
| `onBeforeMerge` | Before merging an approved PR |
| `onMerged` | After a successful merge |
Each hook has access to environment variables: `ISSUE_NUMBER`, `PR_NUMBER`, `BRANCH`, and `REPO`.
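Hooks are plain shell commands, so a hook script can read these variables directly. A minimal sketch of an `onBeforePush` script (hypothetical; wire in your real checks where noted):

```shell
#!/usr/bin/env bash
# Sketch of an onBeforePush hook: log the run context, then run quality checks.
# ISSUE_NUMBER, PR_NUMBER, BRANCH, and REPO are provided by the pipeline.
set -euo pipefail

hook_context() {
  echo "issue=#${ISSUE_NUMBER:-?} pr=#${PR_NUMBER:-?} branch=${BRANCH:-?} repo=${REPO:-?}"
}

hook_context
# npm run lint && npm run test   # your actual quality gate goes here
```

Point `onBeforePush` at the script's path, and a non-zero exit still moves the issue to Blocked.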
Step 8 — Teach the agent your standards with skills
Skills are markdown files that give the agent project-specific instructions. The agent reads them before implementing and reviewing code.
```bash
mkdir -p ~/warp-agent/.warp-coder/skills/project
```

Create `~/warp-agent/.warp-coder/skills/project/SKILL.md`:
```markdown
## Stack

- Next.js 15 with App Router
- TypeScript strict mode
- Tailwind CSS
- Drizzle ORM with Postgres

## Rules

- All new components go in src/components/
- API routes validate input with Zod schemas
- No default exports — use named exports everywhere
- Tests live next to source files: Button.tsx → Button.test.tsx
- Never commit console.log to production code
```

The agent copies skills into each workspace before running Claude Code. During reviews, it reads them to decide whether to approve or request changes.
You can create multiple skill directories for different concerns:
```
.warp-coder/skills/
├── project/SKILL.md    # Stack, structure, conventions
├── review/SKILL.md     # Review criteria, things to flag
└── security/SKILL.md   # Auth patterns, input validation rules
```

Step 9 — Verify and test
Validate the config
```bash
npx @warpmetrics/coder verify
```

This checks that your state machine graph is consistent — all transitions are valid, all executors exist, all outcomes are defined.
Run a test issue
Before going fully autonomous, run through the pipeline with a single small issue.
Create an issue on your repo:
> Title: Add a health check endpoint
>
> Create `app/api/health/route.ts` that returns `{ status: "ok" }` with a 200 response. Add a simple test.

Add the issue to your project board in the Todo column.
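For reference, the finished route from this issue could be as small as the following sketch (Next.js App Router; the agent's actual output may differ):

```typescript
// app/api/health/route.ts
// Next.js App Router route handler for GET /api/health.
export async function GET(): Promise<Response> {
  // Response.json sets the JSON content type and defaults to status 200.
  return Response.json({ status: "ok" });
}
```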
Start the agent:
```bash
npx @warpmetrics/coder watch
```

Watch the terminal. You'll see a spinner showing the active issue and elapsed time. The agent will:
- Move the issue to In Progress
- Clone your repo, create an `agent/issue-N` branch
- Implement the endpoint
- Commit, push, and open a PR
- Move to In Review
- Run a code review pass
- Merge if approved (or revise if changes requested)
- Wait for deploy approval in the Deploy column
Once you're satisfied, approve the deploy by moving the issue forward (the agent checks the board state on each poll).
Step 10 — Run the agent
For production use, keep the agent running as a persistent process.
Using PM2
```bash
npm install -g pm2
pm2 start npx -- @warpmetrics/coder watch
pm2 save
pm2 startup
```

PM2 restarts the process if it crashes and auto-starts on boot.
Using a background process
```bash
cd ~/warp-agent
nohup npx @warpmetrics/coder watch > agent.log 2>&1 &
```

The agent polls your board every 30 seconds (configurable via `pollInterval`). When it finds issues in Todo, it picks them up.
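Conceptually, the watch loop is just poll-then-handle on a timer. A simplified sketch (hypothetical; not Warp Coder's actual internals):

```typescript
// Minimal polling loop: fetch issue ids in Todo, handle each, sleep, repeat.
type Poll = () => Promise<string[]>;
type Handle = (issueId: string) => Promise<void>;

export async function watchLoop(
  poll: Poll,
  handle: Handle,
  { intervalMs = 30_000, maxTicks = Infinity } = {},
): Promise<void> {
  for (let tick = 0; tick < maxTicks; tick++) {
    for (const issueId of await poll()) {
      await handle(issueId); // one issue at a time, in board order
    }
    if (tick + 1 < maxTicks) {
      await new Promise((resolve) => setTimeout(resolve, intervalMs));
    }
  }
}
```

The real agent wraps each `handle` call in state-machine bookkeeping and error handling; `maxTicks` exists here only to make the sketch testable.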
How the pipeline works
```
Todo         In Progress        In Review         Deploy         Done
 │                │                  │                │             │
 │  ┌─────────┐   │   ┌─────────┐    │   ┌────────┐   │  ┌────────┐ │
 ├─→│Implement├──→├──→│ Review  ├───→├──→│ Merge  ├──→├─→│ Deploy ├→│
 │  └─────────┘   │   └────┬────┘    │   └────────┘   │  └────────┘ │
 │                │        │ ▲       │                │             │
 │                │   ┌────▼─┴───┐   │                │             │
 │                │   │  Revise  │   │                │             │
 │                │   └──────────┘   │                │             │
 │                │    (up to 3x)    │                │             │
```

When the agent needs help
Waiting — The agent couldn't determine the right approach. It posts a comment on the issue asking for clarification, and moves the issue to the Waiting column. Reply on the issue, and the agent picks it back up on the next poll.
Blocked — Something failed: implementation error, test failure, merge conflict, or deploy crash. The issue moves to Blocked with a comment explaining what went wrong. Fix the underlying problem, then move the issue back to Todo to retry.
Monitor your agent with WarpMetrics
Every step is tracked automatically. Open your WarpMetrics dashboard to see:
- Run timeline — Each issue's journey through the pipeline
- Cost breakdown — Token usage and LLM spend per issue
- Outcome rates — Success, failure, and revision patterns
- Deployment history — Which issues shipped when
Query your agent from Claude Desktop
Install the WarpMetrics MCP server to ask questions about your agent's performance in natural language:
```bash
npm install -g @warpmetrics/mcp
```

Add it to `~/.claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "warpmetrics": {
      "command": "warpmetrics-mcp",
      "env": {
        "WARPMETRICS_API_KEY": "wm_live_..."
      }
    }
  }
}
```

Then ask Claude:
- "How many issues did the agent ship this week?"
- "What's the deploy success rate?"
- "Show me the most expensive runs"
- "Which issues are currently blocked?"
Full configuration reference
.warp-coder/config.json
```json
{
  "board": {
    "provider": "github",
    "project": 7,
    "owner": "acme",
    "columns": {
      "todo": "Todo",
      "inProgress": "In Progress",
      "inReview": "In Review",
      "deploy": "Deploy",
      "done": "Done",
      "blocked": "Blocked",
      "waiting": "Waiting"
    }
  },
  "hooks": {
    "onBeforePush": "npm run lint && npm run test",
    "onBeforeMerge": "npm run test",
    "timeout": 300
  },
  "claude": {
    "maxTurns": 20
  },
  "pollInterval": 30,
  "maxRevisions": 3,
  "repos": [
    {
      "url": "git@github.com:acme/website.git",
      "deploy": "npm run deploy:prod"
    }
  ]
}
```

.env
```bash
WARP_CODER_WARPMETRICS_KEY=wm_live_...
WARP_CODER_GITHUB_TOKEN=github_pat_...
WARP_CODER_REVIEW_TOKEN=github_pat_...
VERCEL_TOKEN=...
```

CLI commands
| Command | Description |
|---|---|
| `npx @warpmetrics/coder init` | Interactive setup wizard |
| `npx @warpmetrics/coder watch` | Start the polling loop |
| `npx @warpmetrics/coder verify` | Validate the pipeline config |
| `npx @warpmetrics/coder debug [issue#]` | Test the state machine interactively |
| `npx @warpmetrics/coder memory` | Print the agent's learned lessons |
| `npx @warpmetrics/coder compact` | Force-rewrite the memory file |
| `npx @warpmetrics/coder release` | Release shipped issues |
| `npx @warpmetrics/coder release --preview` | Preview the changelog without releasing |
Troubleshooting
Agent doesn't pick up issues
- Check the issue is in the Todo column on the right board
- Verify the project number matches your config
- Run `gh auth status` and confirm scopes include `project` and `repo`
- Test interactively: `npx @warpmetrics/coder debug <issue-number>`
Deploy fails
- Test manually first: `cd your-repo && npm run deploy:prod`
- Verify your Vercel token: `vercel whoami --token=$VERCEL_TOKEN`
- Make sure `vercel link` has been run in the repo (the `.vercel/` directory must exist)
- The deploy step times out after 10 minutes by default
Claude Code errors
- Verify `claude -p "Hello"` returns a response
- Check your Anthropic API key or subscription is active
- Increase `claude.maxTurns` in config if the agent runs out of turns (default: 20)
Reviews are too strict or too lenient
- Add or refine `.warp-coder/skills/review/SKILL.md` with specific criteria
- The agent reads these before every review pass
Agent stuck in a revision loop
- The pipeline caps at 3 revision attempts by default
- If it hits the limit, the issue moves to Blocked
- Check the PR comments for the specific feedback loop
- Adjust `maxRevisions` in config if needed
Writing effective issues
The agent performs best with clear, specific issues. Vague instructions lead to vague implementations.
Good issue:
> Title: Add rate limiting to POST /api/contact
>
> The endpoint at `src/app/api/contact/route.ts` has no rate limiting. Add IP-based rate limiting using Upstash:
>
> - 5 requests per minute per IP
> - Return 429 with `{ error: "Too many requests" }` when exceeded
> - Use `@upstash/ratelimit` and `@upstash/redis`
> - Env vars `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` are already in Vercel
Bad issue:
> Make the contact form better
The more context you give — file paths, package preferences, expected behavior, edge cases — the better the result. Think of it like writing a spec for a junior developer who's smart but has never seen your codebase.
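If you want the repo itself to prompt for this context, a GitHub issue form can enforce it. A sketch (hypothetical `.github/ISSUE_TEMPLATE/agent-task.yml`; field names are examples):

```yaml
name: Agent task
description: Work item for the coding agent
body:
  - type: input
    id: files
    attributes:
      label: Relevant files
      placeholder: src/app/api/contact/route.ts
  - type: textarea
    id: behavior
    attributes:
      label: Expected behavior
      description: Inputs, outputs, edge cases, and preferred packages
    validations:
      required: true
```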
Start building with WarpMetrics
Track every LLM call, measure outcomes, and let your agents query their own performance data.