SDK Reference
The @warpmetrics/warp SDK instruments your LLM clients and sends telemetry to Warpmetrics.
Installation
npm install @warpmetrics/warp
Configuration
The SDK reads configuration from environment variables:
WARPMETRICS_API_KEY (required): Your API key (starts with wm_live_ or wm_test_).
WARPMETRICS_API_URL: API base URL. Default: https://api.warpmetrics.com
WARPMETRICS_ENABLED: Set to false to disable tracking. Default: true.
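For example, in a Node.js script you can set these programmatically, a minimal sketch assuming the variables are set before the SDK initializes (the key value is a placeholder; in production, set these in your shell or deployment config):
// Placeholder key; real keys start with wm_live_ or wm_test_
process.env.WARPMETRICS_API_KEY = 'wm_test_xxx';
// Disable tracking entirely, e.g. in unit tests
process.env.WARPMETRICS_ENABLED = 'false';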
API Reference
warp
warp(client: OpenAI | Anthropic): ProxiedClient
Wraps an OpenAI or Anthropic client. Returns a proxied version that automatically tracks all API calls, including tokens, cost, latency, and status.
import { warp } from '@warpmetrics/warp';
import OpenAI from 'openai';
import Anthropic from '@anthropic-ai/sdk';
const openai = warp(new OpenAI());
const anthropic = warp(new Anthropic());
run
run(label: string, opts?): Run
run(ref: Act, label: string, opts?): Run
Creates a new run. The label is used to categorize and group runs (e.g., 'Code review', 'Support agent'). When ref is an act, creates a follow-up run linked to that act. Returns a frozen run object.
import { run } from '@warpmetrics/warp';
const r = run('Code review');
// Or as a follow-up to an act
const r2 = run(a, 'Code review');
group
group(ref: Run | Group, label: string, opts?): Group
Creates a new group linked to a run or parent group. Groups organize related calls into phases (e.g., 'Planning', 'Execution'). The ref is required and auto-links the group.
import { run, group } from '@warpmetrics/warp';
const r = run('Code review');
const g = group(r, 'Planning');
call
call(ref: Run | Group, response: Response, opts?): void
Emits a tracked LLM call and links it to a run or group. Only responses passed to call() are sent to the API; unclaimed responses are never transmitted.
import { call } from '@warpmetrics/warp';
const res = await openai.chat.completions.create({...});
call(r, res); // Emit and link call to run
// Or link to a group
call(g, res);
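To make the claiming behavior concrete, a sketch (model and messages are illustrative): both requests below execute normally, but only the claimed response is transmitted.
// Neither request is sent to Warpmetrics until claimed via call()
const draft = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Draft a review comment' }],
}); // unclaimed: never transmitted
const final = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Rewrite it more concisely' }],
});
call(r, final); // only this response is emitted and linked
outcome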
outcome(ref: Run | Group | Call, name: string, opts?): Outcome
Records an outcome for a run, group, or call. The name should be a human-readable label in Title Case (e.g., 'Completed', 'Failed', 'Rate Limited'). Use classifications in the dashboard to map these to success/failure. The opts bag can carry arbitrary metadata.
import { outcome } from '@warpmetrics/warp';
outcome(r, 'Completed', { reason: 'All checks passed' });
act
act(ref: Outcome, name: string, opts?): Act
Records an action to take after an outcome. Use this to close the improvement loop: declare a next step (retry, change prompt, switch model) and link it to a follow-up run.
import { act, outcome, run } from '@warpmetrics/warp';
const o = outcome(r, 'Failed', { reason: 'timeout' });
const a = act(o, 'Retry');
const r2 = run(a, 'Code review'); // follow-up run
ref
ref(target: Run | Group | Call | Response): string
Returns the Warpmetrics tracking ID for any tracked entity. Useful for logging or correlating with external systems.
import { ref } from '@warpmetrics/warp';
console.log(ref(r)); // wm_run_01abc...
console.log(ref(res)); // wm_call_01abc...
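Beyond plain logging, one way to correlate with an external system is to embed the ID in structured log lines (the event name here is illustrative):
// Attach the tracking ID to structured logs for later correlation
console.log(JSON.stringify({ event: 'review_started', warpRunId: ref(r) }));
flush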
flush(): Promise<void>
Manually flushes all pending events to the API. Events are automatically batched and flushed, but you can call this to ensure delivery before process exit.
import { flush } from '@warpmetrics/warp';
await flush(); // Ensure all events are sent
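In a long-lived Node.js process, one option is to hook process shutdown; a sketch using beforeExit (the guard keeps the handler from re-triggering itself, since beforeExit can fire again after new async work is scheduled):
let flushed = false;
process.on('beforeExit', async () => {
  if (flushed) return; // avoid re-entering once a flush has been started
  flushed = true;
  await flush(); // deliver any remaining telemetry before the process exits
});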
Streaming Support
The SDK automatically handles streaming responses. Token counts and latency are captured as the stream completes. Costs are calculated server-side. No extra code needed.
const stream = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: 'Hello!' }],
stream: true,
});
for await (const chunk of stream) {
// Process chunks as usual
}
// Emit the tracked call after stream completes
call(r, stream);
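Putting it all together, a minimal end-to-end sketch composed from the calls documented above (labels, outcome names, and the prompt are illustrative; assumes WARPMETRICS_API_KEY is set and an ESM context with top-level await):
import { warp, run, group, call, outcome, act, flush } from '@warpmetrics/warp';
import OpenAI from 'openai';

const openai = warp(new OpenAI());

// Start a run and a phase group
const r = run('Code review');
const g = group(r, 'Planning');

// Make a tracked call and claim it
const res = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Review this diff: ...' }],
});
call(g, res);

// Record what happened, declare the next step, and link a follow-up run
const o = outcome(r, 'Failed', { reason: 'timeout' });
const a = act(o, 'Retry');
const r2 = run(a, 'Code review'); // follow-up run linked via the act

await flush(); // ensure delivery before the process exits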