Acts & Feedback Loops

Acts record the action to be taken after an outcome and create traceable chains between runs. They're how you model retry logic, self-improvement, and iterative refinement.

What is an act?

An act represents a decision to do something in response to an outcome. If an outcome says "what happened," an act says "what we're going to do about it."

Acts exist to connect the end of one run to the beginning of the next. Without them, retries and iterations are just disconnected runs. With them, you get a traceable chain:

Run 1 (Code review)
└── Outcome: "failed" (tests failing)
    └── Act: "retry" (fix and rerun)
        └── Run 2 (Code review) ← follow-up
            └── Outcome: "completed"

ID format: wm_act_<ulid>

Why acts exist

The best AI agents don't just run once. They evaluate their own output, detect problems, and try again. This creates a loop:

Run → Outcome → Act → Run → ...

Acts make this loop observable. Without them, you'd see 3 runs with the same label and no way to know they're related. With acts, the dashboard shows the full chain: which attempt led to which, what the agent decided to do differently, and whether the retry actually helped.

API

act(target: Outcome, name: string, opts?: object): Act

Records an action to take after an outcome. Returns a frozen act handle.

Parameters
target — The outcome this act responds to. Must be an outcome handle or a wm_oc_ ref string.
name — What action is being taken: "Retry", "Refine Prompt", "Switch Model", "Escalate", etc.
opts — Optional metadata about the action (what changed, why, strategy, etc.).
Returns

A frozen object with id and _type: 'act'. Pass this as the first argument to run() to create a follow-up run.
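
For example, you can pass either a live outcome handle or a persisted wm_oc_ ref string as the target. A minimal sketch (savedRef is assumed to be a wm_oc_... string you stored from an earlier process):

const r1 = run('My agent');
const oc = outcome(r1, 'Failed', { reason: 'timeout' });

// Target the live outcome handle...
const a = act(oc, 'Retry');

// ...or a persisted ref string, e.g. when the retry happens in a later process
const a2 = act(savedRef, 'Retry');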

run(act: Act, label: string, opts?: object): Run

Creates a follow-up run linked to an act. The run is part of the same chain.

Basic example

Retry with refinement
import { run, group, call, outcome, act, flush } from '@warpmetrics/warp';

// First attempt
const r1 = run('Content generator');
const draft = group(r1, 'Drafting');
call(draft, await openai.chat.completions.create({...}));

// Evaluate the result
const oc = outcome(r1, 'Low Quality', {
  reason: 'Too generic, needs more specific examples',
});

// Decide to retry with a different approach
const a = act(oc, 'Refine Prompt', {
  change: 'Added domain-specific examples to prompt',
});

// Second attempt — linked to the act
const r2 = run(a, 'Content generator');
const draft2 = group(r2, 'Drafting');
call(draft2, await openai.chat.completions.create({...}));

outcome(r2, 'Completed', { quality: 'good' });

await flush();

Self-improving agent pattern

Acts shine in agents that evaluate and improve their own output. Here's a pattern for an agent that loops until it's satisfied:

Self-improving agent
async function selfImprovingAgent(task, maxAttempts = 3) {
  let actRef = null;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Create run (first attempt or follow-up)
    const r = actRef
      ? run(actRef, 'Writer', { attempt })
      : run('Writer', { attempt });

    // Generate
    const gen = group(r, 'Generate');
    const result = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: task }],
    });
    call(gen, result);

    // Evaluate
    const review = group(r, 'Self-review');
    const evaluation = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{
        role: 'user',
        content: `Rate this output 1-10: ${result.choices[0].message.content}`,
      }],
    });
    call(review, evaluation);

    const score = parseInt(evaluation.choices[0].message.content, 10);

    if (score >= 8) {
      outcome(r, 'Completed', { score });
      break;
    }

    // Not good enough — act and retry
    const oc = outcome(r, 'Below Threshold', { score });
    actRef = act(oc, 'Retry', { targetScore: 8, actualScore: score });
  }

  await flush();
}
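
Calling the helper is a single await; the runs, outcomes, and acts for every attempt land in one traceable chain (the task string below is just a placeholder):

await selfImprovingAgent('Write a launch announcement for the new dashboard');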

Common act names

Like outcomes, act names should be consistent and descriptive:

- Retry — Same approach, try again (transient failure, rate limit)
- Refine Prompt — Adjusted the prompt based on the output quality
- Switch Model — Trying a different model (e.g., gpt-4o → claude)
- Decompose — Breaking the task into smaller sub-tasks
- Escalate — Handing off to a human or more capable system
- Add Context — Adding more context or examples to the input
- Fix and Retry — Fixing a specific issue and retrying
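
Whichever names you standardize on, put the specifics in opts so each act explains itself in the dashboard. A sketch of a Switch Model act (the opts keys here are illustrative, not a required schema):

const a = act(oc, 'Switch Model', {
  from: 'gpt-4o',
  to: 'claude',
  reason: 'Repeated formatting errors in structured output',
});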

What gets tracked

In the dashboard, act chains let you:

- See the full retry history for any run — how many attempts, what changed each time
- Measure improvement — did the retry produce a better outcome?
- Identify patterns — which act types lead to successful retries?
- Track act frequency — how often does your agent need to retry?

Important constraint

Acts can only be created from outcomes. You can't create an act from a run or group directly. The chain is always:

outcome → act → run → outcome → act → run → ...

This is by design. An act is a response to a specific outcome. Without knowing what happened (the outcome), you can't meaningfully decide what to do next (the act).
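
Concretely, only an outcome handle (or its wm_oc_ ref) is a valid target for act(); a minimal sketch of the shape every chain takes:

const r1 = run('My agent');
// act(r1, 'Retry');              // not allowed: a run is not a valid target
const oc = outcome(r1, 'Failed'); // what happened
const a = act(oc, 'Retry');       // what we'll do about it
const r2 = run(a, 'My agent');    // the follow-up attempt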

Tips

- Not every agent needs acts. They're for agents that retry, iterate, or self-improve. Simple single-pass agents can skip them entirely.
- Put the 'what changed' in the act opts. This makes it easy to see what the agent tried differently in each iteration.
- Keep act chains short. If your agent is retrying more than 3-5 times, the problem is likely in the prompt or approach, not the retry logic.
- Use act stats in the dashboard to identify which actions actually lead to improvement.