Agent run control

Claude Code keeps working while your evening stays yours.

HeadsDown keeps the loop moving: nothing needs you until a window-closing or review-needed choice appears; non-urgent asks queue for morning and come back ready to resume.

Nothing needs you → Window closing → Review needed → Queued for morning → Ready to resume
Sample window-closing decision
{
  "currentState": "window_closing",
  "recommendedActionKey": "allow_for_duration",
  "allowedActionKeys": ["allow_for_duration", "pause_and_summarize", "keep_queued"],
  "privacyMode": "metadata_only",
  "contentReceived": false
}

Claude Code controls the model. HeadsDown controls the run.

HeadsDown does not pick Claude's model. It tells the run whether to continue, narrow, ask, queue, wrap, or resume based on your rules and the metadata path.

  • Metadata-only agent-run event path is live.
  • Routing-decision API is draft and labeled as such.
  • No proof metrics without evidence.

How HeadsDown keeps the run moving

The agent asks before it interrupts. HeadsDown returns the next safe move.

The query is metadata. The response is a direction. The prompt, code, diff, logs, and conversation stay with the integration that already has them.

1. Your rules stay current

HeadsDown reads the state you already manage: working windows, focus blocks, account settings, connected-tool settings, and the current mode your tools should respect.

2. The agent sends metadata

The metadata path uses categories, counts, buckets, booleans, call keys, action keys, reason codes, opaque refs, and validation state. It is not the place to send work content.
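As a sketch of what a metadata-only event could look like, here is a hypothetical shape with an equally hypothetical guard against content fields. The field names are illustrative assumptions, not the shipped schema:

```typescript
// Hypothetical shape of a metadata-only agent-run event.
// Every field is a category, count, bucket, boolean, key, or opaque ref --
// never prompt text, code, diffs, or logs.
interface AgentRunEvent {
  callKey: string;                          // e.g. "window_check"
  taskCategory: string;                     // e.g. "refactor"
  sizeBucket: "small" | "medium" | "large"; // bucket, not raw size
  workspaceRef: string;                     // opaque ref, not a repo name
  validationState: "valid" | "invalid";
  queuedDecisionCount: number;
}

// Field names that would indicate work content leaking into the path.
const CONTENT_FIELDS = ["prompt", "code", "diff", "logs", "conversation"];

// Accept an event only if it carries no content-shaped fields.
function isMetadataOnly(event: Record<string, unknown>): boolean {
  return CONTENT_FIELDS.every((field) => !(field in event));
}
```

The point of the guard is direction of flow: the integration that already holds the prompt and code keeps them, and only derived facts cross the wire.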

3. HeadsDown returns the next move

The run gets a structured direction: stay quiet, surface a review, queue for morning, save a handoff, or resume. Stops are user-elected, not automatic.
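Put together, the three steps could be sketched as the loop below. The endpoint is stubbed and the action-key names are assumptions; the public routing-decision API is still draft:

```typescript
// Hypothetical structured direction returned by HeadsDown.
interface RoutingDecision {
  recommendedActionKey:
    | "continue"
    | "narrow_scope"
    | "surface_review"
    | "queue_for_morning"
    | "pause_and_summarize"
    | "resume_run";
}

// Stub standing in for the draft routing-decision endpoint.
// A real client would POST metadata and await the structured call.
async function requestDecision(callKey: string): Promise<RoutingDecision> {
  return { recommendedActionKey: "queue_for_morning" };
}

// The agent asks before it interrupts, then follows the returned move.
async function nextMove(callKey: string): Promise<string> {
  const decision = await requestDecision(callKey);
  switch (decision.recommendedActionKey) {
    case "continue":            return "stay quiet, keep working";
    case "narrow_scope":        return "reduce scope, keep working";
    case "surface_review":      return "raise a clear review request";
    case "queue_for_morning":   return "defer the ask, save a handoff";
    case "pause_and_summarize": return "wrap cleanly with a summary";
    case "resume_run":          return "pick up the saved handoff";
  }
}
```

Because the decision arrives as a key rather than prose, the agent can act on it with an exhaustive switch instead of interpreting policy text.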

Canonical moments

Five moments that keep the run moving right.

These examples are sample states unless the connected client has shipped the matching surface. They show the product language, not production proof metrics.

Nothing needs you

Quiet while work stays on path.

The run is inside approved bounds, so no interruption is needed.

Window closing

Extend or wrap cleanly.

HeadsDown surfaces the decision before the window ends so the user can choose the next move.

Review needed

Ask only when judgment is required.

Boundary decisions queue as a clear review request, not a vague interruption.

Queued for morning

Evening stays yours.

Non-urgent asks wait for the next reachable window while the run keeps moving inside guardrails.

Ready to resume

Return without rework.

Saved handoff plus queued decisions make the next step immediately clear.

Queued-for-morning example

Your evening stays yours.

When the user is away, HeadsDown tells the agent to defer human-input decisions, save a clean handoff, and keep working on what can safely continue.

  • Non-urgent asks queue for the next reachable window.
  • Human-blocking decisions become reviewable items, not after-hours pings.
  • Local integration memory keeps the execution context; HeadsDown receives derived facts only.
Sample queued-for-morning state
{
  "currentState": "queued_for_morning",
  "recommendedActionKey": "queue_for_morning",
  "deferUntil": "next_reachable_window",
  "handoffSaved": true,
  "userInterrupted": false
}
Sample ready-to-resume state
{
  "currentState": "ready_to_resume",
  "recommendedActionKey": "resume_run",
  "queuedDecisionCount": 1,
  "handoffSaved": true,
  "contentReceived": false
}

Ready-to-resume example

Morning starts with a clean handoff.

HeadsDown keeps agents going, then brings back the exact queued decision and next approved step so work resumes without rebuilding context.

This is sample/demo behavior while final cross-client screenshots and surfaces land. It is not a production proof metric.

Guided demo

Watch HeadsDown queue the ask, then bring it back cleanly.

The demo is a scripted sample of one developer and one agent decision loop. It shows the product story without claiming production proof metrics.

Why prompts are not enough

A prompt cannot know what changed after it was written.

Your calendar moves. Your evening starts. A run grows from small to risky. HeadsDown turns those changing constraints into calls the agent can act on.

Prompts go stale

They capture intent at launch, not the user's current window, interruptions, queue, or rule changes.

Agents need structured calls

A call key and action key are easier to follow than another paragraph of policy text.

Receipts build trust

Users can see what was queued, narrowed, or brought back, without exposing prompts or code to HeadsDown.
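A receipt can be assembled from the same facts the routing decision already used, so nothing new ever leaves the integration. A minimal sketch, with assumed field names:

```typescript
// Hypothetical receipt entry: what was queued, narrowed, or brought back,
// built only from action keys, counts, and timestamps -- never content.
interface Receipt {
  actionKey: string;
  queuedDecisionCount: number;
  recordedAt: string; // ISO 8601 timestamp
}

function recordReceipt(actionKey: string, queuedDecisionCount: number): Receipt {
  return {
    actionKey,
    queuedDecisionCount,
    recordedAt: new Date().toISOString(),
  };
}
```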

Proof and trust

Public claims stay tied to shipped or labeled behavior.

The trust pages explain what is live, what is draft, and what is planned. No SOC 2 claim, retention promise, data residency promise, or proof metric appears here without backing.

HeadsDown starts with agent-run control because developers feel the pain first. The broader platform is the same primitive: systems ask before interrupting a person, and HeadsDown returns the play.

Let your agents run without babysitting them.

Create an account to set your rules, then watch the demo to see how supported agent integrations use HeadsDown routing decisions.

FAQ

Questions before you let an agent run?

Does HeadsDown see my code or prompts?

No. The metadata path uses routing facts such as task category, size buckets, opaque workspace refs, call keys, action keys, validation state, and timing. It does not need source code, prompts, conversation history, logs, repo names, or file contents.

Does HeadsDown pick the model?

No. For Claude Code, Claude Code controls the model. HeadsDown controls the run: continue, narrow, queue, ask, wrap, or resume. Other integrations should only claim model-related behavior when that behavior actually ships.

Why not just put this in the prompt?

A prompt is static. HeadsDown uses current state, event metadata, and standing rules to return a structured call at the moment the agent needs one.

What is live today, and what is still draft?

The consumer app, calendar/push surfaces, API keys, agent-run event submission path, metadata validator, and calibration substrate are live. The public routing-decision endpoint, partner OAuth, developer dashboard, metering, public docs site, and some cross-client screenshots are draft or planned until labeled otherwise.