---
name: delvry
description: Use when you need a fact, opinion, or piece of context from a person who is not in the current session — stakeholder, client, teammate, expert. Drafts a structured survey, mints per-recipient links via the Delvry service, and retrieves responses on demand or via blocking long-poll.
---

# Delvry — Asynchronous Q&A with humans

> **If you are an AI agent, use the MCP tools listed below.** The `@delvry/cli` package is designed for humans at a terminal — it requires interactive login and is not suited for automated use. Agents should call the MCP tools directly via the `@delvry/mcp` server configured with `DELVRY_API_KEY`.

You are about to ask a question that needs an answer from someone outside this session. Use Delvry to gather structured responses asynchronously.

## When to reach for Delvry

Use Delvry when **the answer must come from someone not in the current session** and the operator alone can't supply it. Examples:
- Founder / executive perspective on a strategic call
- Client preferences for an in-progress deliverable
- Teammate domain expertise (legal, security, finance, ops)
- Sprint retros or team check-ins
- Multi-stakeholder RFC input before a decision

Do not use Delvry when:
- The operator can answer themselves (just ask them in-session)
- The answer is in code, docs, or a known knowledge base (read it instead)
- You need synchronous interactive dialogue (Phase 2 — not yet shipped)

## Standard flow

0. **Check for an API key.** If `DELVRY_API_KEY` is unset (or any tool returns 401), call `delvry_signup({ email, account_name })` once. Persist `first_api_key` to your secret store, then re-invoke the MCP server with `DELVRY_API_KEY` set. Skip if a key is already configured.
1. **Identify the audience.** Named individuals or open/anonymous?
2. **Draft the survey JSON.** Keep it small (1–7 questions). Use `show_if` for branching. Use `pages` to group related questions. Mark optional questions with `required: false` explicitly.
3. **Call `delvry_create_survey`.** Returns `survey_id` and either `links[]` (named) or `anonymous_url` (anonymous).
4. **Hand the links to the operator** with a copy-paste-ready distribution snippet:
   > "Please share these links — one per recipient. Each is single-use, expires in 14 days, and tracks who's answered."
5. **Decide on retrieval mode:**
   - **Synchronous** (you expect responses within minutes) → `delvry_await_responses` with appropriate `timeout_seconds`.
   - **Asynchronous** (responses take hours/days) → print the `survey_id` and tell the operator they can resume the session later with it; fetch the responses then via `delvry_get_responses`.
6. **Summarize responses into actionable context** for the next step in the agent's task. Don't just dump the raw responses — extract the signal.
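
Steps 2 and 3 above can be sketched as a single `delvry_create_survey` payload. The ids and values are illustrative, and passing `version` and `max_responses` here is an assumption based on the Survey JSON Schema section rather than the tool signature:

```json
{
  "version": "1",
  "title": "Sprint 14 retro",
  "audience_mode": "anonymous",
  "max_responses": 8,
  "questions": [
    { "id": "q_went_well", "type": "long_text",
      "prompt": "What went well this sprint?" },
    { "id": "q_mood", "type": "scale", "min": 1, "max": 5,
      "min_label": "rough", "max_label": "great",
      "prompt": "How did the sprint feel overall?" }
  ]
}
```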

## Survey design principles

- **Small > complete.** Five focused questions beat a 20-question manifesto. People drop off.
- **Branch aggressively.** Hide irrelevant questions with `show_if`.
- **Open-ended last.** Lead with closed (single_choice / multi_choice) to warm up; ask `long_text` after.
- **Mobile-first prompts.** Most stakeholders answer on phones. Keep prompts under 100 chars where possible.
- **No leading questions.** Don't bias the response.
- **One topic per survey.** If two topics, that's two surveys.

## Survey-level fields

- `title` — required. Shown at the top of the form.
- `description` — optional. Shown below the title.
- `thank_you_message` — optional (≤2000 chars, plaintext, multiline preserved). Shown after the recipient submits.
- `expires_at` — ISO datetime; survey closes automatically.
- `max_responses` — cap on total responses (anonymous surveys).
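
A fragment exercising the survey-level fields above (all values illustrative):

```json
{
  "title": "Q3 roadmap input",
  "description": "Five quick questions; takes under three minutes.",
  "thank_you_message": "Thanks! We'll share the aggregated results on Friday.",
  "expires_at": "2026-03-01T00:00:00Z",
  "max_responses": 25
}
```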

## Audience modes

- **`named_recipients`** — provide `recipients: [{ name, external_id? }]`. Each recipient gets a unique signed link. Use when you need to know who answered (status tracking).
- **`anonymous`** — set `max_responses` and/or `expires_at`. One shared `anonymous_url`. Use for retros, open polls, broad input.
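
What `named_recipients` looks like in the create payload (names and ids hypothetical); `external_id` is optional per recipient:

```json
{
  "audience_mode": "named_recipients",
  "recipients": [
    { "name": "Priya N.", "external_id": "emp-1042" },
    { "name": "Sam K." }
  ]
}
```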

## Question types (v1.6)

- `short_text` — single-line text. `max_length` defaults to 500.
- `long_text` — multi-line text. `max_length` defaults to 5000.
- `single_choice` — radio. Provide `options: [{ value, label }]`.
- `multi_choice` — checkboxes. Add `min_selections` / `max_selections` if needed.
- `number` — numeric input. Add `min` / `max` / `step` if needed.
- `yes_no` — boolean answer; renders as Yes/No buttons.
  ```json
  {"id":"q","type":"yes_no","prompt":"Use EHR?"}
  ```
- `scale` — integer in `[min,max]`; renders as a row of buttons. Use for Likert/NPS-lite. Range capped at 0-10.
  ```json
  {"id":"q","type":"scale","min":1,"max":5,"min_label":"smooth","max_label":"chaos"}
  ```
- `ranking` — ordered array of option values; click-to-rank UI.
  ```json
  {"id":"q","type":"ranking","options":[{"value":"a","label":"A"}],"max_rank":3}
  ```
- `date` — ISO `YYYY-MM-DD` answer.
  ```json
  {"id":"q","type":"date","min":"2020-01-01"}
  ```
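
The types without inline examples above follow the same shape. A sketch of `single_choice`, `multi_choice`, and `number` with illustrative ids and options:

```json
[
  { "id": "q_env", "type": "single_choice",
    "prompt": "Primary deploy target?",
    "options": [ { "value": "aws", "label": "AWS" },
                 { "value": "gcp", "label": "GCP" } ] },
  { "id": "q_tools", "type": "multi_choice",
    "prompt": "Which tools do you use weekly?",
    "options": [ { "value": "ci", "label": "CI" },
                 { "value": "apm", "label": "APM" } ],
    "min_selections": 1, "max_selections": 2 },
  { "id": "q_team_size", "type": "number",
    "prompt": "How many people are on your team?",
    "min": 1, "max": 500, "step": 1 }
]
```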

All question types accept an optional `help: string` field (≤500 chars) that renders helper text below the prompt:
```json
{"id":"q","type":"short_text","prompt":"Your name?","help":"As it appears on your badge."}
```

(File upload — not yet supported.)

## Branching with `show_if`

```json
{ "id": "q_followup", "type": "short_text", "prompt": "Which?", "show_if": { "q_main": "other" } }
```

- Multiple keys → AND
- Array value → "matches any of"
- Reference must come earlier in the survey
- Reference to a hidden question hides this one too
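
Combining the first two rules (ids illustrative): both keys must match, and the array means `q_role` may be either value:

```json
{ "id": "q_oncall_pain", "type": "scale", "min": 0, "max": 10,
  "prompt": "How painful is on-call today?",
  "show_if": { "q_role": ["eng", "sre"], "q_region": "emea" } }
```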

## Multi-page surveys

```json
"pages": [["q1", "q2"], ["q3"], ["q4", "q5"]]
```

Each page has Next/Back. Pages where every question is hidden auto-skip.
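
A sketch tying pages to branching (ids illustrative): if the recipient picks `"no"` on page one, the only question on page two is hidden and the form jumps straight to page three:

```json
{
  "questions": [
    { "id": "q_uses_api", "type": "single_choice",
      "prompt": "Do you use our API?",
      "options": [ { "value": "yes", "label": "Yes" },
                   { "value": "no", "label": "No" } ] },
    { "id": "q_api_lang", "type": "short_text",
      "prompt": "Which client language?",
      "show_if": { "q_uses_api": "yes" } },
    { "id": "q_overall", "type": "scale", "min": 1, "max": 5,
      "prompt": "Overall satisfaction?" }
  ],
  "pages": [["q_uses_api"], ["q_api_lang"], ["q_overall"]]
}
```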

## Examples

See the `examples/` folder for four worked surveys:

- `stakeholder-context.json` — gathering project perspective from execs
- `client-onboarding.json` — new client intake with branching
- `team-retro.json` — anonymous sprint retro
- `tech-decision-rfc.json` — multi-stakeholder RFC with conditional reasoning

## Tools available

- `delvry_signup({ email, account_name })` — **Use ONCE per project**. Creates a fresh Delvry account and returns the API key. Store the returned `first_api_key` as `DELVRY_API_KEY` for subsequent tool calls. Rate-limited 5/hour per IP.
- `delvry_create_survey({ title, audience_mode, questions, recipients?, pages?, expires_at? })` — primary entrypoint.
- `delvry_ask({ question, recipients? })` — shorthand for one-question (`long_text`) surveys.
- `delvry_get_responses({ survey_id, since?, limit? })` — pull responses on demand.
- `delvry_await_responses({ survey_id, timeout_seconds, min_responses? })` — block until responses arrive (or timeout).
- `delvry_get_recipients({ survey_id })` — see who's answered.
- `delvry_close_survey({ survey_id })` — stop accepting new responses.
- `delvry_list_surveys({ status? })` — your account's surveys.
- `delvry_validate_survey({...survey...})` — dry-run survey design without persisting. Returns `{ok, errors?, normalized?}`.
- `delvry_summarize_responses({ survey_id })` — returns aggregated summary (distributions, stats, excerpts) for all responses.
- `delvry_delete_response({ survey_id, response_id })` — deletes a single response; the delete cascades to its `webhook_deliveries` records.

## Configuration

The MCP server reads `DELVRY_API_KEY` from env and optionally `DELVRY_ENDPOINT` (defaults to the hosted instance). For self-host: `DELVRY_ENDPOINT=https://delvry.your-org.workers.dev`.

## Iterative QA (threads — multi-round agentic Q&A)

Use **threads** instead of surveys when:
- The agent is working on a project that requires several rounds of human input.
- The agent has *hypotheses* it wants confirmed (assertions / assumptions) — much faster than open questions.
- One or many humans need to be in the loop, possibly added across rounds.
- The work spans days/weeks, and the agent process won't be alive when humans respond.

**Threads vs surveys at a glance:**

| | Survey | Thread |
|---|---|---|
| Rounds | One-shot | Many (iterations) |
| Item types | 9 question types | 4 item types: assertion, assumption, choice, question |
| Output | Raw responses | Knowledge Receipt (typed slots, provenance, explicit conflicts) |
| Multi-recipient | Same questions to all | Per-item targeting + multi-recipient aggregation |
| Persistence | One result set | Long-lived; carries `agent_context` for cold resume |

### The four item types

Pick the fastest type that fits — open `question`s have the slowest response rate.

- **`assertion`** — *State a belief, ask for confirmation.* "Deploys go through staging on Fridays only." The recipient taps Confirm / Deny / Edit. Use when you've inferred something from code and just need a sanity check.
- **`assumption`** — *State a working hypothesis with optional confidence.* "I'm assuming `UserService.authenticate()` is the only login entry point. (60% — saw 1 caller.)" Recipient confirms or corrects.
- **`choice`** — *Multiple choice.* "Which billing model are we using? per-seat / per-event / hybrid / don't know." Use when you've narrowed to N options.
- **`question`** — *Open-ended.* "Walk me through what happens when a refund is requested." Use only when you have no hypothesis.
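
All four item types in one `items` array. Slot ids and values are illustrative, and the `options` shape for `choice` items is an assumption (not specified above):

```json
[
  { "type": "assertion", "slot_id": "deploy.cadence",
    "slot_schema": { "type": "string" },
    "prompt": "Deploys go through staging on Fridays only.",
    "asserted_value": "Fridays only" },
  { "type": "assumption", "slot_id": "auth.entry_point",
    "slot_schema": { "type": "string" },
    "prompt": "I'm assuming UserService.authenticate() is the only login entry point. (60% — saw 1 caller.)" },
  { "type": "choice", "slot_id": "billing.model",
    "slot_schema": { "type": "string" },
    "prompt": "Which billing model are we using?",
    "options": ["per-seat", "per-event", "hybrid", "don't know"] },
  { "type": "question", "slot_id": "refund.process",
    "prompt": "Walk me through what happens when a refund is requested." }
]
```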

### The agent loop

```
delvry_thread_create({ project_label, agent_context })
  → thread_id

delvry_inquiry_post({
  thread_id,
  items: [
    { type: "assertion", slot_id: "deploy.cadence", slot_schema: { type: "string" },
      prompt: "Deploys go through staging on Fridays only.",
      asserted_value: "Fridays only", required: true },
    { type: "question", slot_id: "refund.process",
      prompt: "Walk me through what happens when a refund is requested.",
      required: true },
  ],
  recipients: [{ display_name: "Pat", contact_email: "pat@example.com" }],
  agent_brief: "Claude is debugging the billing system.",
  expires_at: "2026-05-30T00:00:00Z",
})
  → inquiry_id, links[]

# Hand links to the operator. Detach. Wait days.

# Next session:
delvry_thread_get({ thread_id })
  → { receipt: { slots: { "deploy.cadence": { status: "confirmed", value: "Fridays only", sources: [...] } } } }

# Decide: more inquiries (follow-ups / new slots), or delvry_thread_close.
```

### Slot schema evolution

Each `slot_id` is introduced once with a `slot_schema`. Subsequent inquiries that reference the same `slot_id` *omit* the schema (re-supplying a different one is rejected). The thread's full slot schema is the union over all inquiries — you can keep adding new slots in later iterations as you learn more about what to ask.
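
For instance (hypothetical slots, `options` shape assumed): a follow-up inquiry references the existing `deploy.cadence` slot bare, while the new `deploy.approver` slot carries its schema:

```json
{ "items": [
    { "type": "choice", "slot_id": "deploy.cadence",
      "prompt": "Tie-breaker: which is it?",
      "options": ["Fridays only", "Any weekday"] },
    { "type": "assumption", "slot_id": "deploy.approver",
      "slot_schema": { "type": "string" },
      "prompt": "I'm assuming releases need a team lead's sign-off." }
] }
```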

### Multi-recipient — conflicts are surfaced, not resolved

When two recipients give different answers for the same shared slot, the receipt entry's `status` becomes `"conflicting"` with all values listed. The agent decides:
- accept majority,
- ask a tie-breaker in the next inquiry,
- escalate to the operator.

Delvry never picks a winner.
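
What a conflicting entry might look like, with the shape extrapolated from the confirmed-slot example in the agent loop; the `values` field name is an assumption:

```json
{ "deploy.cadence": {
    "status": "conflicting",
    "values": [
      { "value": "Fridays only", "sources": ["Pat"] },
      { "value": "Any weekday", "sources": ["Lee"] }
    ] } }
```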

### `agent_context` — your save state

The thread carries an opaque ≤64 KB blob you own. Write it on every iteration with whatever the *next* session needs to resume cold:
- the original task,
- current hypotheses,
- file refs / external pointers,
- what's outstanding.

Read it back with `delvry_thread_get({ thread_id })`. Delvry never parses it.
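
One way to shape the blob, entirely up to the agent since Delvry treats it as opaque:

```json
{
  "task": "Debug intermittent double-charge in billing",
  "hypotheses": ["retry worker re-enqueues on 5xx",
                 "webhook dedupe key missing"],
  "refs": ["services/billing/worker.ts"],
  "outstanding": ["confirm deploy.cadence tie-breaker"]
}
```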

### When NOT to use threads

- You only need one round → use `delvry_create_survey`/`delvry_ask`.
- You need synchronous interactive dialogue with a human in the same session → operator can answer in-session; threads are async.
- The answer is in code or docs → read it directly.

## Survey JSON Schema

Drop this into `.claude/skills/delvry/survey.schema.json` next to SKILL.md to
get IDE autocomplete and validation when authoring surveys:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Delvry Survey",
  "type": "object",
  "required": [
    "version",
    "title",
    "audience_mode",
    "questions"
  ],
  "properties": {
    "version": {
      "const": "1"
    },
    "title": {
      "type": "string",
      "minLength": 1,
      "maxLength": 200
    },
    "description": {
      "type": "string",
      "maxLength": 2000
    },
    "audience_mode": {
      "enum": [
        "anonymous",
        "named_recipients"
      ]
    },
    "expires_at": {
      "type": "string",
      "format": "date-time"
    },
    "max_responses": {
      "type": [
        "integer",
        "null"
      ],
      "minimum": 1
    },
    "questions": {
      "type": "array",
      "minItems": 1,
      "maxItems": 50,
      "items": {
        "$ref": "#/$defs/Question"
      }
    },
    "pages": {
      "type": "array",
      "maxItems": 10,
      "items": {
        "type": "array",
        "items": {
          "type": "string"
        },
        "minItems": 1
      }
    },
    "recipients": {
      "type": "array",
      "items": {
        "$ref": "#/$defs/Recipient"
      }
    },
    "thank_you_message": {
      "type": "string",
      "maxLength": 2000
    },
    "theme": {
      "type": "object",
      "properties": {
        "logo_url": {
          "type": "string",
          "format": "uri"
        },
        "accent_color": {
          "type": "string",
          "pattern": "^#[0-9a-fA-F]{6}$"
        }
      }
    }
  },
  "$defs": {
    "Recipient": {
      "type": "object",
      "required": [
        "name"
      ],
      "properties": {
        "name": {
          "type": "string",
          "minLength": 1,
          "maxLength": 200
        },
        "external_id": {
          "type": "string",
          "maxLength": 200
        }
      }
    },
    "Question": {
      "type": "object",
      "required": [
        "id",
        "type",
        "prompt"
      ],
      "properties": {
        "id": {
          "type": "string",
          "pattern": "^[a-z0-9_]{1,64}$"
        },
        "type": {
          "enum": [
            "short_text",
            "long_text",
            "single_choice",
            "multi_choice",
            "number",
            "yes_no",
            "scale",
            "ranking",
            "date"
          ]
        },
        "prompt": {
          "type": "string",
          "minLength": 1,
          "maxLength": 2000
        },
        "help": {
          "type": "string",
          "maxLength": 500
        },
        "required": {
          "type": "boolean"
        },
        "show_if": {
          "type": "object",
          "additionalProperties": {
            "oneOf": [
              {
                "type": "string"
              },
              {
                "type": "number"
              },
              {
                "type": "array"
              }
            ]
          }
        },
        "options": {
          "type": "array",
          "items": {
            "type": "object",
            "required": [
              "value",
              "label"
            ],
            "properties": {
              "value": {
                "type": "string"
              },
              "label": {
                "type": "string"
              }
            }
          }
        },
        "min_length": {
          "type": "integer",
          "minimum": 0
        },
        "max_length": {
          "type": "integer",
          "minimum": 1
        },
        "min_selections": {
          "type": "integer",
          "minimum": 0
        },
        "max_selections": {
          "type": "integer",
          "minimum": 1
        },
        "min": {
          "oneOf": [
            {
              "type": "number"
            },
            {
              "type": "string",
              "pattern": "^\\d{4}-\\d{2}-\\d{2}$"
            }
          ]
        },
        "max": {
          "oneOf": [
            {
              "type": "number"
            },
            {
              "type": "string",
              "pattern": "^\\d{4}-\\d{2}-\\d{2}$"
            }
          ]
        },
        "step": {
          "type": "number",
          "exclusiveMinimum": 0
        },
        "min_label": {
          "type": "string",
          "maxLength": 50
        },
        "max_label": {
          "type": "string",
          "maxLength": 50
        },
        "max_rank": {
          "type": "integer",
          "minimum": 1,
          "maximum": 20
        }
      }
    }
  }
}
```
