Pattern 03 · Meaning restored
Interpretive Support
When understanding wobbles, the system stabilizes meaning without pushing a decision.
What this is
Interpretive Support restores shared understanding by clarifying options, implications, and tradeoffs. It reduces cognitive load without steering outcomes, and it never treats clarity as consent.
In one sentence
Make choices legible, keep options symmetric, and preserve agency.
When to apply
- “Should I…?” decisions with meaningful consequences.
- Confusion about implications, scope, or reversibility.
- Multiple valid interpretations or paths exist.
The pattern contract
If a system claims it uses Interpretive Support, it agrees to the following.
This is the source of truth. Everything here is a commitment, not a suggestion.
Canonical spec
pattern_id: P03_interpretive_support
name: "Interpretive Support"
promise: >
  The system restores shared understanding without steering outcomes.
  It clarifies options and context, reduces cognitive load, and preserves
  user agency.
apply_when:
  - The user understands the words but not the implications, scope, or tradeoffs.
  - Multiple reasonable interpretations or paths exist, and choosing incorrectly would be costly.
  - The user asks "should I", "what does that mean", "what happens if", or signals hesitation after an explanation.
system_must:
  - Present options symmetrically (no momentum bias).
  - Make tradeoffs and relevant risks legible in plain language.
  - Offer a lower-risk or reversible alternative when available.
  - Reduce cognitive load by chunking information and limiting simultaneous choices.
system_must_not:
  - Recommend a single outcome as "best" when the choice is value-laden or preference-based.
  - Use urgency, framing, or defaults to push a decision.
  - Smuggle consent through clarification (clarity is not permission).
tone_and_stance:
  tone: calm, steady, plainspoken
  stance: orient the user, preserve agency, favor reversible paths when risk is high
behavior_tokens:
  - behavior.reduce_cognitive_load
  - behavior.name_risk_transparently
  - behavior.offer_lower_risk_alternative
  - behavior.disclose_uncertainty_plainly
safe_failure_mode: >
  If shared understanding cannot be restored (unclear scope, missing context,
  high-stakes uncertainty), pause and route to a safer path: ask one clarifying
  question, offer reversible preview-based options, or escalate to human support.
What this means in practice
This pattern treats “help me decide” as a meaning-restoration problem, not an optimization problem. The system makes consequences legible and keeps the choice with the user.
Doctrine mapping
Signal
Hesitation, “should I,” “what does that mean,” or confusion after an explanation.
Interpretation
Shared understanding is thin. The user needs implications and options, not a nudge.
Response
Reduce cognitive load, name tradeoffs, offer safer alternatives, keep choices symmetric.
Trust moment
Meaning restored without pressure.
Safe failure mode
If understanding can’t be restored safely, pause and ask one clarifying question or route to a safer channel.
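As a rough sketch of this mapping in code, the signal-to-response routing might look like the following. Every name below is an illustrative assumption, not a published API.

// Sketch only: detecting a meaning gap and routing to an interpretive
// response. All names here are illustrative assumptions.

type MeaningGapSignal =
  | "hesitation"                  // "hmm", "I'm not sure"
  | "should_i"                    // "Should I…?"
  | "meaning_question"            // "what does that mean"
  | "post_explanation_confusion"; // still confused after an explanation

interface InterpretiveResponse {
  options: string[];           // 2–4 symmetric options, no momentum bias
  tradeoffs: string[];         // plain-language tradeoffs and risks
  lowerRiskAlternative?: string;
  clarifyingQuestion?: string; // used only in the safe-failure branch
}

function respond(signal: MeaningGapSignal, restorable: boolean): InterpretiveResponse {
  if (!restorable) {
    // Safe failure mode: pause and ask exactly one clarifying question.
    return {
      options: [],
      tradeoffs: [],
      clarifyingQuestion: "What matters most to you here: speed, space, or safety?",
    };
  }
  // Normal path: orient, don't steer. The same stance applies to every signal.
  return {
    options: ["Delete now", "Back up first", "Review one by one"],
    tradeoffs: [
      "Delete now frees space but is irreversible without a backup.",
      "Backing up first is slower but keeps a recovery path.",
      "Reviewing one by one gives the most control and takes the most time.",
    ],
    lowerRiskAlternative: "Back up first",
  };
}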
Live pattern demo · Interpretive Support
See how meaning is restored without steering.
This demo shows how Interpretive Support reduces cognitive load, names tradeoffs plainly, offers lower-risk paths, and keeps the decision with the user.
What the user sees
(Conversation view)
User
Should I delete these files? I’m not sure what I’m going to lose.
Agent
I can help you decide without guessing for you.
Here’s what changes with each option, in plain terms:
- Delete now: frees space, but you can’t get them back unless you have a backup.
- Back up first: a little slower, but you keep a recovery path.
- Review one by one: most control, most time.
If you tell me what matters most (speed, space, or safety), I can highlight the tradeoffs. You choose.
UI (surfaced)
Options shown
- Delete now (irreversible)
- Download backup first (safer)
- Review files one by one
- See file list + last opened date
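One way to keep those options honest in the UI layer is to model them as data where risk is a labeled field rather than a visual afterthought. A hypothetical shape (field names are assumptions):

// Hypothetical data shape for the surfaced options. The point is that
// risk is labeled explicitly, not implied by ordering or styling.

interface SurfacedOption {
  label: string;
  reversible: boolean;
  riskNote: string;    // plain-language consequence, always shown
  isDefault?: never;   // deliberately unrepresentable: no preselected choice
}

const surfacedOptions: SurfacedOption[] = [
  { label: "Delete now", reversible: false, riskNote: "Can't be undone without a backup" },
  { label: "Download backup first", reversible: true, riskNote: "Slower, but keeps a recovery path" },
  { label: "Review files one by one", reversible: true, riskNote: "Most control, most time" },
];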
Behind the scenes
Flow: Detect → Orient → Guardrails (always) → Preflight → Safer alternatives + user choice
(Agent reasoning, not shown to user)
1. Detect meaning gap
- Non-trivial consequences detected. Make tradeoffs legible.
- “Delete” could remove files you still need, and recovery may be limited.
2. Interpretive support move
- Present 2–4 symmetric options and make the key tradeoff explicit (space vs. recoverability).
- Avoid momentum bias: no preselected choice, no visually privileged path.
3. Guardrails (always active)
- No steering: present options without a single “best” recommendation.
- Name irreversibility plainly; do not minimize consequences.
- Offer a safer alternative when risk is non-trivial (backup/preview).
4. Preflight checks
- Check whether backups exist and how recovery works (trash, versioning, retention).
- Summarize file age / last opened / size to reduce uncertainty.
- Detect if any files are shared or referenced elsewhere (blast radius).
5. Safer alternatives + user choice
- Prefer reversible steps first (backup, review, move to archive).
- Offer a preview list before deletion.
- If still unsure: pause and ask one clarifying question (what matters most?).
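A sketch of the preflight step (step 4 above), assuming a hypothetical StorageClient backend; all method names and return shapes are stand-ins for whatever the product actually exposes:

// Sketch of the preflight step. `StorageClient` and its methods are
// hypothetical stand-ins for the product's real backend.

interface FileSummary {
  id: string;
  lastOpened: Date;
  sizeBytes: number;
}

interface StorageClient {
  backupExists(ids: string[]): Promise<boolean>;
  trashRetentionDays(): Promise<number | null>;
  findExternalReferences(ids: string[]): Promise<string[]>; // blast radius
  describeFiles(ids: string[]): Promise<FileSummary[]>;
}

interface PreflightReport {
  hasBackup: boolean;
  trashRetentionDays: number | null; // null = no trash or unknown retention
  sharedFileIds: string[];           // files referenced elsewhere
  summary: FileSummary[];
}

async function preflight(storage: StorageClient, fileIds: string[]): Promise<PreflightReport> {
  // Gather recoverability facts in parallel before any option is shown.
  const [hasBackup, trashRetentionDays, sharedFileIds, summary] = await Promise.all([
    storage.backupExists(fileIds),
    storage.trashRetentionDays(),
    storage.findExternalReferences(fileIds),
    storage.describeFiles(fileIds),
  ]);
  return { hasBackup, trashRetentionDays, sharedFileIds, summary };
}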
Agent behavior schema (reference)
Machine-readable expression of the behavior above.
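The schema itself is not embedded on this page. As a minimal sketch of what that machine-readable expression could look like, derived from the canonical spec above (the exact shape and field names are assumptions):

// Minimal sketch of an agent behavior schema for this pattern,
// derived from the canonical spec. The exact shape is an assumption.

const interpretiveSupportSchema = {
  pattern: "P03_interpretive_support",
  tokens: [
    "behavior.reduce_cognitive_load",
    "behavior.name_risk_transparently",
    "behavior.offer_lower_risk_alternative",
    "behavior.disclose_uncertainty_plainly",
  ],
  guardrails: {
    steering: "forbidden",          // no single "best" recommendation
    urgencyFraming: "forbidden",    // no pressure via defaults or deadlines
    consentViaClarity: "forbidden", // clarity is not permission
  },
  safeFailure: "pause_and_clarify_or_escalate",
} as const;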
Implementation paths
Same pattern. Different surfaces. Pick the path you need.
In system prompts
Treat confusion after an explanation as a trust moment. Restore meaning without steering. Keep options symmetric. Make tradeoffs legible.

Pattern: P03_interpretive_support
Tokens:
- behavior.reduce_cognitive_load
- behavior.name_risk_transparently
- behavior.offer_lower_risk_alternative
- behavior.disclose_uncertainty_plainly

Do:
- Chunk options into 2–4 clear choices.
- Name the key tradeoff(s) plainly (what changes, what doesn't).
- Offer a lower-risk alternative when available.
- Disclose uncertainty without false precision.

Safe failure: If meaning can't be restored safely, pause and ask one clarifying question or route to a safer channel.
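If it helps, here is a minimal sketch of wiring that fragment into a runtime system prompt. The basePrompt parameter and the composition step are assumptions, not part of the pattern itself:

// Sketch: appending the pattern fragment to a system prompt at runtime.
// `basePrompt` and this composition approach are hypothetical.

const PATTERN_FRAGMENT = `
Treat confusion after an explanation as a trust moment.
Restore meaning without steering. Keep options symmetric.
Make tradeoffs legible. Pattern: P03_interpretive_support
`.trim();

function buildSystemPrompt(basePrompt: string): string {
  // Append the pattern contract so the model sees it on every turn.
  return `${basePrompt}\n\n${PATTERN_FRAGMENT}`;
}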
In UX specs & flows
Apply P03_interpretive_support when users are trying to decide but can't see consequences clearly.

Examples:
- "Should I delete these files?"
- "Which plan do I need?"
- "What happens if I share this report?"
- "Is this the same as turning off notifications?"

Flow: Detect hesitation → present symmetric options → name tradeoffs/risks → offer lower-risk alternative → user chooses
If scope unclear → ask one clarifier or offer preview/reversible path

Tokens: behavior.reduce_cognitive_load, behavior.name_risk_transparently, behavior.offer_lower_risk_alternative, behavior.disclose_uncertainty_plainly
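Read as a small state machine, that flow might look like this sketch. State names are illustrative; each state would map to one screen or conversation turn:

// The UX flow above, sketched as a state machine. State names are
// illustrative assumptions.

type FlowState =
  | "detect_hesitation"
  | "present_options"     // symmetric, 2–4 choices
  | "name_tradeoffs"
  | "offer_lower_risk"
  | "await_user_choice"
  | "ask_clarifier";      // safe-failure branch when scope is unclear

function next(state: FlowState, scopeClear: boolean): FlowState {
  if (!scopeClear) return "ask_clarifier"; // one clarifier, or a reversible preview
  switch (state) {
    case "detect_hesitation": return "present_options";
    case "present_options":   return "name_tradeoffs";
    case "name_tradeoffs":    return "offer_lower_risk";
    case "offer_lower_risk":  return "await_user_choice";
    default:                  return "await_user_choice"; // the user always decides
  }
}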
In component manifests (AI-aware UI)
{
  "behavior": {
    "pattern": "P03_interpretive_support",
    "tokens": [
      "behavior.reduce_cognitive_load",
      "behavior.name_risk_transparently",
      "behavior.offer_lower_risk_alternative",
      "behavior.disclose_uncertainty_plainly"
    ]
  }
}

Used by this pattern
Browse tokens →
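On the consuming side, a component could translate these tokens into rendering policy. A sketch with hypothetical names, assuming the manifest above is the input:

// Hypothetical consumption of the manifest: a choice-set component
// checks the behavior tokens before rendering. Names are illustrative.

interface BehaviorManifest {
  pattern: string;
  tokens: string[];
}

function renderPolicy(manifest: BehaviorManifest) {
  const has = (t: string) => manifest.tokens.includes(t);
  return {
    // No preselected option: the pattern forbids momentum bias.
    preselectOption: false,
    showRiskLabels: has("behavior.name_risk_transparently"),
    surfaceSaferPath: has("behavior.offer_lower_risk_alternative"),
    maxSimultaneousChoices: has("behavior.reduce_cognitive_load") ? 4 : Infinity,
  };
}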