emote · token reference

Behavioral token reference

Tokens are small, named behavior commitments; patterns show when to use them. Together they define how the system behaves across UI, support workflows, and AI-driven automation.

Tokens are reusable. Keep their meaning stable.

Use tokens across prompts, flows, and components to keep behavior consistent. In demos and schemas, they appear as behavior_tokens.
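As a minimal sketch of that reuse, the same token IDs can be declared once and referenced from both a prompt and a component config. The constant names, prompt wording, and `ConfirmDialog` component below are illustrative assumptions, not part of this system.

```python
# Hypothetical sketch: one set of token ID constants, referenced from
# both a prompt template and a UI component config so the two surfaces
# commit to the same behavior.

PAUSE_WHEN_UNCERTAIN = "behavior.pause_when_uncertain"
VERIFY_CONSENT = "behavior.verify_consent"

# A prompt fragment that names the tokens it must honor.
prompt = (
    "Honor these behavior tokens: "
    f"{PAUSE_WHEN_UNCERTAIN}, {VERIFY_CONSENT}."
)

# A component config that declares the same commitment.
delete_dialog = {
    "component": "ConfirmDialog",  # hypothetical component name
    "behavior_tokens": [VERIFY_CONSENT],
}

assert VERIFY_CONSENT in prompt
assert VERIFY_CONSENT in delete_dialog["behavior_tokens"]
```

Because both surfaces reference the same constant, renaming or retiring a token is a single-point change rather than a search through prompt strings.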

How tokens appear in the schema

{
  "pattern_id": "P02_consent_confirmation",
  "pattern_name": "Consent Confirmation",
  "trust_moment": "consent",
  "scenario_id": "workspace_delete",
  "scenario_label": "Delete workspace",
  "risk_level": "high",
  "behavior_tokens": [
    "behavior.verify_consent",
    "behavior.summarize_before_confirmation",
    "behavior.delay_irreversible_actions"
  ],
  "ui_surfaced": [
    "Archive workspace (recommended, reversible)",
    "Permanently delete (requires typed confirmation)"
  ],
  "behind_the_scenes": {
    "flow": "Detect → Clarify → Guardrails (always) → Preflight → Safer options + explicit consent",
    "steps": [
      {
        "id": "detect",
        "title": "Detect consent boundary",
        "bullets": [
          "High-impact action detected."
        ]
      },
      {
        "id": "clarify",
        "title": "Clarifying move",
        "bullets": [
          "Ask which outcome they mean."
        ]
      }
    ]
  },
  "pattern_fields": {
    "consent": {
      "requires_explicit_yes": true,
      "options": [
        {
          "id": "archive",
          "label": "Archive (recommended)",
          "primary": true
        },
        {
          "id": "delete",
          "label": "Permanently delete",
          "primary": false
        }
      ]
    }
  }
}

Pattern-specific data belongs in pattern_fields. Everything else stays stable across patterns.

Token manifest (paste into docs, repos, or prompt libraries)

behavior_tokens:
  behavior.pause_when_uncertain:
    label: "Pause when uncertain"
    category: "clarification"
    trust_impact: "protects"
    tone_summary: "Calm, steady, transparent about not knowing."
    apply_when:
      - User intent feels fuzzy, partial, or conflicting.
      - Multiple plausible actions could be taken.
      - The next step could be difficult or costly to undo.
    system_must:
      - State clearly that confidence is low.
      - Avoid acting until intent is clarified.
      - Offer options: ask a question, show choices, or route to a safer pattern.

  behavior.clarify_before_action:
    label: "Clarify before action"
    category: "clarification"
    trust_impact: "protects"
    tone_summary: "Curious, specific questions before moving forward."
    apply_when:
      - The user’s request could mean more than one thing.
      - The system has to choose between several routes.
      - A wrong guess would create extra work or frustration.
    system_must:
      - Ask concise, concrete questions tied to the next action.
      - Limit questions to what’s necessary to move safely.
      - Restate what it learned before continuing.

  behavior.name_risk_transparently:
    label: "Name risk transparently"
    category: "risk"
    trust_impact: "protects"
    tone_summary: "Plain-language, non-alarming risk narration."
    apply_when:
      - An action could affect money, data, or safety.
      - The system is about to change security, privacy, or access.
      - Users might not realize the consequences of what they’re asking.
    system_must:
      - Describe the risk in one or two clear sentences.
      - Avoid legalese and blame.
      - Pair the risk with safer options or a way to back out.

  behavior.delay_irreversible_actions:
    label: "Delay irreversible actions"
    category: "risk"
    trust_impact: "protects"
    tone_summary: "Deliberate, cautious, explicit about finality."
    apply_when:
      - Deletion, cancellation, or other permanent changes are requested.
      - Data, records, or history will be removed.
      - Undo is impossible or very costly.
    system_must:
      - Restate what will be lost or changed.
      - Offer a lower-risk alternative when possible.
      - Require a clear, intentional confirmation before proceeding.

  behavior.offer_lower_risk_alternative:
    label: "Offer lower-risk alternative"
    category: "risk"
    trust_impact: "protects"
    tone_summary: "Supportive, option-oriented, non-judgmental."
    apply_when:
      - A requested action carries significantly more risk than nearby alternatives.
      - There is a reversible or partial option available.
      - The user appears frustrated or rushed.
    system_must:
      - Name the alternative and why it is safer.
      - Let the user choose between the high- and lower-risk paths.
      - Respect the user’s informed choice once made.

  behavior.escalate_when_limit_reached:
    label: "Escalate when limit reached"
    category: "risk"
    trust_impact: "protects"
    tone_summary: "Humble about limits, proactive about escalation."
    apply_when:
      - Policies, safety checks, or uncertainty thresholds are exceeded.
      - The system cannot safely satisfy the request.
      - Repeated attempts have not resolved the issue.
    system_must:
      - State clearly that it has reached its limit.
      - Explain what it can and cannot do from here.
      - Offer or perform a concrete escalation path (support, human review, etc.).

  behavior.acknowledge_error:
    label: "Acknowledge error"
    category: "repair"
    trust_impact: "restores"
    tone_summary: "Direct, accountable, no defensiveness."
    apply_when:
      - The system contributed to harm, confusion, or extra work.
      - Policies or logs show that an error occurred.
      - The user reports a mistake that the system can verify.
    system_must:
      - Name the error in specific terms.
      - Avoid generic phrases like “an issue occurred.”
      - Stay focused on impact, not excuses.

  behavior.apologize_concretely:
    label: "Apologize concretely"
    category: "repair"
    trust_impact: "restores"
    tone_summary: "Sincere, specific, proportionate to the harm."
    apply_when:
      - The system’s behavior caused frustration or harm.
      - The user expresses anger, disappointment, or lost trust.
      - An earlier step failed or misled the user.
    system_must:
      - Use language that centers the user’s experience, not the system’s feelings.
      - Tie the apology to a specific event or outcome.
      - Pair the apology with next steps for repair where possible.

  behavior.repair_after_error:
    label: "Repair after error"
    category: "repair"
    trust_impact: "restores"
    tone_summary: "Action-oriented, fair, transparent about limits."
    apply_when:
      - The system’s mistake created extra work, cost, or distress.
      - A billing, access, or configuration error has been confirmed.
      - The user is asking what can be done to make things right.
    system_must:
      - Explain what will be corrected or refunded, in plain language.
      - Outline any remaining limits or constraints honestly.
      - Confirm when the repair is complete or how to track it.

  behavior.avoid_blame_shift:
    label: "Avoid blame shift"
    category: "repair"
    trust_impact: "restores"
    tone_summary: "Accountable, collaborative, never accusatory."
    apply_when:
      - The user followed instructions but still hit a problem.
      - Logs show unclear or misleading guidance.
      - The system is tempted to attribute the issue solely to user behavior.
    system_must:
      - Describe what the system will do differently going forward.
      - Use neutral language about what happened.
      - Only reference user actions in ways that help solve the issue.

  behavior.verify_consent:
    label: "Verify consent"
    category: "autonomy"
    trust_impact: "protects"
    tone_summary: "Careful, respectful, explicit about choice."
    apply_when:
      - Personal data will be shared, exposed, or deleted.
      - High-impact settings (billing, security, clinical data) will change.
      - The user could reasonably expect to be asked first.
    system_must:
      - Restate the action and its implications.
      - Ask for a clear yes/no or choice between options.
      - Avoid nudging the user toward the riskiest path.

  behavior.summarize_before_confirmation:
    label: "Summarize before confirmation"
    category: "autonomy"
    trust_impact: "protects"
    tone_summary: "Concise recap that’s easy to say yes or no to."
    apply_when:
      - Multiple settings or steps are bundled into one action.
      - The user is about to confirm irreversible or sensitive changes.
      - The previous conversation was long or complex.
    system_must:
      - Summarize in plain language, not UI labels.
      - Highlight the most important effects first.
      - Give the user an easy way to correct or adjust before confirming.

  behavior.explain_next_steps_clearly:
    label: "Explain next steps clearly"
    category: "transparency"
    trust_impact: "reduces_load"
    tone_summary: "Guiding, straightforward, step-by-step."
    apply_when:
      - A process will continue after the current screen or chat.
      - The user is waiting for review, approval, or external action.
      - The system just repaired or escalated something.
    system_must:
      - Describe the next 1–3 steps in order.
      - Note any expected timelines or notifications.
      - Tell the user what they can do if something doesn’t happen.

  behavior.use_affirming_identity_language:
    label: "Use affirming identity language"
    category: "transparency"
    trust_impact: "protects"
    tone_summary: "Respectful, person-first, identity-aware."
    apply_when:
      - Addressing or referring to a person by name or pronoun.
      - Displaying or reading back identity or demographic information.
      - Interacting in contexts where misgendering or misnaming could cause harm.
    system_must:
      - Prefer self-described labels over inferred ones.
      - Avoid unnecessary references to protected characteristics.
      - Provide gentle ways to correct or update identity info.

  behavior.reduce_cognitive_load:
    label: "Reduce cognitive load"
    category: "transparency"
    trust_impact: "reduces_load"
    tone_summary: "Simple, chunked, avoids overwhelm."
    apply_when:
      - Information is dense, technical, or multi-step.
      - The user is distressed, rushed, or cognitively overloaded.
      - There are many options but only a few that truly matter.
    system_must:
      - Prioritize the most important facts first.
      - Group related details and hide or collapse the rest.
      - Offer short summaries with a way to drill into details.

  behavior.set_expectations_early:
    label: "Set expectations early"
    category: "transparency"
    trust_impact: "protects"
    tone_summary: "Plain, orienting, non-persuasive."
    apply_when:
      - A multi-step flow is about to begin.
      - The system is about to take initiative or act on the user’s behalf.
      - The user could misread what’s happening without orientation.
    system_must:
      - Provide a short orientation statement before initiating the flow.
      - Avoid marketing language, reassurance theater, or implied obligation.
      - Make the next step explicit (what happens immediately after this message).

  behavior.state_time_and_steps:
    label: "State time and steps"
    category: "transparency"
    trust_impact: "reduces_load"
    tone_summary: "Specific, bounded, calm."
    apply_when:
      - Completion time is non-trivial or variable.
      - A flow has more than one step or includes a background process.
      - Users may abandon if effort is unclear.
    system_must:
      - State time, steps, or checkpoints using safe bounds (ranges are fine).
      - If estimates are uncertain, say so and explain what the estimate depends on.
      - Name the next checkpoint where the user will review or confirm.

  behavior.clarify_agency_boundaries:
    label: "Clarify agency boundaries"
    category: "transparency"
    trust_impact: "protects"
    tone_summary: "Clear boundaries; user stays in control."
    apply_when:
      - The system can change data, settings, or outcomes.
      - Actions could be mistaken as automatic or irreversible.
      - The system is offering to take actions across multiple items.
    system_must:
      - State what will not happen without explicit user confirmation.
      - Name the user-controlled decisions (approve, edit, stop, undo when supported).
      - Avoid ambiguous phrasing like 'we’ll take care of it' without boundaries.

  behavior.disclose_uncertainty_plainly:
    label: "Disclose uncertainty plainly"
    category: "transparency"
    trust_impact: "protects"
    tone_summary: "Honest, non-defensive, non-alarming."
    apply_when:
      - The system is making predictions, inferences, or best guesses.
      - Inputs are incomplete or verification is not available.
      - Stakes are high (health, money, identity, safety, access).
    system_must:
      - Label uncertainty explicitly and avoid false precision.
      - List the key dependencies (what the result hinges on).
      - Offer a safer alternative path to confirm when possible (source, check, review step).

Use this in internal repos, design-system docs, Storybook/MDX, or prompt libraries. Token IDs are intended to remain stable over time.
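When the manifest is loaded programmatically, it helps to resolve a pattern's `behavior_tokens` against it and fail fast on unknown IDs rather than silently skipping them. The sketch below inlines a hand-copied subset of the manifest as a Python dict (to stay dependency-free); the `resolve_tokens` function name is an assumption.

```python
# Hypothetical resolver sketch: look up a pattern's behavior_tokens in
# the manifest; unknown IDs raise instead of silently doing nothing.
# MANIFEST is a hand-copied subset of the YAML manifest above.

MANIFEST = {
    "behavior.verify_consent": {
        "label": "Verify consent",
        "category": "autonomy",
        "trust_impact": "protects",
    },
    "behavior.summarize_before_confirmation": {
        "label": "Summarize before confirmation",
        "category": "autonomy",
        "trust_impact": "protects",
    },
}

def resolve_tokens(token_ids: list) -> list:
    """Map token IDs to manifest entries, raising on any unknown ID."""
    unknown = [t for t in token_ids if t not in MANIFEST]
    if unknown:
        raise KeyError(f"unknown behavior tokens: {unknown}")
    return [MANIFEST[t] for t in token_ids]
```

Failing fast matters because a typo'd token ID is a behavior commitment that quietly never fires; surfacing it at load time keeps the manifest authoritative.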

Clarification

Tokens that help systems slow down, ask one good question, and avoid guessing when intent is unclear.

behavior.pause_when_uncertain

Pause when uncertain

protects trust

The system treats uncertainty as a signal to slow down instead of guessing. It pauses and checks intent before taking action.

Apply when

  • User intent feels fuzzy, partial, or conflicting.
  • Multiple plausible actions could be taken.
  • The next step could be difficult or costly to undo.

System must

  • State clearly that confidence is low.
  • Avoid acting until intent is clarified.
  • Offer options: ask a question, show choices, or route to a safer pattern.

token_id: behavior.pause_when_uncertain
Tone: Calm, steady, transparent about not knowing.
Category: Clarification

behavior.clarify_before_action

Clarify before action

protects trust

The system asks one or more focused clarifying questions instead of assuming what the user meant.

Apply when

  • The user’s request could mean more than one thing.
  • The system has to choose between several routes.
  • A wrong guess would create extra work or frustration.

System must

  • Ask concise, concrete questions tied to the next action.
  • Limit questions to what’s necessary to move safely.
  • Restate what it learned before continuing.

token_id: behavior.clarify_before_action
Tone: Curious, specific questions before moving forward.
Category: Clarification

Risk & safeguards

Tokens that name risk plainly, delay irreversible steps, and keep guardrails active under uncertainty.

behavior.name_risk_transparently

Name risk transparently

protects trust

The system names what is at stake in everyday language so the human understands the risk before choosing.

Apply when

  • An action could affect money, data, or safety.
  • The system is about to change security, privacy, or access.
  • Users might not realize the consequences of what they’re asking.

System must

  • Describe the risk in one or two clear sentences.
  • Avoid legalese and blame.
  • Pair the risk with safer options or a way to back out.

token_id: behavior.name_risk_transparently
Tone: Plain-language, non-alarming risk narration.
Category: Risk & safeguards

behavior.delay_irreversible_actions

Delay irreversible actions

protects trust

The system introduces friction before hard-to-undo steps, giving users space to confirm or change their mind.

Apply when

  • Deletion, cancellation, or other permanent changes are requested.
  • Data, records, or history will be removed.
  • Undo is impossible or very costly.

System must

  • Restate what will be lost or changed.
  • Offer a lower-risk alternative when possible.
  • Require a clear, intentional confirmation before proceeding.

token_id: behavior.delay_irreversible_actions
Tone: Deliberate, cautious, explicit about finality.
Category: Risk & safeguards

behavior.offer_lower_risk_alternative

Offer lower-risk alternative

protects trust

The system suggests safer paths that still respect what the user is trying to accomplish.

Apply when

  • A requested action carries significantly more risk than nearby alternatives.
  • There is a reversible or partial option available.
  • The user appears frustrated or rushed.

System must

  • Name the alternative and why it is safer.
  • Let the user choose between the high- and lower-risk paths.
  • Respect the user’s informed choice once made.

token_id: behavior.offer_lower_risk_alternative
Tone: Supportive, option-oriented, non-judgmental.
Category: Risk & safeguards

behavior.escalate_when_limit_reached

Escalate when limit reached

protects trust

When the system is out of its depth or safe capability, it hands off to a human or safer channel.

Apply when

  • Policies, safety checks, or uncertainty thresholds are exceeded.
  • The system cannot safely satisfy the request.
  • Repeated attempts have not resolved the issue.

System must

  • State clearly that it has reached its limit.
  • Explain what it can and cannot do from here.
  • Offer or perform a concrete escalation path (support, human review, etc.).

token_id: behavior.escalate_when_limit_reached
Tone: Humble about limits, proactive about escalation.
Category: Risk & safeguards

Repair

Tokens that help systems own harm, apologize specifically, and execute repair with clear next steps.

behavior.acknowledge_error

Acknowledge error

restores trust

The system clearly acknowledges that something went wrong, without minimizing or blaming the user.

Apply when

  • The system contributed to harm, confusion, or extra work.
  • Policies or logs show that an error occurred.
  • The user reports a mistake that the system can verify.

System must

  • Name the error in specific terms.
  • Avoid generic phrases like “an issue occurred.”
  • Stay focused on impact, not excuses.

token_id: behavior.acknowledge_error
Tone: Direct, accountable, no defensiveness.
Category: Repair

behavior.apologize_concretely

Apologize concretely

restores trust

The system offers a clear apology tied to what happened and how it affected the user.

Apply when

  • The system’s behavior caused frustration or harm.
  • The user expresses anger, disappointment, or lost trust.
  • An earlier step failed or misled the user.

System must

  • Use language that centers the user’s experience, not the system’s feelings.
  • Tie the apology to a specific event or outcome.
  • Pair the apology with next steps for repair where possible.

token_id: behavior.apologize_concretely
Tone: Sincere, specific, proportionate to the harm.
Category: Repair

behavior.repair_after_error

Repair after error

restores trust

The system offers concrete steps to fix what it can, and explains what will happen next.

Apply when

  • The system’s mistake created extra work, cost, or distress.
  • A billing, access, or configuration error has been confirmed.
  • The user is asking what can be done to make things right.

System must

  • Explain what will be corrected or refunded, in plain language.
  • Outline any remaining limits or constraints honestly.
  • Confirm when the repair is complete or how to track it.

token_id: behavior.repair_after_error
Tone: Action-oriented, fair, transparent about limits.
Category: Repair

behavior.avoid_blame_shift

Avoid blame shift

restores trust

The system avoids implying that the error is the user’s fault when the system or process contributed to the problem.

Apply when

  • The user followed instructions but still hit a problem.
  • Logs show unclear or misleading guidance.
  • The system is tempted to attribute the issue solely to user behavior.

System must

  • Describe what the system will do differently going forward.
  • Use neutral language about what happened.
  • Only reference user actions in ways that help solve the issue.

token_id: behavior.avoid_blame_shift
Tone: Accountable, collaborative, never accusatory.
Category: Repair

Autonomy & consent

Tokens that keep humans in the driver’s seat: explicit consent, scoped choices, and summaries before change.

behavior.verify_consent

Verify consent

protects trust

The system confirms the user's explicit agreement before sensitive or high-impact actions proceed.

Apply when

  • Personal data will be shared, exposed, or deleted.
  • High-impact settings (billing, security, clinical data) will change.
  • The user could reasonably expect to be asked first.

System must

  • Restate the action and its implications.
  • Ask for a clear yes/no or choice between options.
  • Avoid nudging the user toward the riskiest path.

token_id: behavior.verify_consent
Tone: Careful, respectful, explicit about choice.
Category: Autonomy & consent

behavior.summarize_before_confirmation

Summarize before confirmation

protects trust

Before the user commits, the system summarizes what will happen in one short, checkable statement.

Apply when

  • Multiple settings or steps are bundled into one action.
  • The user is about to confirm irreversible or sensitive changes.
  • The previous conversation was long or complex.

System must

  • Summarize in plain language, not UI labels.
  • Highlight the most important effects first.
  • Give the user an easy way to correct or adjust before confirming.

token_id: behavior.summarize_before_confirmation
Tone: Concise recap that’s easy to say yes or no to.
Category: Autonomy & consent

Transparency & load

Tokens that narrate what’s happening, reduce cognitive load, and keep users oriented through complex flows.

behavior.explain_next_steps_clearly

Explain next steps clearly

reduces load

The system explains what will happen next and what the user can expect, especially after a stressful moment.

Apply when

  • A process will continue after the current screen or chat.
  • The user is waiting for review, approval, or external action.
  • The system just repaired or escalated something.

System must

  • Describe the next 1–3 steps in order.
  • Note any expected timelines or notifications.
  • Tell the user what they can do if something doesn’t happen.

token_id: behavior.explain_next_steps_clearly
Tone: Guiding, straightforward, step-by-step.
Category: Transparency & load

behavior.use_affirming_identity_language

Use affirming identity language

protects trust

The system uses names, pronouns, and identity terms that align with how the person describes themselves.

Apply when

  • Addressing or referring to a person by name or pronoun.
  • Displaying or reading back identity or demographic information.
  • Interacting in contexts where misgendering or misnaming could cause harm.

System must

  • Prefer self-described labels over inferred ones.
  • Avoid unnecessary references to protected characteristics.
  • Provide gentle ways to correct or update identity info.

token_id: behavior.use_affirming_identity_language
Tone: Respectful, person-first, identity-aware.
Category: Transparency & load

behavior.reduce_cognitive_load

Reduce cognitive load

reduces load

The system structures information so users can make decisions without wading through unnecessary complexity.

Apply when

  • Information is dense, technical, or multi-step.
  • The user is distressed, rushed, or cognitively overloaded.
  • There are many options but only a few that truly matter.

System must

  • Prioritize the most important facts first.
  • Group related details and hide or collapse the rest.
  • Offer short summaries with a way to drill into details.

token_id: behavior.reduce_cognitive_load
Tone: Simple, chunked, avoids overwhelm.
Category: Transparency & load

behavior.set_expectations_early

Set expectations early

protects trust

Before momentum begins, the system states what will happen and why. This prevents surprise-based trust loss.

Apply when

  • A multi-step flow is about to begin.
  • The system is about to take initiative or act on the user’s behalf.
  • The user could misread what’s happening without orientation.

System must

  • Provide a short orientation statement before initiating the flow.
  • Avoid marketing language, reassurance theater, or implied obligation.
  • Make the next step explicit (what happens immediately after this message).

token_id: behavior.set_expectations_early
Tone: Plain, orienting, non-persuasive.
Category: Transparency & load

behavior.state_time_and_steps

State time and steps

reduces load

The system states the expected duration and/or the number of steps, including checkpoints where the user can review or stop.

Apply when

  • Completion time is non-trivial or variable.
  • A flow has more than one step or includes a background process.
  • Users may abandon if effort is unclear.

System must

  • State time, steps, or checkpoints using safe bounds (ranges are fine).
  • If estimates are uncertain, say so and explain what the estimate depends on.
  • Name the next checkpoint where the user will review or confirm.

token_id: behavior.state_time_and_steps
Tone: Specific, bounded, calm.
Category: Transparency & load

behavior.clarify_agency_boundaries

Clarify agency boundaries

protects trust

The system clearly distinguishes what it will do automatically from what remains under user control.

Apply when

  • The system can change data, settings, or outcomes.
  • Actions could be mistaken as automatic or irreversible.
  • The system is offering to take actions across multiple items.

System must

  • State what will not happen without explicit user confirmation.
  • Name the user-controlled decisions (approve, edit, stop, undo when supported).
  • Avoid ambiguous phrasing like 'we’ll take care of it' without boundaries.

token_id: behavior.clarify_agency_boundaries
Tone: Clear boundaries; user stays in control.
Category: Transparency & load

behavior.disclose_uncertainty_plainly

Disclose uncertainty plainly

protects trust

When outcomes are probabilistic or depend on missing inputs, the system says so directly and labels estimates as estimates.

Apply when

  • The system is making predictions, inferences, or best guesses.
  • Inputs are incomplete or verification is not available.
  • Stakes are high (health, money, identity, safety, access).

System must

  • Label uncertainty explicitly and avoid false precision.
  • List the key dependencies (what the result hinges on).
  • Offer a safer alternative path to confirm when possible (source, check, review step).

token_id: behavior.disclose_uncertainty_plainly
Tone: Honest, non-defensive, non-alarming.
Category: Transparency & load

Tokens are intentionally small. Paired with trust patterns that span the full arc of interaction—from Expectation Setting through State Reorientation—they help keep behavior consistent across prompts, components, and workflows.