Platform Controls vs. Behavioral Obligations: What Anthropic and OpenAI Specify
Why choosing a Claude or ChatGPT backend improves capability but does not define the behavioral obligations required at trust-sensitive interaction moments.
Core finding
Platform documentation excels at prompt mechanics, output shaping, and model safety constraints. It does not provide auditable, moment-specific behavioral contracts for when trust is most fragile.
The practical gap is not model intelligence. It is the absence of required commitments for how systems must behave before action, under ambiguity, before consequential moves, and after failure.
What the docs cover today
Anthropic documentation
Strong coverage of prompting tactics and guardrails (clarity, examples, role prompting, handling uncertainty). These are implementation techniques, not binding trust-moment contracts.
OpenAI Model Spec + API docs
More explicit behavioral guidance, including clarifying questions and caution around side effects. Even so, the trust-critical guidance remains contextual rather than a strict, auditable obligation.
Trust-moment mapping (summarized)
| Moment | Emote contract | Platform documentation |
|---|---|---|
| P01 — Expectation Setting | Before acting, the system states intent, duration, and scope. | Not specified as a required pre-action behavior contract. |
| P02 — Ambiguity Detection | When intent is unclear, pause, ask at least one clarifier, and delay irreversible actions. | Partially present as guidance; not a binding obligation. |
| P03 — Interpretive Support | Clarify options symmetrically without steering outcomes. | No explicit contract distinguishing support from momentum bias. |
| P04 — Consent Confirmation | Before consequential action, restate scope and verify permission. | No standardized consent checkpoint pattern. |
| P05 — Repair & Apology | After harm/confusion, acknowledge impact, apologize, and define repair steps. | No required post-error behavioral repair structure. |
| P06 — State Reorientation | After disruption, re-anchor context and offer a clear re-entry path. | No defined reorientation pattern after interruption or failure. |
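The contracts in the table above could be encoded as machine-checkable structures. A minimal sketch follows; every name here (the `Moment` enum, the `Contract` dataclass, the sample obligations) is an illustrative assumption, not part of either platform's API or of Emote's published schema:

```python
from dataclasses import dataclass
from enum import Enum

class Moment(Enum):
    # IDs mirror the trust-moment table above.
    EXPECTATION_SETTING = "P01"
    AMBIGUITY_DETECTION = "P02"
    INTERPRETIVE_SUPPORT = "P03"
    CONSENT_CONFIRMATION = "P04"
    REPAIR_AND_APOLOGY = "P05"
    STATE_REORIENTATION = "P06"

@dataclass(frozen=True)
class Contract:
    moment: Moment
    must: tuple[str, ...]      # behaviors the system is required to exhibit
    must_not: tuple[str, ...]  # behaviors the system is forbidden from exhibiting

# One entry shown for brevity; the obligations are paraphrased from the table.
CONTRACTS = {
    Moment.CONSENT_CONFIRMATION: Contract(
        moment=Moment.CONSENT_CONFIRMATION,
        must=("restate scope", "verify permission"),
        must_not=("proceed on implied consent",),
    ),
}
```

Making the must/must-not split explicit is what turns guidance into something a test suite or runtime monitor can check.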
Why this matters in production
- Guidance can improve behavior, but it can also be bypassed by product context or prompt drift.
- Techniques are optional unless promoted into explicit contracts with must/must-not conditions.
- Without interaction-level contract logging, teams cannot audit whether trust-preserving behavior actually happened.
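The auditability point above implies some form of structured contract logging. A minimal sketch, assuming an append-only JSON-lines log; the schema and function name are assumptions for illustration, not a feature of either platform:

```python
import json
import time

def log_contract_event(moment_id: str, obligation: str,
                       satisfied: bool, detail: str = "") -> str:
    """Serialize one record of whether a trust-moment obligation was met.

    In production this line would be appended to durable storage so that
    auditors can verify trust-preserving behavior actually happened.
    """
    event = {
        "ts": time.time(),
        "moment": moment_id,       # e.g. "P04"
        "obligation": obligation,  # e.g. "verify permission"
        "satisfied": satisfied,
        "detail": detail,
    }
    return json.dumps(event)
```

Example: `log_contract_event("P04", "verify permission", True, "user confirmed scope")` yields one parseable audit line per obligation check.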
Comparative scorecard
| Capability | Platform docs | Emote |
|---|---|---|
| Persona/tone/scope control | Well documented | Out of scope (platform-managed) |
| Output format consistency | Well documented | Out of scope (platform-managed) |
| Pause before acting | Guidance-level only | Binding via P02 |
| Consent before consequential actions | Not contractually specified | Binding via P04 |
| Repair behavior after errors | Not contractually specified | Binding via P05 |
| State reorientation after disruption | Not contractually specified | Binding via P06 |
| Auditable behavioral contracts | Not specified | Pattern + token obligations |
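To make "binding" concrete, here is a minimal sketch of a P04-style consent gate: a consequential action is blocked until the user confirms the restated scope. The function name, signature, and the idea of passing in the product's prompt mechanism as a callable are all assumptions, not a documented pattern from either platform:

```python
from typing import Callable

def consent_gate(action_scope: str, confirm: Callable[[str], bool]) -> bool:
    """Restate the scope of a consequential action and require explicit
    permission before proceeding. `confirm` stands in for whatever prompt
    mechanism the product uses (CLI prompt, UI dialog, etc.)."""
    restated = f"About to: {action_scope}. Proceed?"
    # Returning False means the caller must not perform the action.
    return bool(confirm(restated))
```

The key design choice is that the gate is a hard control-flow checkpoint: the action cannot run unless the gate returns `True`, which is what distinguishes a contract from guidance a prompt can drift past.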
Primary sources reviewed
- Anthropic prompt engineering docs and guardrail references (docs.anthropic.com)
- OpenAI Model Spec, Dec 18, 2025 (model-spec.openai.com)
- OpenAI API reference and prompt engineering guides (platform.openai.com/docs)