Cover for Abuse / Misuse Threat Model | Product Management Framework | Altareen

Inputs Required

Intended use and non-goals; user roles and permissions; data types handled; system architecture (tools, connectors, logging); known failure modes; attacker assumptions; and relevant policy or regulatory constraints.

Output Artifact

A threat model document covering abuse scenarios, likelihood and impact ratings, mitigations (product, policy, and technical), detection and monitoring signals, and escalation/response steps.
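One way to keep the artifact consistent across scenarios is to treat each abuse scenario as a structured record. The sketch below is a minimal illustration; the field names and example values are assumptions, not part of the framework itself:

```python
from dataclasses import dataclass, field

# Illustrative sketch of one threat-model entry; field names are assumptions.
@dataclass
class ThreatEntry:
    scenario: str                 # abuse scenario description
    likelihood: str               # e.g. "low" / "medium" / "high"
    impact: str                   # e.g. "low" / "medium" / "high"
    mitigations: dict = field(default_factory=dict)        # product / policy / technical
    detection_signals: list = field(default_factory=list)  # monitoring hooks
    escalation_steps: list = field(default_factory=list)   # response playbook

# Hypothetical example entry for a prompt-injection scenario.
entry = ThreatEntry(
    scenario="Prompt injection via untrusted web content in a connector",
    likelihood="high",
    impact="high",
    mitigations={
        "product": "restrict tool calls to an allow-list",
        "policy": "require human review for sensitive actions",
        "technical": "sanitize and sandbox retrieved content",
    },
    detection_signals=["spike in blocked tool calls", "unusual outbound data volume"],
    escalation_steps=["page on-call security", "disable connector", "notify affected users"],
)
```

A uniform record like this makes it easy to sort scenarios by likelihood/impact and to verify that every scenario has at least one mitigation and one detection signal before sign-off.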

When to use this

When your AI feature could be exploited (prompt injection, data exfiltration, jailbreaks, harmful content, fraud), especially if it connects to tools, sensitive data, or enterprise systems.
