Definition
An attack model is a formal description of the assumed capabilities, resources, goals, and methods of a potential adversary in the context of security analysis. It delineates the constraints under which an attacker operates, thereby providing a framework for assessing the resilience of systems, protocols, or algorithms against specific threats.
Overview
Attack models are employed across multiple domains, including cryptography, network security, software engineering, and adversarial machine learning. By specifying what an attacker can observe, modify, or inject, and what objectives they pursue (e.g., confidentiality breach, integrity violation, denial of service), analysts can systematically evaluate security guarantees, design countermeasures, and prove security properties. Commonly, an attack model is paired with a corresponding defense model to illustrate the interaction between offensive and defensive strategies.
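As a concrete illustration, the sketch below (written for this article; all class, function, and variable names are hypothetical rather than drawn from any library) expresses a simple chosen‑plaintext attack model as a distinguishing game. The model grants the adversary oracle access to encryption under an unknown key, assumes knowledge of the algorithm but not the key, bounds interaction to online oracle queries, and sets the goal of telling apart encryptions of two chosen messages. Because the toy cipher is deterministic, the adversary wins essentially always, showing how an attack model turns an informal concern into a checkable claim.

```python
import secrets

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy deterministic cipher; deliberately weak so the game is winnable."""
    return bytes([p ^ key[i % len(key)] for i, p in enumerate(plaintext)])

def indistinguishability_game(adversary, n_trials: int = 1000) -> float:
    """Run a chosen-plaintext distinguishing experiment; return the win rate."""
    wins = 0
    for _ in range(n_trials):
        key = secrets.token_bytes(16)
        oracle = lambda m: xor_encrypt(key, m)      # capability: encryption oracle
        m0, m1 = adversary.choose_messages(oracle)  # adaptivity: queries before the challenge
        b = secrets.randbits(1)                     # challenger's hidden bit
        challenge = oracle(m0 if b == 0 else m1)
        wins += int(adversary.guess(challenge, oracle) == b)
    return wins / n_trials

class DistinguishingAdversary:
    """Exploits determinism: the same plaintext always encrypts identically."""
    def choose_messages(self, oracle):
        # Equal-length messages, so length alone reveals nothing.
        self.m0, self.m1 = b"attack at dawn!", b"retreat at dusk"
        return self.m0, self.m1

    def guess(self, challenge, oracle) -> int:
        return 0 if challenge == oracle(self.m0) else 1

print(indistinguishability_game(DistinguishingAdversary()))  # ~1.0: the toy cipher fails this model
```

Changing the attack model (for example, removing oracle access or making encryption randomized) changes which adversaries are admissible and therefore which security claims can be made about the same scheme.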
Etymology / Origin
The term combines the generic English word “attack,” referring to hostile action, with “model,” denoting an abstract representation used for analysis. Its usage emerged in the late 20th century alongside the development of formal methods in cryptography, notably during the 1970s and 1980s, when researchers began to formally categorize adversarial capabilities (e.g., “chosen‑plaintext attack,” “adaptive chosen‑ciphertext attack”). The phrase gained broader prominence with the rise of threat‑modeling methodologies such as STRIDE (1999) and the formalization of adversarial settings in machine‑learning research in the 2010s.
Characteristics
| Characteristic | Description |
|---|---|
| Adversary Capabilities | Defines what the attacker can do (e.g., read/write data, intercept communications, corrupt nodes, query a model). |
| Knowledge Assumptions | Specifies the information available to the attacker (e.g., algorithm details, public keys, training data). |
| Resources | Limits on computational power, time, monetary cost, or physical access. |
| Goals | Objectives such as data exfiltration, system disruption, privacy violation, or model misclassification. |
| Interaction Model | Describes how the attacker interacts with the target (e.g., offline vs. online, passive vs. active). |
| Adaptivity | Whether the attacker can adapt strategies based on observed outcomes (e.g., adaptive chosen‑ciphertext attacks). |
| Formalization | Often expressed mathematically or through game‑theoretic frameworks to enable rigorous proofs. |
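The sketch below shows one hypothetical way to record these characteristics as a structured object during a security review; the class and field names are illustrative and do not correspond to any established threat‑modeling tool.

```python
from dataclasses import dataclass
from enum import Enum

class Interaction(Enum):
    PASSIVE_OFFLINE = "passive/offline"
    PASSIVE_ONLINE = "passive/online"
    ACTIVE_ONLINE = "active/online"

@dataclass
class AttackModel:
    capabilities: list[str]     # what the attacker can do
    knowledge: list[str]        # information assumed to be available
    resources: dict[str, str]   # bounds on compute, time, cost, or access
    goals: list[str]            # objectives pursued
    interaction: Interaction    # how the attacker touches the target
    adaptive: bool = False      # may strategies change based on observed outcomes?

# Example: an adaptive chosen-ciphertext attacker against an encryption service.
cca2 = AttackModel(
    capabilities=["submit arbitrary ciphertexts to a decryption oracle"],
    knowledge=["algorithm specification", "public key"],
    resources={"compute": "probabilistic polynomial time"},
    goals=["distinguish encryptions of chosen plaintexts"],
    interaction=Interaction.ACTIVE_ONLINE,
    adaptive=True,
)
```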
Related Topics
- Threat model – A broader conceptualization that includes both attacker and defender perspectives, often encompassing environmental and operational considerations.
- Cryptographic attack types – Specific instances of attack models such as chosen‑plaintext attack, known‑plaintext attack, and side‑channel attack.
- Adversarial machine learning – The study of attack models targeting machine‑learning systems, including evasion attacks and poisoning attacks.
- Security proof – Formal verification that a system is secure under a defined attack model (see the illustration following this list).
- Risk assessment – The process of evaluating the likelihood and impact of attacks described by an attack model.
- Game theory in security – Analytical tools used to model strategic interactions between attackers and defenders.
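To illustrate the security‑proof entry above (the notation here is assumed for this example rather than taken from a particular source), an indistinguishability‑style proof typically shows that every adversary $\mathcal{A}$ admitted by the attack model, subject to its resource bounds, has only negligible advantage in the security parameter $\lambda$:

$$\mathrm{Adv}_{\mathcal{A}}(\lambda) \;=\; \left|\Pr[b' = b] - \tfrac{1}{2}\right| \;\le\; \mathrm{negl}(\lambda),$$

where $b$ is the challenger's hidden bit, $b'$ is the adversary's guess, and $\mathrm{negl}$ denotes a function that vanishes faster than any inverse polynomial.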