Abuse Case Model
Definition
An Abuse Case Model is built as an extension of a Use Case model: for each legitimate use case, a corresponding abuse case describes behaviour the system owner does not want, so that security requirements can be derived to prevent it.
Example
Identifying abuse before designing architecture: Embedding Game-Based Threat Modelling into Agile Delivery at a Major Latin American Bank
CyberSec Games March 3, 2026
This case study is based on the account of Max Alejandro Gómez-Sánchez Vergaray, AppSec & DevSecOps Consultant, who designed and led an application security programme during the digital transformation of one of the largest banks in Peru. Over the course of the programme, more than 3,000 people were trained in secure software development using OWASP Cornucopia.
Why Security Became Urgent
When I began leading the application security program, teams were delivering software at scale and the bank was undergoing digital transformation. Agile methods were well established, but security, while it existed on paper, was not yet embedded in everyday engineering practice and lacked sponsorship.
Then the bank experienced a cyber attack.
Delivery teams paused feature work to resolve security problems and executive attention followed. External consultants were engaged at significant cost, and a threat modelling approach was introduced based on Trike - a semi-automated formal model focused on Denial of Service and Elevation of Privilege threats.
While the methodology was sound, the problem was sustainability. Consultancy-led threat modelling can assess systems, but it does not automatically create internal engineering capability. My goal was to ensure that security thinking became part of how teams built software, not something triggered only by incident or audit.
The Structural Problem in Agile
In a traditional SDLC, security activities are distributed across phases. Requirements analysis, design, development, and testing are each linked to security activities and checkpoints.
Agile changes that dynamic. Work begins with high-level user story exploration. Stories are refined and prioritised in sprint planning. Architecture often evolves in parallel. By the time a traditional threat model is created around an architectural diagram, many functional decisions have already been made.
In practice, identifying security requirements during architectural threat modelling is often too late. The backlog is already shaped. Trade-offs have been made. Time pressure is real.
I realized that if we wanted security to influence design meaningfully, it had to be integrated before architecture crystallized.
The Two Concepts That Changed the Model
To make this work, I relied on two foundational concepts: abuse case modelling and Secure Scrum.
Abuse case modelling takes each legitimate use case and models a corresponding negative scenario: behavior undesired by the system owner. The goal is not simply to enumerate threats, but to identify the security requirements that prevent those undesirable outcomes.
For example, if a use case involves user authentication, an abuse case might explore credential stuffing or brute force attempts. The discussion then naturally leads to security requirements such as rate limiting, password strength policies, or multi-factor enforcement.
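The jump from abuse case to requirement can be made concrete. As a minimal sketch, under the assumption that the rate-limiting requirement above is implemented as a fixed per-user window (this is illustrative, not the bank's actual control), it might look like:

```python
import time


class FixedWindowRateLimiter:
    """Throttle repeated login attempts, as an abuse-case-driven
    requirement might mandate (illustrative only)."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self._windows = {}  # username -> (window_start, attempt_count)

    def allow(self, username, now=None):
        """Return True if another attempt is allowed for this user."""
        now = time.monotonic() if now is None else now
        start, count = self._windows.get(username, (now, 0))
        if now - start >= self.window_seconds:
            start, count = now, 0  # window expired: reset the counter
        if count >= self.max_attempts:
            self._windows[username] = (start, count)
            return False  # brute-force / credential-stuffing throttled
        self._windows[username] = (start, count + 1)
        return True
```

The point of the sketch is traceability: the abuse case (brute force) names the requirement (rate limiting), and the requirement is testable before any architecture exists.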
Secure Scrum builds on this by integrating security requirements directly into user stories. Instead of treating security as an external checklist, security requirements become part of system requirements from the moment user stories are identified.
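In data terms, "security requirements as part of the story" can be sketched as a story record that carries them alongside functional acceptance criteria; the field names and the readiness rule below are illustrative assumptions, not the bank's backlog schema:

```python
from dataclasses import dataclass, field


@dataclass
class UserStory:
    """A user story that carries its security requirements with it,
    in the spirit of Secure Scrum (illustrative schema)."""
    story_id: str
    title: str
    acceptance_criteria: list = field(default_factory=list)
    security_requirements: list = field(default_factory=list)

    def ready_for_sprint(self):
        # Hypothetical rule: a story is 'ready' only if its
        # security requirements were considered, not just its features.
        return bool(self.acceptance_criteria) and bool(self.security_requirements)


story = UserStory(
    story_id="STORY-101",
    title="As a customer, I can log in to online banking",
    acceptance_criteria=["Valid credentials grant access"],
    security_requirements=["Rate-limit failed logins",
                           "Enforce MFA for new devices"],
)
```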
Why I Chose OWASP Cornucopia
I had experience with OWASP’s research and its card-based framework, OWASP Cornucopia. Traditionally, Cornucopia is played against an architectural diagram to identify threats.
I decided to reposition it.
The bank operated a SAFe-aligned model with quarterly planning events, so an overall vision, work items, and dependencies were all identified before the quarter’s sprints began. These sessions brought cross-functional teams together to define plans and break them down into epics and stories.
This was the ideal point to insert OWASP Cornucopia.
We shuffled the cards and dealt them to participants, ensuring a mix of suits. A facilitator guided the session, clarifying complex cards and moderating discussions. Epics were placed visibly on the wall. Each participant examined their cards and argued how specific threats or misuse scenarios related to particular epics or user stories.
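The "mix of suits" dealing step can be sketched as a greedy deal that prefers suits a participant does not yet hold. The suit names follow the standard Cornucopia website-app deck; the card count per suit and the dealing rule are simplified assumptions for illustration:

```python
import random

SUITS = [
    "Data Validation & Encoding", "Authentication", "Session Management",
    "Authorization", "Cryptography", "Cornucopia",
]


def deal(participants, cards_per_player=3, rng=None):
    """Deal each participant cards drawn from different suits, so every
    hand mixes threat categories (illustrative mechanics)."""
    rng = rng or random.Random()
    # Simplified deck: 10 numbered cards per suit.
    deck = [(suit, n) for suit in SUITS for n in range(1, 11)]
    rng.shuffle(deck)
    hands = {p: [] for p in participants}
    for p in participants:
        while len(hands[p]) < cards_per_player and deck:
            held = {suit for suit, _ in hands[p]}
            # Prefer the first card whose suit the player does not hold yet.
            idx = next((i for i, (s, _) in enumerate(deck) if s not in held), 0)
            hands[p].append(deck.pop(idx))
    return hands
```

The mechanics matter less than the outcome: each hand spans several threat categories, which is what forces the cross-cutting conversations described below.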
The real power was not found in the mechanics of the game, but in the conversations it triggered. Engineers and Product Owners collectively explored what could go wrong and reasoned about attacker behavior in the context of real features.
From those discussions, we derived security requirements and attached them directly to backlog items.
Moving the “Magic” Earlier
In many Agile environments, threat modelling occurs after architecture diagrams are drafted. By then, the cost of change is higher.
By integrating Cornucopia into quarterly planning, we moved the magic earlier. We “spread left” as far as we could, to the very first artifacts in our agile workflow.
As a result, security requirements were identified well before sprint work began. When architecture was later discussed, it already reflected security-conscious decisions. Subsequent structured threat modelling sessions were more focused and more effective because they operated on stronger inputs.
We also aligned our requirements with recognized standards such as the OWASP Application Security Verification Standard. This ensured that our approach was not informal or purely cultural. It was grounded in established verification practices.
To support governance, we introduced a lightweight traceability matrix linking each user story to its associated security requirements and related weaknesses. This allowed us to manage defects systematically if requirements were not properly implemented, while keeping overhead manageable.
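Conceptually, such a matrix is just a mapping from backlog item to requirements and the weakness class each one mitigates. A toy sketch (the story IDs are hypothetical; the CWE IDs are real identifiers chosen for illustration, not taken from the bank's data):

```python
# Each entry links a backlog item to its security requirements and
# the weakness each requirement mitigates.
traceability = {
    "STORY-101": [
        {"requirement": "Rate-limit failed logins",
         "cwe": "CWE-307", "implemented": True},
        {"requirement": "Enforce MFA for new devices",
         "cwe": "CWE-308", "implemented": False},
    ],
    "STORY-102": [
        {"requirement": "Parameterise all SQL queries",
         "cwe": "CWE-89", "implemented": True},
    ],
}


def open_defects(matrix):
    """Unimplemented security requirements surface as trackable defects."""
    return [
        (story, item["requirement"], item["cwe"])
        for story, items in matrix.items()
        for item in items
        if not item["implemented"]
    ]
```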
Scaling Capability
What began with a single team expanded steadily. Engineers appreciated that the method fit their workflow rather than disrupting it. Over time, I trained more than 3,000 people across the organization in secure software development practices using Cornucopia as an entry point.
Eventually, I implemented a train-the-trainer model. Security could not depend on one facilitator. Engineering leads were trained to run their own sessions. This distributed ownership was critical to sustainability.
Annual retraining ensured that security culture did not fade. Teams developed shared vocabulary around misuse, abuse cases, and preventive controls. Security thinking became habitual rather than exceptional.
The Real Value
The greatest value was not the number of threats identified. It was the change in behavior.
Security requirements became visible within user stories. Product Owners understood the impact of prioritization decisions. Architecture discussions included security trade-offs as standard practice. Late-stage rework decreased because risks surfaced earlier.
In recent years, automated and AI-driven threat modelling tools have become more common. Automation is valuable, but tools do not replace shared reasoning. When engineers collectively analyze how their systems could be abused, they develop judgement. That judgement influences countless micro-decisions that no tool can fully predict.
I often share a piece of advice I once received: it is much better to have a good threat model that’s finished than a perfect threat model that isn’t finished.
In Agile environments, pragmatism matters.
Vulnerabilities
Solution
- Start with a use case. For each one, model the corresponding abuse case; the security requirements that prevent the abuse become part of the backlog item.
References
- See the wiki Threat Model for more details on that subject.