A Guide to Threat Modelling for Developers

Secure software design, little and often

This article provides clear and simple steps to help teams that want to adopt threat modelling. Threat modelling is a risk-based approach to designing secure systems. It is based on identifying threats in order to develop mitigations to them. With cyber security risk increasing and enterprises becoming more aware of their liabilities, software development teams need effective ways to build security into software. Unfortunately, they often struggle to adopt threat modelling. Many methodologies require complicated, exhaustive upfront analysis which does not match how modern software teams work. Therefore, rather than stopping everything to create the perfect threat model, I encourage teams to start simple and grow from there.

28 May 2020



How to simplify a complex problem

What are the security requirements for the software you are building? Finding a good answer is surprisingly complex. You want to prevent cyber losses over the lifetime of the system. But what are the concrete stories, acceptance criteria and technical scope that deliver that outcome? That is the puzzle addressed in this guide.

Somewhat unhelpfully, cyber specialists will often answer such questions with a question of their own: 'What is your threat model?' This response is non-specific and uncertain, much like turning around and saying 'it depends'. Worse, 'threat model' is obscure technical jargon to most people, adding unnecessary mystique. And if you research the topic of threat modelling, the information can be overwhelming and hard to act on. There is no agreed standard for a 'threat model' or anything like that.

So what are threat models and what is threat modelling? The core of the concept is very simple. It is about understanding causes in relation to cyber security losses. It is about using that understanding to protect your system in a risk-based way. It means starting from the potential threats in your particular case, rather than just following a checklist.

Coming to understand the threat model for your system is not simple. There are an unlimited number of threats you can imagine to any system, and many of them could be likely. The reality of threats is that many causes combine. Cyber threats chain in unexpected, unpredictable and even chaotic ways. Factors to do with culture, process and technology all contribute. This complexity and uncertainty is at the root of the cyber security problem. This is why security requirements are so hard for software development teams to agree upon.

The stories behind real breaches show how complex threats and causality can be; often the details are astounding. The NotPetya story is a great example. Nation state malware was traded by a group called the "ShadowBrokers" and then weaponised. The eventual impact was major losses to organisations almost at random. Maersk, the shipping firm, had to halt the progress of shipping. The confectioner Cadbury had to stop making chocolate. What were their respective threat models? What development team could imagine such a complex chain of causality and collateral damage? How long would it take your team to model this, and every other dangerous possibility?

Is threat modelling too complex to be of value? Should developers just follow a checklist, 'cross their fingers' and hope they get lucky? Skepticism can be healthy, but learning threat modelling is a key skill for developers, I believe. What we need is the right approach, and tools to tame the complexity. This guide has been written in that spirit, and begins with three ideas which make identifying good, risk-based security requirements much simpler.

Start from the technology

The first recommendation is to focus primarily on technical rather than broad threats, at least at first.

  • Broad threats and threat sources include hacker groups, bad actors, disillusioned employees, human error or epidemics of new worm-like malware. These kinds of causes emerge from the world at large and are extremely varied, uncertain and unpredictable. They are relative to the value of your system's data and services to your organisation and to others. These are the kinds of dramatic risks it is easy to talk about with non-technical folks.
  • Technical threats and vulnerabilities are much more granular, such as particular weaknesses in software or missing security controls such as encryption or authorisation. These kinds of threats emerge from the structure and data-flow inherent in the system your team is building. Usually a bunch of technical threats combine together to allow a broad threat to impact your system.

By following this guide you will mainly focus on finding technical threats. This helps simplify the elaboration process, because the structure and data-flow of your system is something about which you can be certain. But it also means you can start from your existing strengths as a software developer, understanding technical stuff. This is a much stronger ground to start on than high-level risk analysis of threat sources, about which you may know little.

Don't forget about the bigger picture entirely though. A pragmatic and risk-based understanding of what broad threats are possible helps prioritise one technical threat over another. For example simple human error is usually much more likely than a nation state attack (see sidebar). That thinking can go into selecting what security scope to start examining first. When you focus on identifying technical threats first, it is then much easier to relate them back to broader threats that justify fixes and additional controls.

Take a collaborative approach

The second recommendation is to adopt a collaborative, team based approach. Identifying security requirements is not easy, and a diversity of perspectives will lead to better decision making. There will always be another vulnerability or technical threat to find, so bringing a wide variety of perspectives to the exercise makes brainstorming more robust. It also increases the likelihood you will identify the most important threats. Threat modelling in a group helps address risk holistically and helps the whole team to learn how to think and talk effectively about security.

Getting product owners involved is a great opportunity from a risk management perspective. Product owners have insights into user behaviour and business context that software developers simply lack. They should know about the value of particular services to the business and the impact if that data was exposed or lost. When cyber security losses occur they are business losses. If the worst does happen then the causes will likely be particular to your organisation and the technology you are using. The cyber security problem is not just about ticking technical boxes, it's about making good investment decisions to protect the business.

Threat modelling 'little and often'

The third recommendation is to start threat modelling 'little and often'. Each threat modelling session with the team should be short and focussed enough to be quickly digested into something that can be delivered. Start by analysing the thinnest slice of your system possible; just what you are working on right now. Rather than trying to analyse your entire system upfront, build your team's muscle memory with threat modelling a little bit at a time.

Practices which require a completely specified software design do not match how agile teams work. There is no reason why threat modelling needs to be an exhaustive upfront analysis. Too often teams are overwhelmed by comprehensive and highly structured approaches to threat modelling[1]. I have seen teams try such approaches and run out of time and patience before any real threats were identified, let alone fixes delivered!

Rather than creating and maintaining an exhaustive 'threat model' document, do threat modelling 'little and often'. When you work this way, each threat modelling session is tiny and takes little effort, yet the cumulative effect of doing them is huge. When you know you'll be doing this again every iteration, there's less incentive to try to do everything at once and more incentive to prioritise the most important work right now.

Preparing to start

This section of the guide starts to make things more detailed and concrete so you can plan to start threat modelling with your team.

The three key questions

Understanding the simple structure of a threat modelling session and doing a little bit of planning goes a long way towards getting a great result.

The first thing to introduce is a simple and flexible structure for threat modelling [2]. This is based on three key questions. It helps to commit this structure to memory. You can use the three question structure as a guide whenever you need to assess threats. Like riding a bike, once you have mastered the basics you will be able to apply and grow those skills.

Activity            | Question                   | Outcome
--------------------|----------------------------|-----------------------------------
Explain and explore | What are you building?     | A technical diagram
Brainstorm threats  | What can go wrong?         | A list of technical threats
Prioritise and fix  | What are you going to do?  | Prioritised fixes added to backlog

This guide follows the three question structure. In each threat modelling session, you should spend about a third of your time answering each question. Then you will come out with a useful result. The rest of the guide will break this basic structure into more detailed steps, pointers and explanations to help you run successful threat modelling sessions.

Practical considerations

There are some things you need to get straight before you run a threat modelling session. The following pointers should help you plan.

Who should be involved?

Try to involve the whole delivery team in each session, which is to say both technical and non-technical roles. This brings more perspectives and ideas to the table, but also builds shared understanding. Excluding product owners, business analysts and delivery managers can mean the work to fix security flaws does not get done, as the value will not be widely understood.

You definitely do not need a security specialist to start threat modelling and discover valuable security scope. However, a threat modelling session is the perfect opportunity to collaborate with specialists, security architects or your risk management team. It will all depend on what the roles and expertise are like in your organisation.

Cadence and duration

To start, I recommend a session length of 90 minutes. You need to give the team the time and space to learn the structure and security concepts involved. Things should get much faster once you get going, though. The most impactful threat modelling session I have ever participated in took less than 15 minutes. Short and snappy sessions are possible once everyone in the team has built 'muscle memory' with the practice.

I am often asked how frequent threat modelling sessions should be. I do not think there is any right answer; it depends on your team. I think of threat modelling just like any other team design session. I would not be so rigid as to say it has to be every single week. However, I have worked with many teams with a risk profile that would justify threat modelling every sprint. At the other extreme, if it has been a few sprints without any threat modelling, the practice is clearly not continuous enough to be considered mature.

Running sessions face-to-face vs. running remotely

A face-to-face threat modelling session could happen in a meeting room, or more informally in the team's normal work area, if you have space. A typical session involves drawing a diagram to explain and explore the scope, brainstorming threats, then prioritising fixes for the backlog. However, a face-to-face session is not always possible.

When you run a session remotely you just need to plan a little differently so everyone can participate virtually. You will need video conferencing and collaboration tools. Agree on and set up these tools ahead of time. Teams at ThoughtWorks have had success with a variety of tools, including Mural, Miro, Google Jamboard and Google Docs.

Get accustomed to your tools ahead of the session and get participants to test they have access. Whichever tool you choose, ensure you have security approval from your organisation to use the tool. Threat modelling outputs represent sensitive information for a number of reasons and must be protected.

Here are some more pointers to bear in mind when working remotely:

  • It can help to create diagrams asynchronously before the exercise, because drawing diagrams on virtual boards can consume a good deal of time.
  • Pay even more attention to creating a common understanding of the concepts and symbols you are using to illustrate the system. Explain diagram symbols, data-flow arrows and the colours of digital stickies.
  • Be more intentional about ensuring everyone is engaged in the exercise. Perhaps use some security-related brain teasers as an ice-breaker. Refer to broader guides on remote facilitation.
  • If you have a large group of people, it may make sense to split into smaller groups and then consolidate the output. A couple of small sessions are better and more sustainable than one big session.
  • You will need more breaks than in a face-to-face session. Remote work is tiring.

Regardless of whether your session is remote or face-to-face, you should aim to finish on time, and with some concrete outcomes! This needs discipline: keeping to timings could be a role for a delivery manager or someone experienced in making workshop sessions succeed.

Mona Fenzl and Sarah Schmid from ThoughtWorks Germany have had some success using a collaboration tool called Mural. They used it to create a threat modelling template to help other teams get started with that tool.

Scoping the session

Deciding the right focus and level of detail for your session is called 'identifying the scope'. Make sure you have decided this ahead of getting people together to perform the activity. Be guided by what has most value right now. Perhaps it's simply the user stories you are working on this iteration?

Do not try and bite off too much scope at once! If you try threat modelling the entire system at once, either you will make no findings in the time available or you will overrun dramatically and there will be no appetite or budget to do it again. It is much better to timebox threat modelling into manageable chunks, performing the activity 'little and often'.

Here are some examples of scopes which have worked well:

  • Scope in the current iteration.
  • An upcoming security sensitive feature, such as a new user registration flow.
  • The continuous delivery pipeline and delivery infrastructure.
  • A particular microservice and its collaborating services.
  • A high level overview of a system to identify security tech debt.

Whatever scope your team chooses, make sure it is not too big for you to cover in the time available.

A worked example: scoping

The rest of the guide uses a real feature to show the concrete steps involved in threat modelling. There is a development team at a retail organisation which is building a platform to sell groceries for home delivery. Here is the epic they have in the upcoming sprint:

As a customer, I need a page where I can see my customer details, So that I can confirm they are correct

If you have ever used an online store, you will be able to imagine a page which is used to update address details and perhaps view a loyalty card balance.

From experience, a feature of this size is a pretty reasonable scope for a threat modelling session.

Explain and explore

Do not be tempted to pull a stale image off the Wiki. Talking through how the system is right now, or will be soon, builds shared understanding.

"What are you building?"

Diagrams are the perfect tool to explain and explore how software is structured; they are designed to communicate. This section of the guide provides detailed pointers on the diagram which will serve as the foundation of your threat modelling session.

Draw a 'lo-fi' technical diagram

Drawing a picture gets everyone on the same page. Before you can start thinking of threats, risk and mitigations you need a shared technical understanding of the software or infrastructure you are dealing with.

i. Show relevant components.

Luckily, developers will be comfortable drawing diagrams to explore software designs. Tap into these established skills by drawing a simple technical diagram of your agreed scope on the whiteboard or flipchart.

Nothing needs to be sophisticated or perfect - just draw boxes for the main components and label them.

ii. Show users on the diagram.

Ultimately, systems are designed to allow people to do things. Users matter as they are the ones authorised to do things in the system. Represent and label them on your diagram.

  • Some users can be more trusted than others. For example end-users usually have less freedom to perform operations in a system than administrators. If multiple groups of users are relevant to the scope of the session, represent and label each group.
  • Not all systems are user facing. If your system is a backend system (perhaps a downstream microservice which only accepts requests from other systems) then represent the collaborating systems that are authorised to interact with it.

Trust is essentially about who or what should have the freedom to do a particular thing. Make sure you illustrate those 'actors', as they are important for security.

A worked example: Lo-fi diagram

Returning to the real feature introduced above, let's see how the team chose to illustrate the new 'customer details page' functionality. They drew the following diagram.

It illustrates an understanding that the system:

  • is based on a microservice architecture
  • already has an identity provider in place which allows the customer to authenticate.
  • has a backend service for customer details (which is written in Java)
  • has a Backend for Frontend (BFF) service and frontend UI (which are written in JavaScript and React)
  • has users who are customers, and want to edit their profile

No detailed knowledge of these technologies is required to follow this guide, but these facts illustrate the level of detail you should discuss while drawing the diagram.

Show how data flows

It is important to add details to show how data flows around your system.

iii. Draw arrows to show data-flow

Attackers often use the same pathways to pivot around systems that legitimate users do. The difference is they abuse them or use them in ways that nobody thought of checking. So it is important to show the trusted paths around your system, to help see where real threats could happen.

Show data-flows with arrows, starting from the users and collaborating systems. Nowadays, most data-flows are request-response and therefore bi-directional. But I recommend that you draw directional arrows from where requests originate. From experience this makes it much easier to brainstorm threats later on.

iv. Label networks and show boundaries.

Threats are more likely to originate from certain networks. Networks are configured to restrict the freedom of traffic to flow from one place to another. That restrictiveness (or openness) will help determine which threats are possible or likely. The open Internet is more dangerous than a well protected backend network (even if your backend network is a VPC hosted by your cloud provider).

In another colour, draw dotted lines to show the boundaries between different networks in your system. These are often called 'authorisation boundaries'. Sometimes it is worth illustrating gateway devices such as load balancers or firewalls. Other times those devices are not so important to your scope in that session, and that is okay too. If you are unsure, it might make sense to invite someone with DevOps or infrastructure knowledge to your next session.

A worked example: Show Data-flow

In our case arrows are added to illustrate data-flows from the customer who wants to see their details, to the UI, then on to the BFF service and onto the customer details resource server to obtain or update the data. There is also a data-flow to the Identity server which issues a token that authorises the session.

There is also an authorisation boundary because the UI is on the Internet, whereas the other components are within the organisation's cloud hosting.

v. Show your assets.

It is helpful to quickly indicate on the diagram where data or services with business value sit. For example, this may be where you store personal data. If your system processes payments, perhaps it's the service which does that. The assets in your system are information that needs to be kept confidential or intact, but also services which need to be kept available.

Do not spend too long on this step. The purpose is just to provide a little bit of context to help with brainstorming and prioritisation. If you spend more than 5 minutes on this, then it's probably too long.

A worked example: Show your assets

The team identified the personally identifiable information (PII) stored by the Customer Service and the credential store in the Identity provider as the assets with most business value.

Brainstorm threats

"What can go wrong?"

For the second part of your session, simply brainstorm threats to the system you have drawn. This section provides detailed steps and pointers to help you come up with a good range of relevant security threats.

Use STRIDE to help

If your team is beginning with threat modelling, STRIDE is perfect. STRIDE is a very light framework that gives you a head-start brainstorming security threats. It is a mnemonic, where each letter refers to a security concept. The point is not about categorising what you find, but helping you brainstorm effectively.

Invest some time understanding and discussing each of the six security concepts with the team before you start. To help learn, ThoughtWorks has created a set of STRIDE cue cards, one per concept, with lists of examples on the reverse side. The prompt from each card is summarised immediately below.

  • Spoofing - "On the Internet, nobody knows you are a dog"
  • Tampering - "Can the data overflow and become instructions?"
  • Repudiation - "If there's no evidence, it's easy to deny it happened"
  • Information disclosure - "Who else could be looking?"
  • Denial of service - "Could the service be taken down?"
  • Elevation of privilege - "How easy is it to circumvent protections?"

Evil brainstorming

Coming up with ways to attack, break or frustrate a particular bit of software is threat modelling at its essence. It can also be great fun!

i. Brainstorm threats!

Everyone in the group joins in with suggesting threats. Finding the widest diversity of threats is a good thing; we are interested in possibilities rather than the 'happy path'. A few off-the-wall ideas can help too. Encourage that diversity by making sure everyone is involved and no single voice is allowed to dominate. Make sure everyone has access to pens and stickies, and that everyone suggests at least one potential threat, regardless of background or experience.

Use creative, divergent thinking and leverage the experience and various perspectives present in the group. Later you will prioritise those threats which are most risky or important, so there is no danger in having too many potential threats.

ii. Follow the data-flow lines!

If you are short of inspiration, follow the data-flow lines on your diagram one by one. How might one of the STRIDE concepts apply for that data-flow? Does that suggest a particular threat which might need to be addressed? Working this way helps identify technical mechanisms and data-flows that attackers might use.

There is a data-flow to any cyber-attack, just like the trusted data-flows you have drawn. Attackers use the same pathways to pivot around a system that have been put in place for trusted users. Cyber security losses happen when there are insufficient constraints at a technical level to prevent bad things from happening.
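As a toy illustration of this step, each data-flow on the diagram can be paired with each STRIDE concept to generate brainstorming prompts. This is a sketch only; the function and flow names are invented for the example, not part of any tool.

```typescript
// The six STRIDE concepts used as brainstorming lenses.
const STRIDE = [
  "Spoofing", "Tampering", "Repudiation",
  "Information disclosure", "Denial of service", "Elevation of privilege",
];

// Generate one prompt per (data-flow, STRIDE concept) pair, so the group
// can walk each line on the diagram through each concept in turn.
function prompts(flows: string[]): string[] {
  return flows.flatMap(flow => STRIDE.map(concept => `${concept} on ${flow}?`));
}
```

For the worked example's diagram, `prompts(["Customer -> UI", "UI -> BFF"])` yields twelve questions to work through, which is exactly the 'follow the lines' discipline described above.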

iii. Capture threats on stickies

For each threat, spend a moment capturing it. 'SQL injection from Internet', 'lack of encryption in database' and 'no multi-factor authentication' are good examples. Questions are also good, such as 'do we need to store this data?', 'could there be an authorisation bypass?' and 'who will revoke leavers' accounts?'

You will find it is quite natural to then place these stickies in a particular spot on the diagram, alongside a particular user, component or data-flow. Include just enough detail so you know what the sticky means and move on to brainstorming the next threat.

A worked example: Threats identified

Using the diagram created earlier, the team brainstormed around each data-flow in the system, using STRIDE to help.

As they discovered potential threats in the mechanisms of the software, they wrote them on stickies and annotated the lo-fi diagram.

Here's what they had come up with, at the end of the brainstorming part of the session:

Customer → Identity Service

  • authentication is password based, no two-factor authentication

Customer → UI

  • a DOM based cross-site scripting (XSS) attack

UI → BFF

  • absence or weakness of identity token validation
  • injection attack, such as SQL injection or stored XSS
  • lack of logging of the identity of the caller
  • badly configured TLS transport encryption
  • misconfigured GraphQL introspection enabled in production
  • network layer flood of traffic from a botnet
  • failure to prevent an authenticated user from accessing someone else's details
  • lack of regular patching could lead to remote code execution

BFF → Customer Service

  • absence or weakness of 'server to server' authentication
  • lack of logging of the identity of the caller
  • overly permissive security groups allow the customer service to be accessed from the Internet
  • failure to prevent an authenticated user from accessing someone else's details

Notice that many of the threats occurred where the data-flow crossed the authorisation boundary from the Internet into the system. However threats were identified in the browser-based UI and within the backend network also.

Prioritise and fix

You are now going to build on the diagram the group has created, annotated with threats.

"What are you going to do about it?"

Software teams are incentivised to deliver, and rarely have unlimited bandwidth to go away and address every threat identified. And some of the threats may pose an insignificant risk. You need to filter down and prioritise the few most important actions which you can take away and execute on effectively.

Prioritise threats by risk

i. Share knowledge useful for prioritisation

It can help to get a shared understanding of what we know about the system's risk profile. Spend no more than 5 minutes on this. Roughly, there are two types of knowledge that are helpful when prioritising threats. Product owners or security teams often have excellent insights to share.

  • Business value - what kind of losses put the organisation's objectives into jeopardy? Is it having the customer database stolen? Reputational impact due to lost business?
  • Broader threat - what are the likely root causes of losses due to cyber security issues for this organisation? Are we worried about fraud? Malicious insiders? Particularly capable hackers?

If nobody in the group has any insight into broader risk, then that is OK too. Access to this knowledge is not a pre-requisite. You can prioritise based on technical risk alone and still get a big benefit from threat modelling.

ii. Everyone vote for top riskiest threats!

This should be familiar to anyone who has done 'dot voting' in a retrospective or workshop before. Everyone gets some votes and casts them for the riskiest threats. Perhaps start with three votes each for your first session. But it can depend on how many people are in the group and how many threats you found. The goal is to whittle down to the most valuable threats to start with. Remember that risk is not just about how probable the threat is, but also the scale of potential loss.

Everyone casts their votes by marking dots on the sticky notes, according to their own perception of the risk. Everyone gets a fixed number of votes, and it is fine to vote for the same threat more than once if you think that is right.

Voting in this way will yield good risk decisions for low investment, reflecting the diverse perspectives in the group. People have an intuitive sense of risk and will be able to cast their votes with minimal prompting.

iii. Identify the top riskiest threats

You will need to count up the number of votes for each threat, and mark those with the highest votes somehow. Perhaps circle the riskiest threats with a pen.
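The tallying step is simple enough to sketch in code. This is purely illustrative; the threat names and vote counts are invented, and in practice you would just count dots on the board.

```typescript
// Count up the dot votes and keep the top-n riskiest threats to take
// forward into the 'prioritise and fix' part of the session.
function topThreats(votes: Record<string, number>, n: number = 3): string[] {
  return Object.entries(votes)
    .sort((a, b) => b[1] - a[1]) // most votes first
    .slice(0, n)
    .map(([threat]) => threat);
}
```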

Often I am asked how many threats we should take forward from a session. The first time, three can be a good number. That provides a good balance for the amount of time invested. But use your judgement and experiment. There may be one very risky item, and right now it makes sense just to address that. Equally, four or five threats can emerge from a single session.

iv. Take a photo of the annotated diagram, and record threats

Take a photograph using a mobile phone camera at this stage to capture the output of the brainstorming and prioritisation. If you are working with remote tools you can take a screenshot or an export. It makes a lot of sense to upload the image to your Wiki or store it in a repository somewhere.

Add fixes to your backlog

It is essential that the team's time investment leads to follow-up work. Software teams already have a powerful way to represent and sequence the activity needed to deliver software: the backlog. It is now time to work out what fixes you actually need to reduce risk. Use the team's existing processes and ways of working to support this (see sidebar).

v. Capture security fixes in backlog

For each prioritised threat, define concrete next steps. In the language of security, you can call these 'controls', 'mitigations', 'safeguards' or simply security fixes. These fixes may take a variety of forms. Make the fixes concrete, so that it is clear when a fix is "done" and the risk has therefore been reduced.

  • Acceptance Criteria are the most common type of security fix to come out of threat modelling. These will be added to an existing story to reflect extra scope. For example, there might be a story to perform an action, and the extra acceptance criterion might be an authorisation check. Acceptance criteria should be testable.
  • Stories might crop up to implement a particular control, or get split out of an existing story if it makes sense to the business analyst and the team. For example, integrating a single page app with an identity system might form an 'Authenticate User' story.
  • Timeboxed Spikes are really useful if we are unsure whether we are actually vulnerable (perhaps the backend calls are sanitised automatically?) or unsure of the best solution to the issue, and it is worth investing some developer time to find out.
  • Definition of Done is the set of conditions and acceptance criteria the team must meet in order to consider a feature done. If you identify that all API calls need to be authenticated, authorised and logged, then you should reflect that in your definition of done. This is so that you can test for it consistently before signing stories off.
  • Epics are significant bits of security architecture which are identified as part of threat modelling. Examples might be introducing an identity provider, a security events system, or configuring the network in a particular way. Expect threat modelling to generate epics early on in a project.

vi. Wrap up and close out the session

If you write the fixes down on cards or paper in the session, make sure someone takes responsibility for adding them to your project tracking tool or agile board. Ideally the product owner will be in the threat modelling session. If not, then take an action to talk through the threats with whoever prioritises work, so that the fixes are prioritised appropriately.

The best way to make sure your threat modelling has an impact is to deliver the fixes and then do threat modelling again.

A worked example: Scope in the backlog

When they voted, the team decided that three threats were the most risky and worthy of fixes.

Authorisation bypass direct to API

Although the user has to be logged in to see the page (is authenticated), the team realised there is nothing to stop unauthenticated requests direct to the API. This would have been a pretty major flaw if it had made it into production! The team had not spotted it before the session.

They added the following acceptance criteria to the story so it can be tested explicitly as part of story sign-off.

GIVEN an API request from the single page app to the API

WHEN there is no valid authorisation token for the current user included in the request

THEN the API request is rejected as unauthorised
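As a sketch of how this criterion might look in code (all names here are hypothetical, not taken from the team's actual system), the API handler rejects any request whose token does not validate:

```python
# Hypothetical sketch of the authorisation check behind the acceptance criteria.
# Token validation is stubbed out; a real system would verify a signed token
# (e.g. a JWT) against the identity provider.
VALID_TOKENS = {"valid-token-for-alice"}

def handle_profile_request(auth_token):
    """Return (status, body) for an API request from the single page app."""
    if auth_token not in VALID_TOKENS:
        return 401, "Unauthorised"  # no valid token: reject, per the criteria
    return 200, "profile data"
```

A sign-off test can then assert both branches explicitly: a missing or invalid token yields 401, a valid one yields 200.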

XSS or injection via user input

The user profile feature allows user input for personal details, addresses and delivery preferences. These details are interpreted by various legacy backend systems which may be vulnerable to SQL and XML injection attacks.

The team knew that they would be implementing a lot of features in the coming iterations which accept input from the user and store it in the backend. Rather than add these kinds of checks to every single story, they added the following to the team's definition of done. This means it can be checked consistently at story sign-off.

All API changes tested for sanitisation of XSS, SQL and XML injection
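One standard fix behind this definition-of-done item is to bind user input as query parameters rather than splicing it into SQL strings. A minimal sketch, using an in-memory SQLite database as a stand-in for the legacy backends:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def save_name(name):
    # Parameterised query: user input is bound as data, never spliced into the
    # SQL text, so injection payloads are stored as plain text, not executed.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

# A classic injection payload is saved harmlessly as a literal string.
save_name("Robert'); DROP TABLE users;--")
```

A sign-off test can feed payloads like this through the API and assert that the data lands intact and the tables survive.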

Denial of service from the Internet

The security specialist who attended the session from the cyber risk team advised that loss of revenue due to distributed denial of service by online criminals had been highlighted in their work.

Given this requirement involves integrating the software with a third-party security service - in this case a content delivery network - the team wrote a specific story to capture the required work. The security specialist agreed to pair with the team on implementation.

As a cyber risk specialist

I need all Internet facing UI and API requests to pass through the Content Delivery Network

So that we can mitigate loss of revenue due to denial of service by criminals
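Routing traffic through a CDN only mitigates denial of service if attackers cannot bypass it and hit the origin directly. One common pattern - sketched here with a hypothetical header name; real CDNs offer their own mechanisms such as IP allow-lists or origin pull secrets - is for the origin to check a shared secret the CDN injects into every forwarded request:

```python
# Hypothetical sketch: the origin only serves requests that arrived via the CDN,
# detected by a shared secret header the CDN injects into forwarded requests.
CDN_SHARED_SECRET = "example-secret"  # in practice, held in secure configuration

def arrived_via_cdn(headers):
    """True if the request carries the secret header the CDN injects."""
    return headers.get("X-CDN-Secret") == CDN_SHARED_SECRET
```

This gives the story a testable failure condition: requests missing the header are rejected at the origin, so all Internet-facing traffic really does pass through the CDN.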

With the work defined and ready to be added to the backlog, the threat modelling session is complete. Until next time!

Grow your practice

"Did we do a good enough job?"

At the start of this guide, I introduced the three-question structure. But there are actually four questions, because we always need to obtain feedback and improve.

Reflect and keep improving

Feedback and continuous improvement are central to managing risk. Neither the systems we build nor the threats they face are simple, as I stressed at the start of this guide. And every team is different, with different skills, tools, constraints and personalities. There is no single way to threat model; this guide simply provides some basics to get you started. Much like test-driven development or continuous delivery, threat modelling rewards investment.

One way to improve is to perform a retrospective on your threat modelling efforts, once you have run a few sessions. Ask what went well and what could be improved. Is the timing right? Was the scope too granular? Not granular enough? What about the location or remote tools you have used? What issues cropped up after the session? How long did the scope take to deliver? By asking such questions, the team will adapt and build mastery over time, doubling down on what works and discarding what adds little value.

Over time you can grow a practice best suited to your team or organisation. Here are just a few ideas for next steps:

  • Experiment with different types of diagram. For something like an OAuth2 authentication flow, a UML sequence diagram might be better than a simple component diagram.
  • Use domain specific threat libraries. There are many resources that can help you brainstorm threats. OWASP has some great resources around Mobile or APIs. There has been lots of recent interest in using the ATT&CK framework.
  • No practice is a silver bullet. Threat modelling is not an efficient way to find basic coding or dependency issues. Complement threat modelling with automated tools in the software delivery pipeline.
  • Like everything else in software, if it isn't tested then you can't consider it done. Use threat modelling to discover the failure conditions to test for.
  • Run a session on your system's risk profile. This kind of analysis is often focussed on the type of data you are processing and the value of your services to others: in security jargon, take a deep-dive into the business value of your 'assets'.
  • Run a session on broader threats to your system. It is common to analyse the capabilities and motivations of potential attackers based on the best available information, for example threat intelligence, documentation of real attack techniques and real incidents from similar systems.
  • Join threat modelling communities such as the Threat Modelling channel on OWASP Slack, or follow the Threat Modelling subreddit. Follow other folks doing threat modelling on Twitter.

In conclusion

Many 'solutions' in security seem designed to keep security out of the hands of developers. That does not make them bad solutions. Automated checks in the pipeline are effective at finding vulnerabilities - you should use them. Penetration tests can find issues you would not find yourself. Platforms with secure defaults can eliminate many common threats. However, each 'solution' only addresses a limited class of threat, whereas cyber risk is not a simple thing: it is multi-faceted and in constant flux. Understanding this risk is at the core of effective approaches to managing it.

The killer application of threat modelling is promoting security understanding across the whole team. This is the first step to making security everyone's responsibility. Just like business outcomes, quality, integration and infrastructure, security can be central to how teams think about software delivery, rather than something bolted on at the end. In my experience, a team with the muscle memory of threat modelling is a team that proactively manages cyber risk.

I remember being told that threat modelling was just too hard for most development teams. I wasn't willing to accept that was the case; I saw a chance to improve how developers address security. Since then I have helped many teams get started with threat modelling and their journey to understanding security properly. I have never come across a team who found threat modelling or security too hard, particularly when explained in an accessible way. That is exactly what I have tried to do in this guide.

I believe threat modelling is a transformative practice for software development teams: a collaborative, risk-based practice which can be applied continuously. If you are part of a software team, please do send this article to your team and suggest a threat modelling session. Or forward it to whoever is responsible for software security at your organisation. By setting aside a little time to run a session you can get started. By starting to understand the threats, risk and security fixes needed for your system, you are taking a step closer to effective cyber security.

I hope this guide helps your team to start threat modelling. At the very least a session should provide value straight away, with good security stories and acceptance criteria added to your backlog. This method has helped many development teams at ThoughtWorks' clients adopt a holistic approach to cyber security. I hope it is useful to you too.


Acknowledgements

Thanks to Martin Fowler, Adam Shostack, Charles Weir and Avi Douglen for providing detailed, thoughtful feedback on early drafts of this article. Any sophistication or nuance is due to them.

Thanks to Nalinikanth Meesala, Mona Fenzl and Sarah Schmid from ThoughtWorks who helped me with the guidance on running threat modelling sessions remotely - they deserve all the credit for that section of the guide!

Thanks to the folks at the Sociotechnical Group at the UK National Cyber Security Centre (and the Developer Centred Security research portfolio under RISCS) for helping me realise that Threat Modelling is not 'simple'.

Thanks to everyone who participates in the OWASP Threat Modelling project for all the input and the open sharing of wisdom.

Thanks to Katie Larson for her fantastic proof reading, helping to extract readable sentences from my stream of consciousness.

Thanks to Pete Staples and Jeni Oglivy in ThoughtWorks marketing for making the STRIDE cue cards look awesome. And to Matt Pettitt for the help in formatting them for this webpage. Credit to the photographers at Unsplash.com where all the images in this guide are sourced from.

And huge thanks to everyone in ThoughtWorks who has worked with me to refine our approach and guidance on Threat Modelling. But especially Harinee Muralinath, Jaydeep Chakrabarty, Fulvio Meden, Neelu Tripathy and Robin Doherty.

Footnotes

1: Some structured approaches worth mentioning: PASTA and OCTAVE. Such approaches are intended to be implemented by a full-time security specialist, and they are suited to environments where there is a preference for big design upfront. While there is a lot that is helpful, software developers in an agile environment will struggle to meaningfully adopt these techniques. Investing heavily in producing lots of artefacts (usually attack trees!) is a common pattern when following these guides. Then, having 'blown the budget', teams fail to follow through with software changes that reduce risk.

2: Adam Shostack, who has written extensively on threat modelling and has provided feedback on this guide, takes credit for the three-question structure. He also adds a fourth question: "Did we do a good enough job?" I don't disagree with Adam that we need to reflect and improve on the outcomes of our threat modelling. However, I have omitted this question from the basic structure as I believe it can be addressed elsewhere. Iterating and improving based on feedback should be implicit in agile software development, particularly when we are threat modelling 'little and often'. I have included a final section in this guide on how to reflect, improve and grow your team's threat modelling outcomes.

Significant Revisions

28 May 2020: Published final installment

27 May 2020: Published Prioritise and Fix

26 May 2020: Published Brainstorm Threats

20 May 2020: Published Explain and Explore

19 May 2020: Published Preparing to start

18 May 2020: Published first section