CCCU

Week-7: Threat Modelling

Module Code: P19133 (Cybersecurity Fundamentals)

Credits: 20

Module Leader: Ali Jaddoa

Today's session: Threat modelling

  • Theory Session:
    • Introduction to Threat Modelling Concepts and Methodologies.
    • Threat Modelling Frameworks and Practical Examples.
  • Lab: Building a Threat Model for an application.

What is Threat Modelling?

A structured approach to identifying, assessing, and addressing security threats in a system or application.

  • The art and science of assessing whether your defences are sufficient to counter relevant threats.
  • Systematically examining critical questions to identify potential security issues, often without relying on tools, enabling teams to uncover vulnerabilities and strengthen defences.


  • In cybersecurity, we look at assets, possible threats, and the defences or counter-measures we can put in place.
  • Controls reduce risk

Q: How is it different from Risk Assessment (RA)?

  • RA: A compliance activity that evaluates existing systems to measure and document security risks, focusing on what has already been built.
  • TM: A proactive process to identify potential threats and guide changes in current and future system designs to enhance security.
| Aspect | Threat Modelling | Risk Assessment |
|---|---|---|
| Objective | Identifies specific attack paths and methods on systems | Evaluates likelihood and impact of risks on the organisation |
| Focus | Targets potential vulnerabilities and attacker actions | Considers broader impacts, including financial and operational risks |
| Outcome | Informs targeted defences against specific threats | Prioritises risks for overall organisational management |

Important Questions:

  1. What are you building?
  2. What can go wrong?
  3. What are you going to do about it?
  4. Did you do an acceptable job at 1-3?

Back to Basics


  • Vulnerability - Threat - Attack
  • Exploit: a successful attack
  • (NEW) Trust Boundary: where the level of trust changes for data or code

Threats 101

  • Threats represent a potential danger to the security of one or more assets or components
  • Threats can be malicious or accidental, and may come from a natural event, an insider, or an outsider.
  • Threats have certain sources (Social, Operational, Technical, Environmental).
  • A single software choice can result in many threats.
  • Threats exist even if there are no vulnerabilities
    • No relaxing
    • Threats change with system changes
      • How can a change in software result in fewer threats?

Why Threat Modelling?

Enables organisations to proactively identify and mitigate attack vectors, prioritising finite resources for maximum impact.

  • Resource Prioritisation: Focus on critical risks within limited resources.
  • Threat Identification: Recognise potential threat events.
  • Threat Characterisation: Understand attacker TTPs (Tactics, Techniques, and Procedures).
  • Control Measures: Implement effective defences.
  • Lateral Movement Prevention: Limit attackers’ ability to access key assets.
  • Avoiding Blind Spots: Ensure comprehensive threat visibility.

Key Steps in Threat Modelling

  1. Identify Threats: Recognise potential attack vectors (e.g., phishing, weak passwords).
  2. Characterise Threats: Understand attacker methods (e.g., brute-force login).
  3. Prioritise Risks: Focus on high-impact threats within resource limits.
  4. Implement Controls: Use multi-factor authentication, monitor logins.

Outcome: Reduced risk of breaches, stronger security posture.
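
As a minimal sketch of steps 1-3 (threat names, scores, and controls below are illustrative assumptions), candidate threats can be recorded and ranked by a simple likelihood × impact score before controls are chosen:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str        # e.g. "phishing", "brute-force login"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    control: str     # proposed mitigation

    @property
    def risk(self) -> int:
        # Simple qualitative score; real schemes may weight factors differently
        return self.likelihood * self.impact

threats = [
    Threat("Phishing of staff credentials", 4, 4, "Awareness training + MFA"),
    Threat("Brute-force login", 3, 4, "Rate limiting + MFA + login monitoring"),
    Threat("Weak password reuse", 4, 3, "Password policy + breach-list checks"),
]

# Step 3: prioritise high-impact threats within resource limits
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.name}  ->  {t.control}")
```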


Threat Modelling Methodologies:

  • Asset-Centric Approach

  • Attacker-Centric Approach

  • System-Centric Approach


Threat Modelling Methodologies: Asset-Centric Approach

  • Protect valuable assets.

  • Focuses on identifying and safeguarding critical assets such as data, intellectual property, user information, and physical components.

    1. Identify Assets: List all valuable assets needing protection.
    2. Assess Impact: Determine the impact of potential threats to these assets.
    3. Determine Controls: Implement security measures to protect these assets.
  • Benefits: Ensures critical components are adequately protected.

  • Challenges: May overlook vulnerabilities that do not directly threaten the identified assets.
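
A small illustrative sketch of the three steps, with invented asset names, ratings, and controls:

```python
# Asset-centric sketch: identify assets, assess impact, determine controls.
IMPACT = {"high": 3, "medium": 2, "low": 1}

assets = {
    "customer_database": ("high", ["encryption at rest", "strict access control"]),
    "source_code": ("medium", ["private repository", "signed commits"]),
    "marketing_site": ("low", ["CDN with basic WAF rules"]),
}

# Protect the highest-impact assets first
for name, (impact, controls) in sorted(
    assets.items(), key=lambda kv: IMPACT[kv[1][0]], reverse=True
):
    print(f"{name} (impact: {impact}) -> {', '.join(controls)}")
```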


Threat Modelling Methodologies: Attacker-Centric Approach

  • Identify potential attackers and their goals.
  • Focuses on understanding the attackers' motivations, capabilities, and objectives to better anticipate and defend against their actions.
    1. Profile Attackers: Define potential attackers based on motivation, skill level, and resources.
    2. Understand Goals: Identify what attackers aim to achieve (e.g., financial gain, disruption, data theft).
    3. Simulate Attacks: Model how these attackers might exploit vulnerabilities.
  • Benefits: Prepares defences tailored to specific threats.
  • Challenges: Requires ongoing updates to address evolving threats and techniques.

Threat Modelling Methodologies: System-Centric Approach

  • Focus on vulnerabilities within the system’s architecture.
  • Examines the system’s design and interactions to identify and mitigate potential weaknesses.
    1. Analyse Architecture: Map out the system architecture and components.
    2. Identify Vulnerabilities: Look for weaknesses within the design and implementation.
    3. Mitigate Risks: Develop and apply controls to address vulnerabilities.
  • Benefits: Provides a thorough examination of system structure, helping to identify deep-seated vulnerabilities.
  • Challenges: Can be complex and time-consuming, especially for large or intricate systems.

Threat Modelling Stages (Microsoft)



Methodology

  • Based on OWASP (Open Worldwide Application Security Project)

    Step 1 Scope Definition

    Step 2 System Decomposition

    Step 3 Threat Identification

    Step 4 Attack Modelling

    OWASP is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the fields of IoT, system software, and web application security.
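
The OWASP ecosystem also includes pytm, a code-driven threat modelling tool. The sketch below is a minimal, hedged example based on the classes advertised in pytm's documentation (TM, Actor, Server, Datastore, Dataflow, Boundary); all element names are invented.

```python
# Minimal pytm sketch: define scope, decompose the system, and let the
# tool match its built-in threat rules. Names are illustrative only.
from pytm import TM, Actor, Boundary, Dataflow, Datastore, Server

tm = TM("Web app threat model")        # Step 1: scope definition

internet = Boundary("Internet")        # Step 2: system decomposition
internal = Boundary("Internal network")

user = Actor("User")
user.inBoundary = internet

web = Server("Web server")
web.inBoundary = internal

db = Datastore("Customer database")
db.inBoundary = internal

Dataflow(user, web, "Login request")   # data flows crossing a trust boundary
Dataflow(web, db, "SQL query")

tm.process()                           # Steps 3-4: report threats per pytm's rules
```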


Step 1 - Scope Definition

  • Task A: Gather Information

    • Software design document (SDD), technical specification, or any other system-related documentation.
  • Task B: Demarcate the Perimeter Boundary to determine the scope for threat modelling.

    • Components that support the functioning and running of the system e.g. servers, databases, client workstations, hosts, switches, routers etc.

    • Components that support the cybersecurity of the system e.g. firewalls, IDS and IPS.


Step 2 - System Decomposition

Breaking down a system into its different components.

  • Task A: Identify components which a potential attacker may be interested in.



Step 2 - System Decomposition (cont.)

  • Task B: Draw How Data Flows



Step 2 - System Decomposition (cont.)

Task C: Mark Out Trust Boundaries: identify the respective limits of access, as well as the required levels of authorisation (trust levels) granted to subjects.
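
To make trust levels concrete, a hypothetical boundary check might compare a subject's granted trust level against the level a component requires; all names below are invented:

```python
# Hypothetical trust levels and a boundary check; names are illustrative.
TRUST_LEVELS = {"anonymous": 0, "authenticated_user": 1, "admin": 2}

REQUIRED_LEVEL = {          # minimum trust level to cross into each component
    "public_site": 0,
    "user_dashboard": 1,
    "admin_console": 2,
}

def cross_boundary(subject_level: str, component: str) -> bool:
    """Allow a request only if the subject's trust level meets the component's requirement."""
    return TRUST_LEVELS[subject_level] >= REQUIRED_LEVEL[component]

assert cross_boundary("authenticated_user", "user_dashboard")
assert not cross_boundary("anonymous", "admin_console")
```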



Step 3 - Threat Identification

Involves identifying threat vectors and listing threat events.

Useful threat information is:

  • Timely: Information should be received in a timely manner, as outdated information is useless to users;
  • Relevant: Information needs to be relevant to the context of the users. For example, industrial control systems may have different priorities compared to financial institutions; and
  • Actionable: Information should be actionable for the correct group of users. Users must be able to react to information at the appropriate level, e.g. tactical or strategic.

Step 3 - Threat Identification (cont.)

Task A: Identify Threat Vectors:

  • Threat vectors are paths that an attacker can exploit to penetrate a system component or bypass defences.



Step 3 - Threat Identification (cont.)

Task B: List Possible Threat Events:

  • Threat events can be characterised by the tactics, techniques, and procedures (TTPs) employed by the attacker.
  • Common frameworks: STRIDE, PASTA, DREAD, OCTAVE, NIST SP 800-154.


Step 4 - Attack Modelling

Mapping the sequence of an attack, describing the tactics, techniques, and procedures used.

  • Describes the attacker's intrusion approach so that users can identify the mitigation controls needed to defend the system and prioritise their implementation.

  • To model the attack, you can use either:

    • MITRE ATT&CK: a comprehensive matrix of tactics and techniques used by attackers to compromise an organisation's system

    OR

    • Cyber Kill Chain: a series of well-defined stages that attackers are likely to complete in order to achieve their end objective
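
The seven Lockheed Martin Cyber Kill Chain stages are fixed, so they can be enumerated directly; the example actions mapped to each stage below are illustrative only.

```python
from enum import Enum

class KillChainStage(Enum):
    """Lockheed Martin Cyber Kill Chain stage -> illustrative attacker action."""
    RECONNAISSANCE = "Harvest emails, scan the public footprint"
    WEAPONIZATION = "Couple an exploit with a deliverable payload"
    DELIVERY = "Send the payload (e.g. phishing email, USB drop)"
    EXPLOITATION = "Trigger the vulnerability on the victim host"
    INSTALLATION = "Install malware / a persistence mechanism"
    COMMAND_AND_CONTROL = "Open a channel back to the attacker"
    ACTIONS_ON_OBJECTIVES = "Exfiltrate data or disrupt the system"

for stage in KillChainStage:
    print(f"{stage.name}: {stage.value}")
```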

STRIDE (TTP)

Model for identifying computer security threats.


| STRIDE-LM | Threat | Property | Definition |
|---|---|---|---|
| S | Spoofing | Authentication | Impersonating someone or something |
| T | Tampering | Integrity / Access Controls | Modifying data or code |
| R | Repudiation | Non-repudiation | Claiming to have not performed a specific action |
| I | Information Disclosure | Confidentiality | Exposing information or data to unauthorised individuals or roles |
| D | Denial of Service | Availability | Deny or degrade service |
| E | Elevation of Privilege | Authorisation / Least Privilege | Gain capabilities without proper authorisation |
| LM | Lateral Movement | Segmentation / Least Privilege | Expand influence post-compromise; often dependent on Elevation of Privilege |
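
Since the table above is a fixed mapping, it can be carried into tooling as a simple lookup; a minimal sketch (illustrative, not from any particular library):

```python
from enum import Enum

class StrideLM(Enum):
    """STRIDE-LM threat -> security property it violates."""
    SPOOFING = "Authentication"
    TAMPERING = "Integrity / Access Controls"
    REPUDIATION = "Non-repudiation"
    INFORMATION_DISCLOSURE = "Confidentiality"
    DENIAL_OF_SERVICE = "Availability"
    ELEVATION_OF_PRIVILEGE = "Authorisation / Least Privilege"
    LATERAL_MOVEMENT = "Segmentation / Least Privilege"

# Tag a finding with the property it puts at risk (hypothetical finding):
finding = ("Unsigned firmware update", StrideLM.TAMPERING)
print(f"{finding[0]} threatens {finding[1].value}")
```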

Threat Modelling Task Questions (YOUR JOB)

  • What are we working on?
    • diagrams, assets, trust levels, etc.
  • What can go wrong?
    • STRIDE
  • What are we going to do about it?
    • Countermeasures and Mitigation
  • Did we do a good enough job?
    • Evaluation

Example: Threat Modelling for GenAI Integration



Step 1 - Scope Definition

  • Define what part of the system will be threat modelled, focusing on components like input handling, output generation, data storage, and user interaction.

  • Key Questions:

    • What are the primary functions of the system?
    • Who are the system users (both legitimate and potential attackers)?
    • What data does the LLM process, and what are its sources and destinations?

Step 2 - System Decomposition

  • External prompt sources (e.g., users, website content, email bodies, etc.)
  • The LLM model itself (e.g., GPT-4, LLaMA, Claude, etc.)
  • Server-side functions (e.g., LangChain tools and components)
  • Private data sources (e.g., internal documents or vector databases)



Step 2 - System Decomposition & Trust Boundaries

Trust boundary: the point within a system where different levels of trust or different security policies are enforced/applied.


  • TB1: between the external endpoints and the LLM
    • Opportunities for injection
    • Both user and LLM inputs are considered untrusted
    • Two-way trust boundary
    • Cross-site scripting (XSS)
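
Because everything crossing TB1 is untrusted, one common mitigation sketch is to keep the system prompt out of user-controlled text and screen inputs before they reach the model. The deny-list pattern and function names below are illustrative assumptions, not a complete prompt-injection defence.

```python
import re

SYSTEM_PROMPT = "You are a support assistant. Never reveal internal data."

# Naive deny-list screen; real prompt-injection defences need far more than regexes.
SUSPICIOUS = re.compile(r"ignore (all|previous) instructions|system prompt", re.IGNORECASE)

def build_messages(user_input: str) -> list[dict]:
    if SUSPICIOUS.search(user_input):
        raise ValueError("Input rejected: possible prompt injection")
    # Keep roles separate so user text is never concatenated into the system prompt
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

print(build_messages("How do I reset my password?"))
```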

Step 2 - System Decomposition & Trust Boundaries (cont.)


  • TB2: considered when building applications around LLMs
    • Controls along TB2 would mitigate the impact of:
      • an LLM passing input directly into an exec() function
      • an XSS payload being passed back to the user (e.g. by modifying its presentation).
  • TB3: LLMs themselves do not adhere to authorisation controls internally
    • Controls along TB3 would mitigate:
      • the ability for either the LLM or an external user to gain access to sensitive data stores.
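
As a concrete illustration of those controls (function names and the access map are invented), the sketch below escapes LLM output before rendering at TB2 and enforces authorisation outside the LLM at TB3; LLM output should likewise never reach exec() or eval().

```python
import html

def render_to_user(llm_output: str) -> str:
    # TB2 control: treat LLM output as data; escape it so an embedded
    # XSS payload is displayed as text rather than executed in the browser
    return html.escape(llm_output)

# Hypothetical per-user access map; TB3 authorisation lives outside the LLM
ALLOWED_STORES = {"alice": {"public_docs"}, "bob": {"public_docs", "finance_db"}}

def fetch_for_llm(user: str, store: str) -> str:
    # TB3 control: check authorisation before any data-store access on the LLM's behalf
    if store not in ALLOWED_STORES.get(user, set()):
        raise PermissionError(f"{user} may not read {store}")
    return f"<contents of {store}>"

print(render_to_user("<script>alert(1)</script>"))  # prints escaped, inert text
```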

Step 3 - Threat Identification: STRIDE

For each trust boundary we will have:

  • Strengths and weaknesses for each STRIDE item and for each component
  • A list of vulnerabilities


Can we consider hallucinations as a vulnerability?

Use the following slides to discuss.


What about training data poisoning, bias, or hate speech?

Step 4 - Attack Modelling: assessing the work

  • To model the attack, you can use either:

    • MITRE ATT&CK: a comprehensive matrix of tactics and techniques used by attackers to compromise an organisation's system

    OR

    • Lockheed Martin Cyber Kill Chain: a series of well-defined stages that attackers are likely to complete in order to achieve their end objective.
      • Steps below

Lab

  • GenAI Threat Modelling, see here.

  • More resources are available in the book section on the BB page.


- Lab Session 2: Advanced Threat Modelling with STRIDE and Attack Trees.

Users should establish the technical scope, system architecture, and system components before performing threat modelling for a system.

- System users: developers and end-users in various industries

- Potential attackers: anyone

- Data: LLMs process diverse data, including user inputs and large text corpora, with data flowing from user interactions to outputs in applications and databases

---

### Step 3 - Threat Identification: **STRIDE - TB1**

![width:100% height:200px center](../../figures/TB_VUL.png)

- **For the external endpoints**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | | **V1**: Modify system prompt (prompt injection) |
| **2. Tampering** | | **V2**: Modify LLM parameters (temperature, length, model, etc.) |
| **3. Repudiation** | Proper authentication and authorisation (assumed) | |
| **4. Information Disclosure** | | **V3**: Input sensitive information to a third-party site (user behaviour) |
| **5. Denial of Service** | | |
| **6. Elevation of Privilege** | Proper authentication and authorisation (assumed) | |

---

### Step 3 - Threat Identification: **STRIDE - TB1**

![width:100% height:200px center](../../figures/TB_VUL.png)

- **For the LLMs**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | - | - |
| **2. Tampering** | - | - |
| **3. Repudiation** | - | - |
| **4. Information Disclosure** | - | **V4**: LLMs are unable to filter sensitive information (open research) |
| **5. Denial of Service** | - | - |
| **6. Elevation of Privilege** | - | - |

---

### Step 3 - Threat Identification: **STRIDE - TB1**

#### List of vulnerabilities

| V_ID | Description | E.g. |
|---|---|---|
| V1 | Modify system prompt (prompt injection) | Users can modify the system-level prompt restrictions to "jailbreak" the LLM and overwrite previous controls in place. |
| V2 | Modify LLM parameters (temperature, length, model, etc.) | Users can modify API parameters as input to the LLM such as temperature, number of tokens returned, and model being used. |
| V3 | Input sensitive information to a third-party site (user behaviour) | Users may knowingly or unknowingly submit private information such as HIPAA details or trade secrets into LLMs. |
| V4 | LLMs are unable to filter sensitive information (open research area) | LLMs are not able to hide sensitive information. Anything presented to an LLM can be retrieved by a user. This is an open area of research. |
---

### Step 3 - Threat Identification: **STRIDE - TB2**

![width:100% height:200px center](../../figures/TB_VUL_2.png)

- **For the LLMs**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | - | **V5**: Output controlled by prompt input (unfiltered) |
| **2. Tampering** | - | **V5**: Output controlled by prompt input (unfiltered) |
| **3. Repudiation** | - | - |
| **4. Information Disclosure** | - | - |
| **5. Denial of Service** | - | - |
| **6. Elevation of Privilege** | - | - |

---

### Step 3 - Threat Identification: **STRIDE - TB2**

![width:100% height:200px center](../../figures/TB_VUL_2.png)

- **For server-side functions**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | Server-side functions maintain separate access to the LLM from users | - |
| **2. Tampering** | - | **V6**: Server-side output can be fed directly back into the LLM (requires filter) |
| **3. Repudiation** | - | - |
| **4. Information Disclosure** | - | **V6**: Server-side output can be fed directly back into the LLM (requires filter) |
| **5. Denial of Service** | - | - |
| **6. Elevation of Privilege** | - | - |

---

### Step 3 - Threat Identification: **STRIDE - TB2**

#### List of vulnerabilities

![width:100% height:100px center](../../figures/TB_VUL_2.png)

| V_ID | Description | E.g. |
|---|---|---|
| V5 | Output controlled by prompt input (unfiltered) | LLM output can be controlled by users and external entities. Unfiltered acceptance of LLM output could lead to unintended code execution. |
| V6 | Server-side output can be fed directly back into the LLM (requires filter) | Unrestricted input to server-side functions can result in sensitive information disclosure or server-side request forgery (SSRF). Server-side controls would mitigate this impact. |
---

### Step 3 - Threat Identification: **STRIDE - TB3**

![width:100% height:200px center](../../figures/TB_VUL3.png)

- **For the LLMs**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | - | **V5**: Output controlled by prompt input (unfiltered) |
| **2. Tampering** | - | **V5**: Output controlled by prompt input (unfiltered) |
| **3. Repudiation** | - | - |
| **4. Information Disclosure** | - | - |
| **5. Denial of Service** | - | - |
| **6. Elevation of Privilege** | - | - |

---

### Step 3 - Threat Identification: **STRIDE - TB3**

![width:100% height:200px center](../../figures/TB_VUL3.png)

- **Private data sources**

| Category | Strengths | Weaknesses |
|---|---|---|
| **1. Spoofing** | - | - |
| **2. Tampering** | - | - |
| **3. Repudiation** | - | - |
| **4. Information Disclosure** | - | **V7**: Access to sensitive information |
| **5. Denial of Service** | - | - |
| **6. Elevation of Privilege** | - | - |

---

### Step 3 - Threat Identification: **STRIDE - TB3**

#### List of vulnerabilities

![width:100% height:200px center](../../figures/TB_VUL3.png)

| V_ID | Description | E.g. |
|---|---|---|
| **V5** | Output controlled by prompt input (unfiltered) | LLM output can be controlled by users and external entities. Unfiltered acceptance of LLM output could lead to unintended code execution. |
| **V7** | Access to sensitive information | LLMs have no concept of authorisation or confidentiality. Unrestricted access to private data stores would allow users to retrieve sensitive information. |

---

### Recommendations for Mitigation

| REC_ID | Recommendations for Mitigation |
|---|---|
| **REC1** | Avoid training LLMs on non-public data. Treat all LLM output as untrusted and enforce data/action restrictions based on LLM requests. |
| **REC2** | Limit exposed API surfaces to external prompts. Treat all external inputs as untrusted, applying filtering where needed. |
| **REC3** | Educate users on safe behaviour during signup and provide consistent notifications when connecting to the LLM. |
| **REC4** | Do not train LLMs on sensitive data. Enforce authorisation controls at the data source level, not within the LLM. |
| **REC5** | Treat LLM output as untrusted; apply restrictions before using it in other functions to mitigate malicious prompt impact. |
| **REC6** | Filter server-side function outputs and sanitise sensitive information before retraining or returning output to users. |
| **REC7** | Treat LLM access like typical user access. Enforce authentication/authorisation controls prior to data access, as LLMs cannot protect sensitive information. |

---

### Step 4 - Attack Modelling: Lockheed Martin Kill Chain

![width:100% height:500px center](../../figures/CKC.png)