INSIGHTS | March 3, 2025

Preparing for Downstream Attacks on AI Systems

The tech industry must manage AI security threats with the same eagerness it has for adopting the new technologies.

New Technologies Bring Old and New Risks

AI technologies are new and exciting, making new use cases possible. But despite the enthusiasm with which organizations are adopting AI, the supply chain and build pipeline for AI infrastructure are not yet sufficiently secure. Business, IT, and cybersecurity leaders have considerable work to do to identify the issues and resolve them, even as they help their organizations streamline adoption in a complex global environment with conflicting regulatory requirements.

Background

As AI technologies become integrated into critical business operations and systems, they become increasingly attractive targets for malicious threat actors who may discover and exploit the new vulnerabilities present in the new technologies. That should reasonably concern any CISO or CIO.

Attackers certainly have plenty of opportunity due to the rapid adoption of AI capabilities. Today businesses use AI to improve their operations, and a Forbes survey notes there is extensive adoption in customer service, customer relationship management (CRM), and inventory management. More use cases are on the way as products mature. In January 2024, global startup data platform Tracxn reported there were 67,199 AI and machine learning startups in the market, joining numerous mature AI companies.

The swift uptick in AI adoption means these new systems have capabilities and vulnerabilities yet to be discovered and managed, which serves as a significant source of latent risk to an organization, particularly when the applications touch so much of an organization’s data.

AI infrastructure encompasses several components and systems that support models’ development, deployment, and operation. These include data sources (such as datasets and data storage), development tools, computational resources (such as GPUs, TPUs, IPUs, cloud services, and APIs), and deployment pipelines. Most organizations source most of the hardware elements from external vendors, who in turn become critical links in the supply chain.

Naturally, anything a business depends on needs to be protected, and security has to be built in. Risks and mitigation options should be identified early across the full stack of hardware, software, and services supply chain to manage risks as well as anticipate and defend against threats.

Also, any new foundational elements in an organization’s infrastructure create new complexities; as Scotty pointed out in Star Trek III: The Search for Spock, “The more they overthink the plumbing, the easier it is to stop up the drain.”

The Frailties in the AI Build Pipeline

To coin another apt movie quote, “With great power comes great responsibility.” AI offers tremendous power – accompanied by new security concerns, many yet to be identified.

It shouldn’t only be up to technical staff to uncover the risks associated with integrating AI solutions; both new and familiar steps should be taken to address the risks inherent in AI systems. The build pipeline for AI typically involves several stages, often in iteration, similar to DevOps and CI/CD pipelines. The AI world includes new deployment teams: AIOps, MLOps, LLMOps, and more. While these new teams and processes may have different names, they perform common core functions.

Broadly, attack vectors can be found in four major areas:

  • Development: Data scientists and developers write and test code for model libraries. Data is collected, cleaned, and prepared for training. Some data may come from third parties or be generated for a vertical market. Applications are built based on these models, with the goal of improving data analysis so that people can make better decisions.
  • Training: The AI models are trained using the collected datasets, which depend on complex algorithms and use substantial computational power. The organization or its external provider validates and tests Large Language Models (LLMs) and others to meet quality and performance criteria.
  • Deployment: The organization deploys the application and data models to production environments. This may involve several DevOps practices, such as containerization (such as Docker), orchestration (such as Kubernetes), and application integration schemes (such as APIs and microservices).
  • Monitoring and maintenance: As with any other enterprise system, the software supply chain for AI systems requires performance monitoring and the standard complement of updates and patches. AI systems add more to the list, such as model performance monitoring.

What Could Possibly Go Wrong?

What Couldn’t?

Security professionals are trained to see the weak points in any system, and the AI supply chain and build pipeline are no exception. Attack surface is present at each step in the AI build pipeline, adding to the usual areas of concern in software development and deployment.

Poisoning the Data

The most exposed element is the data itself.“Garbage in, garbage out” is an old tenet of computer science that describes no amount of processing can turn garbage data into useful information. Worse outcomes are a consequence of an intentional effort to degrade the dataset, especially when that degradation is surreptitious, subtle, and impactful. Malicious data injected into training datasets can corrupt or bias AI models to create misleading outputs, intentionally generating incorrect predictions or decisions. Over time, malicious actors will be motivated to develop more sophisticated techniques to evade detection and to poison larger datasets, including the third-party data on which many IT systems rely.

An attacker who could gain from compromising model integrity might inject corrupted data into a training database or hack the data collection tools to insert biased data intentionally. They may craft malicious prompts to mislead LLMs into suggesting inaccurate outputs, comparable to the way Twitter bots affect trending topics.

While the term “poisoning” might suggest a deliberate intent to manipulate data and affect the model’s output, much like an intentional backdoor coded into a program, bias can also be introduced by accident, like an unintentional coding error that results in a bug that could be exploited by a threat actor. IOActive previously identified bias resulting from poor training data set composition in facial recognition in commercially available mobile devices. The presence of these unintentional biases makes the detection and response to poisoning more complex and resource intensive.

Many LLMs are trained on massive data oceans culled from the public internet, and there is no realistic way to separate the signal from the noise in those datasets. Scraping the internet is a simple and efficient way to access a large dataset, but it also carries the risk of data poisoning, whether deliberate or incidental.

While unintended poisoning is a known and accepted problem – compounded by the fact that LLMs trained on public datasets are now ingesting their own output, some of which is incorrect or nonsensical – deliberate data poisoning is much more complicated. The use of public datasets enables anyone, including malicious actors, to contribute to them and poison them in any number of ways, and there’s not much that LLM designers can do about it. In some cases, this recursive training with generated data can result in model collapse, which offers an intriguing new attack impact for malicious threat actors.

This will, at minimum, add to the burden of the AI training process. Database, LLM, and application testing needs to expand beyond “Does it work?” and “Is its performance acceptable?” to “Is it safe?” and “How can we be sure of that?”

Example: DeepSeek’s Purposeful Ideological Bias

In some cases, there is obvious ideological bias purposefully introduced into models to comply with local regulations that further the ideological and public relations goals of the controlling authority. Companies operating under repressing regimes have no choice but to produce intentionally flawed LLMs that are politically indoctrinated to comply with the local legal requirements and worldview.

Many companies and investors experienced shock, when news of DeepSeek’s training and inference costs were widely disseminated in January 2025. As numerous people evaluated the DeepSeek model, it became clear that it adhered to the People’s Republic of China (PRC) propaganda talking points, which come directly from the carefully cultivated worldview of the Chinese Communist Party (CCP). DeepSeek had no choice in falsifying the facts related to events like the Tiananmen Square Massacre, repression of the Uyghurs, the coronavirus pandemic, and the Russia Federation’s invasion of Ukraine.

While these examples of censorship may not seem like an immediate security concern, organizations integrating LLMs into critical workflows should consider the risks of relying on models controlled by entities with heavy-handed content restrictions.

If an AI system’s outputs are influenced by ideological filtering, it could impact decision-making, risk assessments, or even regulatory compliance in unforeseen ways. Dependence on such models introduces a layer of opaque external control, which could become a security or operational risk over time.

Failing to Apply Access Controls

Not every security issue is due to ill intent, but while ignorance is more common than malice, it can be just as dangerous.

Imagine a scenario where a global organization builds an internal AI solution that handles confidential data. For instance, the tool might enable staff to query its internal email traffic or summarize incoming emails. Building such a system requires fine-tuned access control. Otherwise, with a bit of clever prompt engineering or random dumb luck, the AI model would cheerfully display inappropriate emails to the wrong people, such as personal employee data or the CEO’s discussion of a possible acquisition.

That’s absolutely a privacy and security vulnerability.

Risks From AI-enabled Third-party Products

Most application and cloud service products – including many security products – now include some form of AI features, from a simple chatbot on a SaaS platform to an XDR solution backed by deep AI analysis. These features and their associated attack surface are present even if they add zero value and an unwanted by the customer.

While AI-based features potentially offer greater efficiency and insights for security teams, the downside is that customers have little or no insight into the functioning, risks, and impacts from AI systems, LLMs, and the foundational models those products incorporate. The opacity of these products is a new risk factor that enterprises need to be aware of and take into account when assessing whether to implement a given solution.

While quantifying that risk is difficult, if not impossible, it’s vital that enterprise teams perform risk assessments and get as much information from vendors as possible about their AI systems.

Compromising the Deployment Itself

Malicious threat actors can target software dependencies, hardware components, or integration points. Attacks on CI/CD pipelines can introduce malicious code or alter AI models to generate backdoors in production systems. Misconfigured cloud services and insecure APIs can expose AI models and data to unauthorized access or manipulation.

These are largely commonplace issues across any enterprise application system. However, given the newness of AI software libraries, for instance, it is unwise to rely on the sturdiness of any component. There’s a difference between “nobody has broken in yet” and “nobody has tried.” Eventually, someone will try and succeed.

A malicious AI attack could also be achieved through unauthorized API access. It isn’t new for bad actors to exploit API vulnerabilities, and in the context of AI applications, attacks like SQL injection can wreak widespread havoc.

These are just a few of the possibilities. Additional vulnerabilities to consider:

  • Model Extraction[1][2][3][4]
  • Reimplementing an AI model using reverse engineering
  • Conducting a privacy attack to extract sensitive information by analyzing the model outputs and inferring the training data based on those results
  • Embedding a backdoor in the training data, which is triggered after deployment

While it’s difficult to know which attack vectors to worry about most urgently today, unfortunately, bad actors are as innovative as any developers.

Devising an AI Infrastructure Security Plan

To address these potential issues, organizations should focus on understanding and mitigating the attack surfaces, just as they do with any other at-risk assets.

Two major tools for better securing the AI supply chain are MITRE ATLAS and AI Red Teaming. These tools can work in combination with other evolving resources, including the US National Institute of Standards (NIST) Artificial Intelligence Risk Management Framework (AI RMF) and supporting resources.

MITRE ATLAS

The non-profit organization MITRE offers an extension of its MITRE ATT&CK framework, the Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS). ATLAS includes a comprehensive knowledgebase of the tactics, techniques, and procedures (TTPs) that adversaries might use to compromise AI systems. These offer guidance in threat modeling, security planning, and training and awareness. The newest version boasts enhanced detection guidance, expanded scope for industrial control system assets, and mobile structured detections.

ATLAS maps out potential attack vectors specific to AI systems like those mentioned in this post, such as data poisoning, model inversion, and other adversarial examples. The framework aids in identifying vulnerabilities within the AI models, training data, and deployment environments. It’s also an educational tool for security professionals, developers, and business leaders, providing a framework to understand AI systems’ unique threats and how to mitigate them.

ATLAS is also a practical tool for secure development and operations. Its guidance includes securing data pipelines, enhancing model robustness, and ensuring proper deployment environment configuration. It also outlines detection practices and procedures for incident response should an attack occur.

AI Red Teaming

AI Red Team exercises can simulate attacks on AI systems to identify vulnerabilities and weaknesses before malicious actors can exploit them. In their simulations, Red Teams use techniques similar to real attackers’, such as data poisoning, model manipulation, and exploitation of vulnerabilities in deployment pipelines.

These simulated attacks can uncover weaknesses that may not be evident through other testing methods. Thus, AI Red Teaming can enable organizations to strengthen their defenses by implementing better data validation processes, securing CI/CD pipelines, strengthening access controls, and similar measures.

Regular Red Team exercises provide ongoing feedback, allowing organizations to continuously improve their security posture and adapt to evolving threats in the AI landscape. It’s also a valuable training tool for security teams, helping them improve their overall readiness to respond to real incidents.

Facing the Evolving Threat

As AI/ML technology continues to evolve and is used in new applications, new attack vectors, vulnerabilities, and risks will be identified and exploited. Organizations who are directly or indirectly exposed to these threats must expend effort to identify and manage these risks, working to mitigate the potential impact from the exploitation of this new technology.


[1] https://paperswithcode.com/task/model-extraction/codeless

[2] https://dl.acm.org/doi/fullHtml/10.1145/3485832.3485838

[3] https://arxiv.org/pdf/2312.05386

[4] https://people.duke.edu/~zg70/courses/AML/Lecture14.pdf

INSIGHTS, RESEARCH | February 4, 2025

New Academic Paper: Extraction of Secrets from 40nm CMOS Gate Dielectric Breakdown Antifuses by FIB Passive Voltage Contrast

In my previous blog post titled “Novel Invasive Attack on One-Time-Programmable Antifuse Memory,” and my post introducing IOActive’s silicon security eGuide titled “Threat Brief: Low-level Hardware Attacks,” I alluded to the fact that IOActive would be releasing a preprint academic paper on our novel attack technique for one-time-programmable (OTP) antifuse memory.

The lead researcher on this topic, Dr. Andrew Zonenberg, is a keynote speaker at the Hardware Reverse Engineering Workshop (HARRIS 2025), which will be held on the 17th and 18th of March 2025 in Bochum, Germany. Additional details are available in our blog post titled “Hardware Reverse Engineering Workshop (HARRIS) 2025.”

We have submitted this preprint paper to an academic conference and will share those details in a future blog post, should the paper be accepted.

Abstract

CMOS one-time-programmable (OTP) memories based on antifuses are widely used for storing small amounts of data (such as serial numbers, keys, and factory trimming) in integrated circuits (ICs) due to their low cost, as they require no additional mask steps to fabricate. Device manufacturers and IP vendors have claimed for years that antifuses are a “high security” memory that is significantly more difficult for an attacker to extract data from than other types of memory, such as flash or mask ROM; however, as our results show, this is untrue. In this paper, we demonstrate that data bits stored in a widely used antifuse block can be extracted by a semiconductor failure analysis technique known as passive voltage contrast (PVC) using a focused ion beam (FIB). The simple form of the attack demonstrated recovers the bitwise OR of two physically adjacent memory rows sharing common metal 1 contacts; however, we have identified several potential mechanisms by which it may be possible to read the even and odd rows separately. We demonstrate the attack on a commodity microcontroller made on the 40nm node and show how it can be used to extract significant quantities of sensitive data (such as keys for firmware encryption) in time scales that are very practical for real-world exploitation (one day of sample prep plus a few hours of FIB time), requiring only a single target device after initial reconnaissance has been completed on blank devices.

Supporting Open Science

Most of us who work on cutting-edge technologies and supporting science find our efforts stymied by closed access to general knowledge academic papers due to the antiquated journal model of publishing. We do not face these problems with our original cybersecurity research. We publicly present at cybersecurity conferences and publish our findings openly on the Internet in a responsible manner after completing a coordinated, responsible disclosure process with simple copyright protection.

We have chosen to release this academic paper under the Creative Commons license, specifically the CC-BY-NC-SA variant, which allows for non-commercial use with proper attribution, including derivative works, so long as they use the same license variant. Our intention is to support independent researchers and researchers at institutions without the significant budgets needed to acquire every academic journal and to encourage more research in this highly impactful area of interest.

This paper describes the PVC fuse extraction technique and sample preparation in sufficient detail to enable other research groups to replicate the work. Key microscope configuration parameters are included in the image databars.

The full physical address map of the RP2350 is included in the appendix, enabling other groups to easily program test devices with test patterns of their choice and experiment with data extraction techniques.

A series of Python scripts for converting a linear fuse dump from the “picotool” utility to a physically addressed ASCII art render (of both the individual bit values and the OR’d values seen via PVC), as well as for converting a desired test pattern to a linear fuse map, have been uploaded to an anonymous pastebin for review. The camera-ready version of the paper will link to a more permanent GitHub repository or similar.

Client Confidentiality Commitment

Our commitment to client confidentiality is sacrosanct and never impacted by our commitment to open science. We think it’s important to reinforce this point to remove any ambiguity about the strict separation between our research and our client work. On occasion, a project deliverable may be a research or assessment report made public at the request of the client due to their interest in disseminating the results for the public good. To avoid the appearance of a conflict of interest, we always note when we receive any material funding from a sponsor for the production of a report. The assessment reports released on client direction as part of Open Compute Project’s OCP SAFE program exemplify this practice, one of which you can find here. Other examples include studies we have conducted for our clients, such as on the security and performance of WiFi and 5G and the attack surface between generations of Intel processors.

Acknowledgments

The authors would like to thank Raspberry Pi for their cooperation throughout the competition and disclosure process, as well as Entropic Engineering for assistance with procuring scarce RP2350 samples shortly after the device had been released.

Additional Reading

IOActive recently released an eGuide titled “The State of Silicon Chip Hacking,” which is intended to make the very opaque topic of low-level attacks on microchips and ICs more accessible to security team members, business leaders, semiconductor engineers, and even laypersons. This eGuide is meant to be clear, concise, and accessible to anyone interested in the topic of low-level hardware attacks with an emphasis on invasive attacks using specialized equipment. To increase the accessibility of the eGuide to all readers, we made an effort to include high-quality graphics to illustrate the key concepts related to these attacks.

INSIGHTS | January 20, 2025

Threat Brief: Low-level Hardware Attacks

Low-level Hardware Attacks: No Longer an Emerging Threat

As organizations have improved their cybersecurity posture, motivated attackers compensate by looking for other attack vectors to continue to achieve their objectives. Efforts to improve system and device security have produced a greater availability and reliance on hardware security features, and as component and device designers continue to innovate with new security features, attackers continue to innovate and share new tools and attack techniques. Increasingly, this means that to provide solid security posture for a component, device, or system, a full-stack perspective on security is mandatory.

More effort from developers is required to understand and increase resistance to low-level hardware attacks, including:

  • Manufacturers of typical microchips and embedded devices that require features such as secure boot and resistance to firmware extraction
  • Manufacturers of specialty microchips and integrated circuits (ICs) with security features such as secure elements, secure enclaves, encrypted read only memory (ROMs), hardware root of trust, etc.
  • Product manufacturers who depend on microchips with key security features as a core control of their overall product or ecosystem security model

Defending against low-level hardware attacks is critical for most every organization today, but the topic remains largely inaccessible to the key stakeholders who can act to improve security posture to better resist and defend against these attacks. To support improved security posture on these critical components, IOActive is making new materials available to the community to aid in understanding the attack vectors and threats.

IOActive is very pleased to announce the release of our eGuide titled “The State of Silicon Chip Hacking,” which is intended to make the very opaque topic of low-level attacks on microchips and ICs more accessible to security team members, business leaders, semiconductor engineers, and even laypersons.

Furthermore, in the coming months, we will publish some original cybersecurity research related to low-level attacks on specialized memory in microchips and integrated circuits (ICs), and formerly proprietary intelligence (PROPINT) about the capabilities of malicious threat actors targeting microchips and ICs.

BACKGROUND

Evolving Security Posture and Attack Vectors

Within the last decade, low-level hardware attacks at the microchip and IC level have become more appealing to attackers as many organizations have become much better at the fundamentals of cybersecurity that include cyber hygiene, vulnerability management, secure development,[1][2][3] and the many other components in a modern cybersecurity program or framework such as NIST’s CSF. As organizations became more mature in their cybersecurity capabilities, including sophisticated response and threat hunting capabilities offered for their traditional information technology (IT) and operational technology (OT) environments either by an internal security operation center (SOC) or a trusted third-party service provider, threat actors have shifted their focus to other areas of opportunity where less defensive effort had been expended to date, such as supply chain[4][5] and hardware attack vectors.

A recent example of this is the high-impact supply chain attack by Salt Typhoon, a People’s Republic of China (PRC) Ministry of State Security (MSS) affiliated threat actor. This attacker compromised the major U.S. mobile network operators to enable espionage and counterintelligence operations, hyper-targeted cyber operations against high value targets, and the circumvention of the ridiculously weak short message system (SMS) multi-factor authentication (MFA) implementations used far too frequently today.

The more secure an organization itself, the more attractive that organization’s upstream supply chain becomes as an attack vector to a dedicated attacker. A sophisticated threat actor seeks the easiest and least attributable pathway into a target; today the path of least resistance and risk for an attacker is often one of the target’s Tier 1 or Tier 2 suppliers who have an exploitable vulnerability that can provide full access into the target’s network. Other times, the actor has chosen an attack vector of a software library or hardware component. For example, we have recently seen an attacker attempt to compromise the XZ Utils compression library in an effort to subvert the effectiveness of OpenSSH.[6][7]

Greater Reliance on Hardware Security

One outcome of these efforts to improve security is a greater reliance on hardware components and devices to improve the overall system’s security, especially when an attacker has physical access through the protection of key data such as cryptographic keys, whose compromise would result in the compromise of the device or all devices. Key hardware technologies and implementations such as secure boot (e.g., UEFI), trusted execution environments (TEEs), and hardware roots of trust have been created and refined to reduce the likelihood of a software compromise. Many of these key hardware security measures are standardized for the industry or a manufacturer to reuse on multiple products and devices. Perhaps most attractive to device developers, these components are frequently integrated into system-on-a-chip (SOC) components that provide a broad set of features, including hardware security capabilities, into a single package.

This greater availability, utilization, and criticality of hardware security has pushed threat actors to develop new tooling, tactics, techniques, and procedures to achieve their operational or strategic objectives.

Full-stack Perspective

Today’s sophisticated threat actors are capable of successfully attacking at any level of the technology and operational stack including hardware, software, people, processes, and the supply chain. This necessitates much more thoughtful risk management and defense. Moreover, the advanced, low-level hardware attack techniques outlined in our eGuide have become much more democratized and accessible to many of today’s threat actors. There is no expectation that this trend will abate. A successful, low-level hardware attack can compromise an entire organization, its customers, and even its suppliers.

Globalization Consequences

With the admission of the PRC to the World Trade Organization (WTO) in December 2001, the world experienced an extremely rapid period of globalization, which transformed the global economy and its supply chains. This supply chain globalization has actually made our supply chains longer, more geographically dispersed, much more complex and less resilient. Today, a product may have to go through multiple countries and hundreds of suppliers before it’s complete, offering more opportunities for things to go wrong from a supply chain risk perspective, whether accidental or intentional. Within the last several years we have seen numerous high-impact supply chain disruptions.

The 2020 pandemic painfully illustrated the vulnerabilities of the global supply chain to disruption from a virus and the associated government response. Significant microchip shortages beginning in 2021 deeply impacted the automotive industry on a global basis with an estimated impact of a more than 10% reduction in global light-vehicle production in 2021. In March 2021, the container ship Ever Given was grounded in the Suez Canal, causing significant disruptions to shipping and canal transits. In late 2023, we saw huge disruptions to the transit of the Red Sea and Suez Canal, which forms a key link between Asia and Europe, from kinetic attacks by the Houthis, an Iranian proxy group based in Yemen. Arguably, the consequences of the Ever Given incident gave the Houthi strategists a model with which to understand the consequences of severing the link between Europe and the Indo-Pacific through piracy and kinetic strikes.

Perhaps most concerning is the fact that these long, complex supply chains frequently have key nodes in locations under the control of unfriendly, malign,  or adversarial countries who are seeking to penetrate, compromise, and hold at risk critical infrastructure and information of their perceived adversaries.

THREATSCAPE: INCREASING RISKS

The confluence of the above trends has created significantly greater risks that as previously favored attack vectors become more challenging, costly, or attributable, organizations almost certainly will be targeted through low-level hardware attacks. There are many more threat actors today capable of launching low-level hardware attacks. The nature of the global semiconductor supply chain gives the most motivated and well-funded threat actors excellent opportunities to compromise microchips, ICs, and digital devices before they make it to the end user. Increasingly, these threat actors will require low-level hardware attack techniques to continue to meet their operational and strategic objectives.

To support improved security posture on these critical components, we are making new materials available to the community to aid in understanding the attack vectors and threats.

Upcoming Research Publications

In the coming months, we will publish some original cybersecurity research related to low-level attacks on specialized memory in microchips and ICs after completion of our responsible disclosure process. In addition, we will publish some previous work we had performed by a third party to help us assess and develop PROPINT about the capabilities of malicious threat actors to reverse engineer microchips and ICs for the purpose of intellectual property theft.

The State of Silicon Chip Hacking

IOActive is pleased to announce the release of eGuide titled “The State of Silicon Chip Hacking,” which is intended to make the very opaque topic of low-level attacks on microchips and ICs more accessible to security team members, business leaders, semiconductor engineers, and even laypersons. This eGuide is meant to be clear, concise, and accessible to anyone interested in the topic of low-level hardware attacks with an emphasis on invasive attacks using specialized equipment. To increase the accessibility of the eGuide to all readers, we made an effort to include high quality graphics to illustrate the key concepts related to these attacks.


[1] https://www.sei.cmu.edu/our-work/secure-development/

[2] https://www.microsoft.com/en-us/securityengineering/sdl/practices

[3] https://www.ncsc.gov.uk/collection/developers-collection/principles

[4] https://www.ncsc.gov.uk/collection/supply-chain-security/supply-chain-attack-examples

[5] https://github.com/cncf/tag-security/blob/main/supply-chain-security/compromises/README.md

[6] https://www.darkreading.com/cyber-risk/xz-utils-backdoor-implanted-in-intricate-multi-year-supply-chain-attack

[7] https://nvd.nist.gov/vuln/detail/CVE-2024-3094

INSIGHTS, RESEARCH | January 14, 2025

Novel Invasive Attack on One-Time-Programmable Antifuse Memory

Antifuse-based OTP Memory Attack May Compromise Cryptographic Secrets

Complementary metal-oxide semiconductor (CMOS) one-time programmable (OTP) memories based on antifuses are widely used for storing small amounts of data (such as serial numbers, keys, and factory trimming) in integrated circuits (ICs) because they are inexpensive and require no additional mask steps to fabricate. The RP2350 uses an off-the-shelf Synopsys antifuse memory block for storing secure boot keys and other sensitive configuration data.

Despite antifuses being widely considered a ”high security” memory—which means they are significantly more difficult for an attacker to extract data from than other types of memory, such as flash or mask ROM—IOActive has demonstrated that data bits stored in the example antifuse memory can be extracted using a well-known semiconductor failure analysis technique: passive voltage contrast (PVC) with a focused ion beam (FIB).

The simple form of the attack demonstrated here recovers the bitwise OR of two physically adjacent memory bitcell rows sharing common metal 1 contacts, however, we believe it is possible for an attacker to separate the even/odd row values with additional effort.

Furthermore, it is highly likely that all products using the Synopsys dwc_nvm_ts40* family of memory IPs on the TSMC 40nm node are vulnerable to the same attack, since the attack is not specific to the RP2350 but rather against the memory itself.

IOActive has not yet tested our technique against other vendors’ antifuse intellectual property (IP) blocks or on other process nodes. Nonetheless, IOActive assessed it to have broad applicability to antifuse-based memories.

Security models which mandate a per-device cryptographic secret are not exposed to significant additional risk, but products which have a shared cryptographic secret stored in the antifuse-based OTP memory are at substantial additional risk from the compromise of a shared secret through this invasive attack.

BACKGROUND

IOActive’s Advanced Low-level Hardware Attacks

IOActive has been building out the depth and coverage of our full-stack cybersecurity assessment capabilities for decades. In recent years, we have been developing exciting new capabilities to dive deeper into the lowest levels of the technology stack using non-invasive, semi-invasive, and fully invasive hardware attack techniques. The key areas of recent focus have been on side channel analysis, fault injection, and fully invasive microchip and IC attacks.

We’ve published original research on side channel analysis on electromechanical locks and fault injection attacks on uncrewed aerial systems (UAS) to demonstrate our capabilities and bring attention to these impactful attack vectors. We’ve been looking for some research projects to demonstrate our fully invasive attack techniques and an industry leader gave us a great opportunity.

IOActive will be sharing more materials in the coming months about advanced, low-level hardware attacks.

Raspberry Pi

Raspberry Pi has an excellent record of producing innovative low-cost, high-value computing hardware that is a favorite of hobbyists, hackers, product designers, and engineers. IOActive has used numerous Raspberry Pi products in our research projects and specialized internal tools to support our commercial cybersecurity engagements. One of the more recent IOActive uses of Raspberry Pi hardware was to build an upgraded remote testing device (RTD) that allowed us to continue delivering our cybersecurity assessments during the pandemic that previously required our consultants to be on site at a client’s facility. The RTD allowed us to continue operating during the lockdowns and Raspberry Pi allowed us to do so with a high-performance, low-cost device. It’s fair to say we’re fans.

RP2350

The RP2350 microcontroller is a unique dual-core, dual-architecture with a pair of industry-standard Arm Cortex-M33 cores and a pair of open-hardware Hazard3 RISC-V cores. It was used as a key component in the DEF CON 2024 (DEF CON 32) Human badge. IOActive sponsored the 2024 car hacking badge for DEF CON’s Car Hacking Village. IOActive has participated in DEF CON since our founding. Companies who participate in DEF CON demonstrate an above average commitment to cybersecurity and the security of their products.

The RP2350 datasheet and product brief highlight the significant effort that Raspberry Pi made into democratizing the availability of modern security features in the RP2350. The RP2350 provides support for Arm TrustZone for Cortex-M, signed boot, 8-KB of antifuse OTP memory for cryptographic key storage, SHA-256 acceleration, a hardware true random number generator (TRNG), and fast glitch detectors.

IOActive strongly recommends that users of the RP2350 employ these advanced security features.

RP2350 Security Challenge

Building on their commitment to product security, the Raspberry Pi team, in addition to participating in DEF CON, also sponsored a Hacking Challenge using the RP2350 with a bug bounty. The objective or “flag” of this challenge was to “find an attack that lets you dump a secret hidden in OTP ROW 0xc08 – the secret is 128-bit long and protected by OTP_DATA_PAGE48_LOCK1 and RP2350’s secure boot!”

This challenge was designed to provide independent validation and verification that the security features present in the RP2350 were properly designed and implemented. In particular, the challenge was focused on determining the resistance of the product to fault injection, specifically glitching attacks.

Companies, like Raspberry Pi, who understand that no designer can completely check their own work and therefore leverage security consultants and researchers to independently assess the security of product designs and implementations, ship more secure products.

Raspberry Pi has a blog post covering the RP2350 Hacking Challenge here.

INVASIVE READING OF ANTIFUSE-BASED OTP MEMORY

Attack Overview

An attacker in possession of an RP2350 device, as well as access to semiconductor deprocessing equipment and a focused ion beam (FIB) system, could extract the contents of the antifuse bit cells as plaintext in a matter of days. While a FIB system is a very expensive scientific instrument (costing several hundred thousand USD, plus ongoing operating expenses in the tens of thousands per year), it is possible to rent time on one at a university lab. These costs are low enough to be well within the realm of feasibility in many scenarios given the potential value of the keys in the device.

The attack can theoretically be performed with only a single device and would take a skilled attacker approximately 1-2 weeks of work to perform the initial reverse engineering and process development on blank or attacker-programmed test chips. Actual target devices would take 1-2 days per chip to prepare the sample and extract a small amount of data such as a key; a full fuse dump might require an additional day of machine time for imaging of the entire array.

As with any invasive attack, there is a small chance of the device being damaged during handling and deprocessing, so a real-world attacker would likely procure several samples to ensure a successful extraction.

The attack is based on established semiconductor failure analysis techniques and is not specific to the RP2350; it is likely that similar techniques can be used to extract data from other devices with antifuse-based memory.

Suggested User Mitigation

Users of the RP2350 can mitigate the basic form of the attack by using a “chaffing” technique taking advantage of the paired nature of the bit cells. By using only the low half (words 0-31) or high half (words 32-63) of a page to store cryptographic keys, and storing the complement of the key in the opposite half of the page, each pair of bit cells will have exactly one programmed and one unprogrammed bit. Since the basic passive voltage contrast (PVC) technique cannot distinguish between the two bits sharing the common via, the attacker will see the entire page as being all 1s.

This mitigation does not provide complete protection, however: by taking advantage of the circuit-edit capabilities of a FIB, an attacker could likely cut or ground the word lines being used for chaffing and then use PVC to read only the key bits. We intend to explore this extended attack in the future but have not yet tested it.

Consequences

This novel attack technique of using PVC to invasively read out data from antifuse-based OTP memory calls into question the security model of any device or system, which assumes that the OTP memory cannot be read by an attacker with physical control of the device. Security models which mandate a per-device cryptographic secret are not exposed to significant additional risk, but products which have a shared cryptographic secret stored in the antifuse-based OTP memory are at substantial additional risk from the compromise of a shared secret through this invasive attack.

The simple form of the attack demonstrated here recovers the bitwise OR of two physically adjacent memory bitcell rows sharing common metal 1 contacts, however, we believe it is possible for an attacker to separate the even/odd row values with additional effort.

Furthermore, it is highly likely that all products using the Synopsys dwc_nvm_ts40* family of memory IPs on the TSMC 40nm node are vulnerable to the same attack, since the attack is not specific to the RP2350 but rather against the memory itself.

IOActive has not yet tested our technique against other vendors’ antifuse IP blocks or on other process nodes. Nonetheless, IOActive assessed it to have broad applicability to antifuse-based memories.

Additional Information

IOActive’s full disclosure report is available here. When available, a preprint paper, which has been submitted to an academic conference will be shared on the IOActive website.

Dr. Andrew Zonenberg is a keynote speaker on this topic at the Hardware Reverse Engineering Workshop (HARRIS 2025) held on 17 and 18 March 2025 in Bochum, Germany.

ACKNOWLEDGEMENTS

IOActive would like to thank the Raspberry Pi team for the excellent coordination and communication during the disclosure process.