INSIGHTS | February 26, 2013

“Broken Hearts”: How plausible was the Homeland pacemaker hack?

I watched the TV show Homeland for the first time a few months ago. The episode I caught, “Broken Hearts,” had a plot twist that involved a terrorist remotely hacking into the pacemaker of the Vice President of the United States.

People follow this show religiously, and there were articles questioning the plausibility of the pacemaker hack. Physicians were asked about the validity of the hack and were quoted saying that it is not possible in the real world [2].

In my professional opinion, the episode was not too far off the mark.


In the episode, a terrorist finds out that the Vice President has a bad heart and his pacemaker can be remotely accessed with the correct serial number. The terrorist convinces a congressman to retrieve the serial number, and then his hacker accomplice uses the serial to remotely interrogate the pacemaker. Once the hacker has connected to the pacemaker, he instructs the implantable device to deliver a lethal jolt of electricity. The Vice President keels over, clutches his chest, and dies.
My first thought after watching this episode was “TV is so ridiculous! You don’t need a serial number!”
In all seriousness, the episode really wasn’t too implausible. Let’s dissect the Hollywood pacemaker hack and see how close to reality they were. First, some background on the implantable medical technology:
The two most common implantable devices for treating cardiac conditions are the pacemaker and the implantable cardioverter-defibrillator (ICD).
The primary function of the pacemaker is to monitor the heart rhythm of a patient and to ensure that the heart is beating regularly. If the pacemaker detects certain arrhythmias (irregular heartbeats or abnormal heart rhythms), it sends a low-voltage pulse to the heart to resynchronize the electrical rhythm. A pacemaker does not have the capability to deliver a high-voltage electrical shock.
Newer-model ICDs not only have pacing functionality, but they are also able to deliver a high-voltage shock to the heart. The ICD most commonly detects an arrhythmia known as ventricular fibrillation (VF), a condition that causes irregular contractions of the cardiac muscle. VF is the arrhythmia most commonly associated with sudden cardiac death. When an ICD detects VF, it delivers a shock to the heart in an attempt to reverse the condition.
Back to Homeland: let’s see how close Hollywood came to getting it right. Although the episode mentions a pacemaker, the device actually shown is an ICD.
HOMELAND: The congressman is told to retrieve the serial number from a plastic box in the Vice President’s office. The plastic box he retrieves the serial number from has a circular wand and telephone inputs on the side.

Fact! The plastic box on the episode is a commonly used remote monitor/bedside transmitter. The primary use of the bedside transmitter is to read patient data from the pacemaker or ICD and send the data to a physician over the telephone line or network connection. The benefit to this technology is reduced in-person visits to the physician’s clinic. The data can be analyzed remotely, and the physician can determine whether an in-person visit is required.

Before 2006, all pacemaker programming and interrogation were performed using inductive telemetry, which requires very close skin contact. The programming wand is held up to the chest, a magnetic reed switch is opened on the implant, and the device is then open for programming and/or interrogation. Communication is near field (below 1 MHz), and data rates are less than 50 kbps.
The obvious drawback to inductive telemetry is the extremely close range required. To remedy this, manufacturers began implementing radiofrequency (RF) communication on their devices and utilized the MICS (Medical Implant Communication Service) frequency band. MICS operates in the 402–405 MHz band and offers interrogation and programming from greater distances, with faster transfer speeds. In 2006, the FDA began approving fully wireless-based pacemakers and ICDs.
Recent remote monitors/bedside transmitters and pacemaker/ICD programmers support both inductive telemetry and RF communication. When communicating with RF implantable devices, the devices typically pair with the programmer or transmitter by using the serial number, or the serial number and model number. It’s important to note that current bedside transmitters do not allow a physician to dial into the devices and reprogram them; the transmitter can only dial out.
So far, so good.
HOMELAND: The hacker receives the serial number on his cell phone and uses his laptop to remotely connect to the ICD. The laptop screen displays real-time electrocardiogram (ECG) readouts, heart beats-per-minute, and waveforms generated by the ICD. He then instructs his software to deliver defibrillation. Then it’s game over for the victim at the other end.
If you look past the usual bells and whistles that are standard issue with exploit software on TV (the scrolling hex dumps, the C code in the background), the waveform and beats-per-minute (BPM) diagnostic data from the implant that is shown in real time is legitimate and can be retrieved over the implant’s wireless interface. The official pacemaker/ICD programmers used by physicians have a similar interface. In the case of ICDs, the pacing and high-voltage leads are all monitored, and the waveforms can be displayed in real time.
The attacker types in a command to remotely induce defibrillation on the victim’s ICD. It is possible to remotely deliver shocks to ICDs; the functionality exists for testing purposes. Depending on the device model and manufacturer, it is possible to deliver a jolt in excess of 800 volts.
Moving on to the remote attack itself: the episode never mentioned where the attacker was located, and even if it were possible to dial into the remote monitor, the monitor was disconnected.
Some manufacturers have tweaked the MICS standard to allow communication from greater distances and to allow communication over different frequencies. Let’s talk a little about how these devices communicate, using a fairly new model ICD as an example.
The Wake-up Scheme:
Before an implantable device can be reprogrammed, it must be “woken up”. Battery conservation is extremely important: when the battery has been depleted, surgery is required to replace the device. To conserve power, the implants continually run in a low-power mode until they receive an encoded wake-up message. The transceiver in this ICD supports wake-up over the 400 MHz MICS band, as well as 2.45 GHz.
To wake up and pair with an individual device, you must transmit the serial and model number in the request. Once a device has paired with the correct model and serial number and you understand the communications protocol, you essentially have full control over the device. You can read and write to memory, rewrite firmware, modify pacemaker/ICD parameters, deliver shocks, and so on.
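The exact over-the-air framing is proprietary and varies by manufacturer, but conceptually the pairing request is small and simple. Here is a minimal, purely illustrative sketch; the opcode, field layout, and CRC choice are my assumptions, not any vendor’s actual format:

```python
import struct

# Hypothetical sketch only: real wake-up framing is proprietary and varies
# by manufacturer. The opcode, field layout, and CRC below are assumptions.

WAKEUP_OPCODE = 0x01  # assumed "wake-up/pair" opcode

def crc16(data: bytes) -> int:
    """CRC-16/CCITT, a common checksum choice for short telemetry frames."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_wakeup_frame(model_number: int, serial_number: int) -> bytes:
    """Build a hypothetical wake-up request addressed to one specific implant."""
    body = struct.pack(">BHI", WAKEUP_OPCODE, model_number, serial_number)
    return body + struct.pack(">H", crc16(body))

frame = build_wakeup_frame(model_number=0x1234, serial_number=0x00ABCDEF)
print(frame.hex())
```

The point of the sketch is the addressing: the serial and model number are the only things standing between a transmitter and a paired session.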
With a development board that uses the same transceiver and no modifications to the stock antenna, reliable communication maxes out at about 50 feet. With a high-gain antenna, you could probably increase this range.
Going back to Homeland, the only implausible aspect of the hack was the range at which the attack was carried out. The attacker would have had to be in the same building or have a transmitter set up closer to the target. With that said, the scenario was not too far-fetched. The technology as it stands could very easily be adapted to let physicians remotely adjust parameters and settings on these devices via the bedside transmitters. In the future, a scenario like this could certainly become a reality.
––

As there are only a handful of people publicizing security research on medical devices, I was curious to know where the inspiration for this episode came from. I found an interview with the creators of the show where the interviewer asked where the idea for the plot twist came from [3]. The creators said the inspiration for the episode was from an article in the New York Times, where a group of researchers had been able to cause an ICD to deliver jolts of electricity [4].

This work was performed by Kevin Fu and researchers at the University of Massachusetts in 2008. Kevin is considered a pioneer in the field of medical device security. The paper he and the other researchers released (“Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses”) [5] is a good read and was my initial inspiration for moving into this field.
I recommend reading the paper, but I will briefly summarize their ICD attack. Their work dates back to 2008, when long-range RF-based implantable devices were not as common as they are today. The device they used for testing has a magnetic reed switch, which needs to be opened to allow reprogramming. They initiated a wake-up by placing a magnet in close proximity to the device, which activated the reed switch. Then they captured an ICD shock sequence from an official ICD/pacemaker programmer and replayed that data using a software radio.
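Conceptually, the replay step reduces to something like the sketch below. This is not the researchers’ actual tooling: the capture format and the `sdr_transmit` helper are hypothetical stand-ins for a real software-radio toolchain, and the telemetry frequency shown is an assumption.

```python
import numpy as np

# Conceptual sketch of a record-and-replay attack, not the researchers'
# actual tooling. `sdr_transmit` is a hypothetical placeholder for whatever
# software-radio toolchain actually drives the transmit hardware.

TELEMETRY_FREQ_HZ = 175e3  # assumption: set to the target's telemetry band
SAMPLE_RATE = 1e6

def load_iq_capture(path: str) -> np.ndarray:
    """Load interleaved float32 I/Q samples captured from a legitimate
    programmer issuing the command sequence."""
    raw = np.fromfile(path, dtype=np.float32)
    return raw[0::2] + 1j * raw[1::2]  # interleave I,Q -> complex samples

def sdr_transmit(samples: np.ndarray, freq_hz: float, rate: float) -> None:
    """Hypothetical stand-in for handing samples to SDR hardware."""
    print(f"would transmit {samples.size} samples at {freq_hz / 1e3:.0f} kHz")

# No protocol knowledge or credentials are needed: the 2008-era device
# accepted the replayed bytes verbatim once its reed switch was activated.
samples = load_iq_capture("programmer_session.iq")
sdr_transmit(samples, TELEMETRY_FREQ_HZ, SAMPLE_RATE)
```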
The obvious drawback was the close proximity required to carry out a successful attack. In this case, they were restricted by the technology available at the time.
At IOActive, I’ve been spending the majority of my time researching RF-based implants. We have created software for research purposes that will wirelessly scan for new model ICDs and pacemakers without the need for a serial or model number. The software then allows one to rewrite the firmware on the devices, modify settings and parameters, and in the case of ICDs, deliver high-voltage shocks remotely.
At IOActive, this research is personal. Our CEO Jennifer Steffens’ father has a pacemaker, and he has had to undergo multiple surgeries due to complications with his implantable device.
Our goal with this research is not for people to lose faith in these life-saving devices. These devices DO save lives. We are also extremely careful when demonstrating these threats to not release details that could allow an attack to be easily reproduced in the wild.
Although the threat of a malicious attack against anyone with an implantable device is slim, we want to mitigate these risks, no matter how minor. We are actively engaging medical device manufacturers and sharing our knowledge with them.
INSIGHTS | February 25, 2013

IOAsis at RSA 2013

RSA has grown significantly in the 10 years I’ve been attending, and this year’s edition looks to be another great event. With many great talks and networking events, tradeshows can be a whirlwind of quick hellos, forgotten names, and aching feet. For years I would return home from RSA feeling as if I hadn’t sat down in a week and lamenting all the conversations I started but never had the chance to finish. So a few years ago, during my annual pre-RSA Vitamin D-boosting trip to a warm beach, an idea came to me: just as the beach served as my oasis before RSA, wouldn’t it be great to give our VIPs an oasis to escape to during RSA? And thus the first IOAsis was born.


Aside from feeding people and offering much-needed massages, the IOAsis is designed to give you a trusted environment to relax and have meaningful conversations with all the wonderful folks that RSA, and surrounding events such as BSidesSF, CSA, and AGC, attract. To help get the conversations going, each year we host a number of sessions where you can join IOActive’s experts, customers, and friends to discuss some of the industry’s hottest topics. We want these to be as interactive as possible, so the following is a brief look inside some of the sessions the IOActive team will be leading.

 

(You can check out the full IOAsis schedule of events at:

By Chris Valasek @nudehaberdasher

 

Hi everyone, Chris Valasek here. I just wanted to let everyone know that I will be participating in a panel in the RSA 2013 Hackers & Threats track (Session Code: HT-R31) on Feb 28 at 8:00 a.m. The other panelists and I will be giving our thoughts on the current state of attacks, malware, governance, and protections, which will hopefully give attendees insight into how we as security professionals perceive the modern threat landscape. I think it will be fun because of our varied perspectives on security and the numerous security breaches that occurred in 2012.

Second, Stephan Chenette and I will be talking about assessing modern attacks against PCs at IOAsis on Wednesday at 1:00–1:45. We believe that security is too often described in binary terms — “Either you ARE secure or you are NOT secure” — when computer security is not an either/or proposition. We will examine current mainstream attack techniques, how we plan non-binary security assessments, and finally why we think changes in methodologies are needed. I’d love people to attend either presentation and chat with me afterwards. See everyone at RSA 2013!

 

By Gunter Ollmann @gollmann

 

My RSA talk (Wednesday at 11:20), “Building a Better APT Package,” will cover some of the darker secrets involved in the types of weaponized malware that we see in more advanced persistent threats. In particular I’ll discuss the way payloads are configured and tested to bypass the layers of defensive strata used by security-savvy victims. While most “advanced” features of APT packages are not very different from those produced by commodity malware vendors, there are nuances to the remote control features and levels of abstraction in more advanced malware that are designed to make complete attribution more difficult.

Over in the IOAsis refuge on Wednesday at 4:00 I will be leading a session with my good friend Bob Burls on “Fatal Mistakes in Incident Response.” Bob recently retired from the London Metropolitan Police Cybercrime Division, where he led investigations of many important cybercrimes and helped put the perpetrators behind bars. In this session Bob will discuss several complexities of modern cybercrime investigations and provide tips, gotchas, and lessons learned from his work alongside corporate incident response teams. By better understanding how law enforcement works, corporate security teams can be more successful in engaging with them and receive the attention and support they believe they need.

By Stephan Chenette @StephanChenette

 

At IOAsis this year, Chris Valasek and I will be presenting on a topic that builds on my Offensive Defense talk and starts a discussion about what we can do about the problems it highlights.

For too long both Chris and I have witnessed the “old school security mentality” that revolves solely around chasing vulnerabilities and remediation of vulnerable machines to determine risk.  In many cases the key motivation is regulatory compliance. But this sort of mind-set doesn’t work when you are trying to stop a persistent attacker.

What happens after the user clicks a link or a zero-day attack exploits a vulnerability to gain entry into your network? Is that part of the risk assessment you have planned for? Have you only considered defending the gates of your network? You need to think about the entire attack vector: reconnaissance, weaponization, delivery, exploitation, installation of malware, and command and control of the infected asset are all stages that need further consideration by security professionals. Have you given sufficient thought to the motives and objectives of the attackers and the techniques they are using? Remember, even if an attacker is able to get into your network, as long as they aren’t able to destroy or remove critical data, the overall damage is limited.

Chris and I are working on an R&D project that we hope will shake up how the industry thinks about offensive security by enabling us to automatically create non-invasive scenarios to test your holistic security architecture and the controls within it. Do you want those controls to be tested for the first time in a real attack scenario, or would you rather be able to perform simulations of various realistic attacker scenarios, replayed in an automated way that produces actionable and prioritized items?

Our research and deep understanding of hacker techniques enable us to catalog various attack scenarios and replay them against your network, testing your security infrastructure and controls to determine how susceptible you are to today’s attacks. Join us on Wednesday at 1:00 to discuss this project and help shape its future.

 

By Tiago Asumpcao @coconuthaxor

 

 

At RSA I will participate in a panel reviewing the history of mobile security. It will be an opportunity to discuss the paths taken by the market as a whole, and an opportunity to debate the failures and victories of individual vendors.

 

Exploit mitigations, application stores and mobile malware, the wave of jail-breaking and MDM — hear the latest from the folks who spend their days and nights wrestling with modern smartphone platforms. While all members of the panel share a general experience within the mobile world, every individual brings a unique relationship with at least one major mobile industry player, giving the Mobile Security Battle Royale a touch of spice.

At IOAsis on Tuesday at 2:00 I will present on the problem of malware in application stores and on privacy on mobile phones. I will talk about why it is difficult to automate malware analysis in an efficient way, and what can and cannot be done. From a privacy perspective, how can the user keep both their personal data and the enterprise’s assets separate from each other and secure within such a dynamic universe? To the enterprise, which is the biggest threat: malicious apps, remote attacks, or lost devices?
I will raise a lot of technical and strategic questions. I may not be able to answer them all, but it should make for lively and thought-provoking discussion.

 

By Chris Tarnovsky @semiconduktor
I will be discussing the inherent risks of reduced instruction set computer (RISC) processors vs. complex instruction set computer (CISC) processors at IOAsis on Tuesday at 12:00.

 

Many of today’s smart cards favor RISC architectures; ARM, AVR, and CalmRISC16 are the most popular seen in smart cards. Although these processors provide high performance, they pose a trade-off from a security perspective.

 

The vendors of these devices offer security in the form of sensors, active meshing, and encrypted (scrambled) memory contents. Yet from a low-level perspective, these architectures still offer an attacker an easy way to block branch operations, making it possible to address the device’s entire memory map.

 

To prevent branches on an AVR or CalmRISC16, an attacker only needs to cut the highest bit and strap it to ‘0’. After doing so, the branch instruction is impossible to execute: an instruction that should have been a branch becomes something else without the branch effect, allowing an attacker to sit on the bus using only one or two micro-probing needles.

 

On the other hand, CPU architectures such as the 8051 or 6805 are not susceptible to such attacks. In these cases, modifying a bus is much more complicated and would require edits across the full bus width.

 

I look forward to meeting everyone!
INSIGHTS | February 12, 2013

Do as I say, not as I do. RSA, Bit9 and others…

You thought you had everything nailed down. Perhaps you even bypassed the “best practice” (which would have driven you to compliance and your security to the gutter) and focused on protecting your assets by applying the right controls in a risk-focused manner.

You had your processes, technologies, and logs all figured out. However, you still got “owned”. Do you know why? You are still a little naive.

You placed your trust in big-name vendors. You listened to them, you were convinced by their pitch, and maybe you even placed their products through rigorous testing to make sure they actually delivered. However, you forgot one thing. The big-name vendors do not always have your best interest at heart.

Such companies will preach to you and guide you along the righteous path. However, when you look behind the curtain, the truth is revealed.

The latest Bit9 compromise is not too surprising. Bit9 customers are obviously very security aware, as they opted to deploy a whitelisting product on their computing assets. As such, these customers are most likely high-value targets to adversaries. With acute security awareness, these customers probably have more security measures and practices in place to mitigate and protect themselves from attackers. In other words, if I were to scope out such a target for an attack, I would have to focus on supply chain elements that are weaker than the target itself (much in the same manner we teach in our Red Team Testing classes).

RSA was such a target. There were others. Bit9 was also a target, as a stepping stone to some of its customers.

Color me surprised.

If you are a vendor that gloats over the latest compromise, please do not bother. If you have not gone through a similar threat model, your products are either not good enough (hence your customers are not high-value targets), or your own security is not up to speed and you have not realized yet that you have been breached.

If you are a security consumer and therefore care a bit more, do not make any assumptions about your security vendors. They are not the target. You are. As such, they have more generalized security practices than you do. Account for this in your security strategy, and never fully trust anything outside your span of control. It is your responsibility to hold such vendors to at least their own standard and to demand oversight and proof that they meet it.

 

INSIGHTS | February 11, 2013

Your network may not be what it SIEMs

The number of reports of networks ransacked by adversaries is staggering. In the past few weeks alone we’ve seen reports from The New York Times, The Washington Post, and Twitter. I would argue that the public reports are just the tip of the iceberg. What about the hacks that never were? What about the companies that absorbed the blow and just kept on trucking, or… perhaps even those companies that never recovered?

When there’s an uptick in media attention over security breaches, the question most often asked – but rarely answered – is “What if this happens to me?”

Today you don’t want to ask that question too loudly – else you’ll find product vendors selling turn-key solutions and their partners on your doorstep, closely followed by ‘Managed Security Services’ providers, all ready to solve your problems once you send back their signed purchase order… if you want to believe that.

Most recently they’ve been joined by the “let’s hack the villains back” start-ups. That last one is an interesting evolution but not important for this post today.

I’m not here to provide a side-by-side comparison of service providers or product vendors. I encourage you to have an open conversation with them when you’re ready for it; but what I want to share today is my experience being involved in SIEM projects at scale, and working hands-on with the products as a security analyst. The following lessons were gained through a lot of sweat and tears:

  • SIEM projects are always declared successful, even when delivered over budget, behind schedule, and with only a third of the requirements actually realized.
  • Managed services don’t manage your services; they provide the services they can manage at the price you are willing to pay for them.
  • There is no replacement for knowing your environment, whatever price you are willing to pay.

This obviously raises the question of whether a SIEM is worth spending a limited security budget on.

It is my personal opinion that tooling to facilitate incident response, including a SIEM to delve through piles and piles of log data, is always an asset. However, it’s also my opinion that buying a tool RIGHT NOW is not your priority if you cannot confidently answer “YES” to the following questions:

  1. I know where my most valuable assets reside on my network, which controls are implemented to protect them and how to obtain security-related data from those controls.
  2. I know the hardware and software components that support those most valuable assets and know how to obtain security-related data from them.
  3. I know, for those most valuable assets, which devices communicate with them, at which rate and at which times. I can gather relevant data about that.
  4. I know, and can verify, which machines can (and should) communicate with external networks (including the internet). I can gather relevant data about that.

In case of a resounding “NO” or a reluctantly uttered “maybe”, I would argue that there are things you should do before acquiring a SIEM product. It is your priority to understand your network and to have control over it, unless you look forward to paying big money for shiny data aggregators.

Advice

With that challenge identified, how do you move ahead and regain control of your network? Here’s some advice:

The most important first step is very much like what forensics investigators call “walking the grid”. You will need to break down your network into logical chunks and scrutinize them. Which components are most important to your business, and what are the controls protecting them? Which data sources can tell us about the security health of those components, and how? How frequently? In what detail? Depending on the size and complexity of the network this may seem like a daunting task, but at the same time you’ll have to realize that this is not a one-time exercise. You’ll be doing this for the foreseeable future, so it’s important that it is turned into a repeatable process: a process that can be reliably executed by people other than you, with consistent results.

Next, epiphany! Nobody but you can supplement the data delivered by appliances and software distributed across your network with actual knowledge about your own network. Out-of-the-box rulesets don’t know about your network, and (however bright they may be) neither do security analysts on follow-the-sun teams. Every event and every alert only makes sense in context, and when that context is removed, you’re no longer doing correlation. You’re swatting flies. While I came up with this analogy on the fly, it makes sense. You don’t swat flies with a Christian Louboutin or a Jimmy Choo; you use a flip-flop. There are plenty of information security flip-flops out there on the internet. Log aggregators (syslog, syslog-ng), full-blown open-source NIDS (Snort, Bro) and HIDS (OSSEC) systems, and even dedicated distributions (e.g., Security Onion) or open-source SIEMs (like OSSIM) can go a long way in helping you understand what is going on in your network. The most amazing part is that, up to this point, you’ve spent $0 on software licensing and all of your resources are dedicated to YOUR people learning about YOUR network and what is going on there. While cracking one of the toughest nuts in IT security, you’re literally training your staff to be better security analysts and incident responders.
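To make the context argument concrete, here is a minimal sketch of the kind of enrichment only you can do: tagging incoming syslog events with asset knowledge from your own inventory. The inventory entries and listening port are invented for illustration; in practice the inventory comes out of the “walking the grid” exercise above.

```python
import re
import socket

# Assumed example inventory: replace with the output of your own
# "walking the grid" exercise.
ASSET_INVENTORY = {
    "10.0.1.10": {"role": "billing-db", "criticality": "high"},
    "10.0.2.25": {"role": "print-server", "criticality": "low"},
}

SYSLOG_IP = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3})")

def enrich(line: str) -> str:
    """Attach asset context to a raw syslog line so an analyst (or a
    correlation rule) can prioritize it."""
    m = SYSLOG_IP.search(line)
    asset = ASSET_INVENTORY.get(m.group(1)) if m else None
    tag = f"[{asset['role']}/{asset['criticality']}]" if asset else "[unknown-asset]"
    return f"{tag} {line}"

# Listen on an unprivileged UDP port; point syslog-ng (or similar) here.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5514))
while True:
    data, _ = sock.recvfrom(4096)
    print(enrich(data.decode(errors="replace")))
```

Crude as it is, this is correlation with context: the same failed login means something very different on the billing database than on the print server.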

The first push-back I generally receive when I talk (passionately, I admit) about open source security tooling comes in the form of concern: the software is not controlled, we can’t buy a support contract for it (not always true, by the way!), our staff doesn’t have the training, we are too small/big for these solutions… It’s impossible to settle the closed vs. open source argument in this blog post.

I believe it’s worth looking at to solve this problem; others may disagree. To the point of training staff, I will say that what those tools largely do is what your staff currently does manually in an ad hoc fashion. They already understand logging and network traffic; learning how the specific tools work and how they can make their job easier will be only a fraction of the time they spend implementing them. It is my experience that the enthusiasm of people who get to work with tools (commercial or otherwise) that make their actual jobs easier compensates for any budget you have to set aside for ‘training’. To the point of size, I have personally deployed open source security tools in SMB environments as well as in 1000+ enterprise UNIX farms. It is my strongest belief that, as security engineers, it is not our job to buy products. It is our task to build solutions for the problems at hand, using the tools that best fit the purpose. Commercial or not.

It makes sense that, as you further mature your monitoring capability, the free tools might not continue to scale and you’ll be looking to work with the commercial products or service providers I mentioned above. The biggest gain at that moment is that you understand perfectly what you need: which parts of the capability you can delegate to a third party, what your expectations are, and which parts of the problem space you can’t solve without dedicated products. From experience, most of the building blocks will be easily reused and integrated with commercial solutions. Many of those commercial solutions have support for the open source data generators (Snort, Bro, OSSEC, p0f, …).

Let’s be realistic: if you’re as serious about information security as I think you are, you don’t want to be a “buyer of boxes” or a cost center. You want to (re)build networks that allow you to defend your most valuable assets against those adversaries that matter and, maybe as important as anything else, you want to stop running behind the facts on fancy high heels.

*For the purpose of this post SIEM stands for Security Information and Event Management. It is often referred to as SIM, SEM and a bunch of other acronyms and we’re ok with those too.

INSIGHTS | February 6, 2013

The Anatomy of Unsecure Configuration: Reality Bites

As a penetration tester, I encounter interesting problems with network devices and software. The most common problems that I notice in my work are configuration issues. In today’s security environment, we can accept that a zero-day exploit results in system compromise because details of the vulnerability were unknown earlier. But, what about security issues and problems that have been around for a long time and can’t seem to be eradicated completely? I believe the existence of these types of issues shows that too many administrators and developers are not paying serious attention to the general principles of computer security. I am not saying everyone is at fault, but many people continue to make common security mistakes. There are many reasons for this, but the major ones are:

  • Time constraints: This is hard to imagine, because an administrator’s primary job is to manage and secure the network. For example: How long does it take to configure and secure a network printer? If you ask me, it should not take much time to evaluate and configure secure device properties unless the network has significant complex requirements. But, even if there are complex requirements, it is worth investing the time to make it harder for bad guys to infiltrate the network.
  • Ignorance, fear, and a ‘don’t care’ attitude: This is a human problem. I have seen many network devices that are not touched or reconfigured for years and administrators often forget about them. When I have discussed this with administrators they argued that secure reconfiguration might impact the reliability and performance of the device. As a result, they prefer to maintain the current functionality but at the risk of an insecure configuration.
  • Problems understanding the issue: If complex issues are hard to understand, then easy problems should be easy to manage, right? But it isn’t like that in the real world of computer security. For example, every software package or network device is accompanied by manuals or help files. How many of us spend even a few minutes reading the security sections of these documents? The people who are cautious about security read these sections, but the rest just want to enjoy the device or software’s functionality. Is it so difficult to read the security documentation that is provided? I don’t think so. For a reasonable security assessment of any new product and software, reading the manuals is one key to impressive results. We cannot simply say we don’t understand the issue, and if we ignore the most basic security guidance, then attackers have no hurdles to overcome in exploiting the most basic known security issues.

 

Default Configuration – Basic Authentication

 

Here are a number of examples to support this discussion:

The majority of network devices can be accessed using Simple Network Management Protocol (SNMP) and community strings that typically act as passwords to extract sensitive information from target systems. In fact, weak community strings are used in too many devices across the Internet. Tools such as snmpwalk and snmpenum make the process of SNMP enumeration easy for attackers. How difficult is it for administrators to change default SNMP community strings and configure more complex ones? Seriously, it isn’t that hard.
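If you want to check your own devices, a few lines of pysnmp will do it. In this sketch the target address and community list are placeholders; point it only at hosts you are authorized to test:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

TARGET = "192.0.2.1"  # placeholder: a device you are authorized to test

# Try the classic factory community strings against the system description.
for community in ("public", "private"):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),  # SNMPv2c
        UdpTransportTarget((TARGET, 161), timeout=2, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))
    if error_indication or error_status:
        print(f"{community!r}: no access ({error_indication or error_status})")
    else:
        print(f"{community!r}: ACCESSIBLE -> {var_binds[0]}")
```

If either default string answers, an attacker gets the same inventory data that snmpwalk would happily enumerate.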

A number of devices such as routers use the Network Time Protocol (NTP) over packet-switched data networks to synchronize time services between systems. The basic problem with the default configuration of NTP services is that NTP is enabled on all active interfaces. If any interface exposed to the Internet listens on UDP port 123, a remote user can easily execute internal NTP readvar queries using ntpq to gain potentially useful information about the target server, including the NTP software version, operating system, peers, and so on.
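Reproducing the probe against your own servers is equally trivial. The sketch below hand-builds the same NTP mode 6 request that `ntpq -c readvar` sends; the target address is a placeholder for a server you administer:

```python
import socket
import struct

# NTP mode 6 (control) "readvar" request, roughly what `ntpq -c readvar`
# sends. Header: LI/VN/Mode byte = version 2, mode 6; opcode 2 = read
# variables; then sequence, status, association id, offset, and count.
packet = struct.pack(">BBHHHHH", 0x16, 0x02, 0x0001, 0, 0, 0, 0)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(packet, ("192.0.2.1", 123))  # placeholder: your own server
try:
    data, _ = sock.recvfrom(4096)
    # The variable list (version, system, processor, ...) starts after
    # the 12-byte control header.
    print(data[12:].decode(errors="replace"))
except socket.timeout:
    print("no response (readvar disabled or filtered)")
```

A response that includes the daemon version and operating system is exactly the reconnaissance gift described above.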

Unauthorized access to multiple components and software configuration files is another big problem. This is not only a device and software configuration problem, but also a byproduct of design chaos, i.e., the way different network devices are designed and managed in their default state. Too many devices on the Internet allow guests to obtain potentially sensitive information through anonymous or direct access. In addition, one has to wonder how well administrators understand and care about security if they simply allow configuration files to be downloaded from target servers. For example, I recently conducted a small test to verify the presence of the file FileZilla.xml, which sets the properties of FileZilla FTP servers, and found that a number of target servers allow this configuration file to be downloaded without any authorization or authentication controls. This allows attackers to gain direct access to these FileZilla FTP servers, because the passwords are embedded in the configuration files.
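Auditing your own web roots for this kind of exposure takes minutes. In this sketch the candidate file names are common examples rather than an exhaustive list:

```python
import requests

# Common FileZilla configuration file names; examples, not exhaustive.
CANDIDATES = ["FileZilla.xml", "filezilla.xml", "FileZilla Server.xml"]

def check(base_url: str) -> None:
    """Flag a web root that serves a FileZilla configuration file."""
    for name in CANDIDATES:
        r = requests.get(f"{base_url}/{name}", timeout=5)
        if r.status_code == 200 and "<FileZilla" in r.text:
            print(f"EXPOSED: {base_url}/{name}")

check("http://192.0.2.1")  # placeholder: a server you are authorized to assess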

 

 

 

FileZilla Configuration File with Password

It is really hard to imagine that anyone would deploy sensitive files containing usernames and passwords on remote servers that are exposed on the network. What results do you expect when you find these types of configuration issues?

 

XLS File Containing Credentials for Third-party Resources
The presence of multiple administration interfaces widens the attack surface, because every single interface exposes different attack vectors. A number of network-based devices have SSH, Telnet, FTP, and web administration interfaces. The default installation of a network device activates at least two or three different interfaces for remote management. This allows attackers to attack network devices by exploiting inherent weaknesses in the protocols used. In addition, the presence of default credentials can further ease the process of exploitation. For example, it is easy to find a large number of unmanaged network devices on the Internet without proper authentication and authorization.
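A quick way to audit your own estate for factory credentials is something like the following sketch; the credential pairs and prompt strings are assumptions that you would adjust to your devices:

```python
import telnetlib  # Python's stock Telnet client

TARGET = "192.0.2.1"  # placeholder: a device you are authorized to audit
DEFAULTS = [("admin", "admin"), ("admin", "password"), ("cisco", "cisco")]

def try_login(host: str, user: str, password: str) -> bool:
    """Return True if the device drops us at a prompt after a default login.
    Prompt strings vary by vendor; these are assumptions."""
    try:
        tn = telnetlib.Telnet(host, 23, timeout=5)
        tn.read_until(b"Username:", timeout=5)
        tn.write(user.encode("ascii") + b"\n")
        tn.read_until(b"Password:", timeout=5)
        tn.write(password.encode("ascii") + b"\n")
        banner = tn.read_until(b">", timeout=5)  # crude success heuristic
        tn.close()
        return b">" in banner or b"#" in banner
    except (OSError, EOFError):
        return False

for user, pw in DEFAULTS:
    if try_login(TARGET, user, pw):
        print(f"factory credentials accepted: {user}/{pw}")
```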

 

 

 

Cisco Catalyst – Insecure Interface
How can we forget about backend databases? Default and weak passwords for the MS SQL system administrator account can still be found in the internal networks of many organizations. For example, network proxies that require backend databases are often configured with default and weak passwords. In many cases administrators think internal network devices are more secure than external devices. But what happens if an attacker finds a way to penetrate the internal network? The simple answer is: game over.
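Verifying this on servers you administer takes a moment. A sketch along these lines, assuming the pymssql driver is available, will flag the problem:

```python
import pymssql  # assumption: the pymssql driver is installed

TARGET = "192.0.2.1"                      # placeholder: a server you administer
WEAK = ["", "sa", "password", "sql2005"]  # example weak candidates

for pw in WEAK:
    try:
        conn = pymssql.connect(server=TARGET, user="sa", password=pw,
                               database="master", login_timeout=5)
    except pymssql.OperationalError:
        continue  # login rejected; try the next candidate
    print(f"weak sa password in use: {pw!r}")
    conn.close()
    break
```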
Integrated web applications with backend databases using default configurations are an “easy walk in the park” for attackers. For example, the following insecure XAMPP installation would be devastating.

 

 

 

XAMPP Security – Web Page
The security community recently encountered an interesting configuration issue on GitHub (http://www.securityweek.com/github-search-makes-easy-discovery-encryption-keys-passwords-source-code), which resulted in the disclosure of SSH keys on the Internet. The point is that when you own an account or repository on public servers, all administrator responsibilities fall on your shoulders. You cannot blame the service provider, GitHub in this case. If users upload their private SSH keys to a repository, then whom can you blame? This is a big problem, and developers continue to expose sensitive information in their public repositories, even though GitHub’s documentation shows how to secure and manage sensitive data.
Another interesting finding shows that with Google Dorks (targeted search queries), the Google search engine exposed thousands of HP printers on the Internet (http://port3000.co.uk/google-has-indexed-thousands-of-publicly-acce). The search string “inurl:hp/device/this.LCDispatcher?nav=hp.Print” is all you need to gain access to these HP printers. As I asked at the beginning of this post, “How long does it take to secure a printer?” Although this issue is not new, the amazing part is that it still persists.
The existence of vulnerabilities and patches is a completely different issue than the security issues that result from default configurations and poor administration. Either we know the security posture of the devices and software on our networks but are being careless in deploying them securely, or we do not understand security at all. Insecure and default configurations make the hacking process easier. Many people realize this after they have been hacked and by then the damage has already been done to both reputations and businesses.
This post is just a glimpse into security consciousness, or the lack of it. We have to be more security conscious today because attempts to hack into our networks are almost inevitable. I think we need to be a bit more paranoid and much more vigilant about security.

 

 

 

INSIGHTS |

Hackers Unmasked: Detecting, Analyzing, And Taking Action Against Current Threats

Tomorrow morning I’ll be delivering the opening keynote to InformationWeek & Dark Reading’s virtual security event, Hackers Unmasked: Detecting, Analyzing, And Taking Action Against Current Threats.

You can catch my live session at 11:00 a.m. Eastern, “Portrait of a Malware Author,” where I’ll discuss how today’s malware is more sophisticated, and more targeted, than ever before. Who are the people who write these next-generation attacks, and what are their motivations? What are their methods, and how do they choose their targets? I’ll also cover how they execute their craft, and what you can do to protect your organization.

The day’s event will have a bunch of additional interesting speakers too – including Dave Aitel and our very own Iftach Ian Amit.

Please come and join the event. I promise not to stumble over my lines too many times, and you’ll learn new things.

You’ll need to quickly subscribe in order to get all the virtual event connection information, so visit the InformationWeek & DarkReading event subscription page HERE.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | February 4, 2013

2012 Vulnerability Disclosure Retrospective

Vulnerabilities, the bugbear of system administrators and security analysts alike, keep on piling up – ruining Friday nights and weekends around the world as those tasked with fixing them work against ever-shortening patch deadlines.

In recent years the burden of patching vulnerable software may have seemed to be lessening; and it was, if you go by the annual number of vulnerabilities publicly disclosed. However, if you thought 2012 was a little more intense than the previous half-decade, you’ll probably not be surprised to learn that last year bucked the downward trend and saw a rather big jump – 26% over 2011 – all according to the latest analyst brief from NSS Labs, “Vulnerability Threat Trends: A Decade in Review, Transition on the Way”.

Rather than summarize the fascinating brief from NSS Labs with a list of recycled bullet points, I’d encourage you to read it yourself and to view the video they constructed that depicts the rate and diversity of vulnerability disclosures throughout 2012 (see the video – “The Evolution of 2012 Vulnerability Disclosures by Vendor”).

I was particularly interested in the Industrial Control System (ICS/SCADA) vulnerability growth – a six-fold increase since 2010! Granted, only 124 (2.4 percent) of the 5,225 vulnerabilities publicly disclosed and tracked in 2012 were ICS/SCADA related, but it’s still noteworthy – especially since I suspect very few vulnerabilities in this field are ever disclosed publicly.

Once you’ve read the NSS Labs brief and digested the statistics, let me tell you why the numbers don’t really matter and why the ranking of vulnerable vendors is a bit like ranking car manufacturers by the number of red cars they sold last year.

A decade ago, as security software and appliance vendors battled for customer dollars, vulnerability numbers mattered. It was a yardstick of how well one security product (and vendor) was performing against another – a kind of “my IDS detects 87% of high risk vulnerabilities” discussion. When the number of vulnerability disclosures kept on increasing and the cost of researching and developing detection signatures kept going up, yet the price customers were willing to pay in maintenance fees for their precious protection technologies was going down, much of the discussion then moved to ranking and scoring vulnerabilities… and the Common Vulnerability Scoring System (CVSS) was devised.

CVSS changed the nature of the game. It became less about covering a specific percentage of vulnerabilities and more about covering the most critical and exploitable. The ‘High, Medium, and Low’ of old got augmented with a formal scoring system and a bevy of new labels such as ‘Critical’ and ‘Highly Critical’ (which, incidentally, makes my teeth hurt as I grind them at the absurdity of that term). Rather than simply shuffling everything to the right, with the decade-old ‘Medium’ becoming the new ‘Low’, and the old ‘Low’ becoming a shrug and a sensible “if you can be bothered!”… we ended up with ‘High’ meaning “important, fix immediately”, ‘Critical’ assuming the role of “seriously, you need to fix this one first!”, and ‘Highly Critical’ basically meaning “Doh! The Mayans were right, the world really is going to end!”
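For anyone who has not looked under the hood, the CVSS v2 base score behind all those labels reduces to a small formula. Here is a sketch of it using the published v2 metric weights:

```python
# Sketch of the CVSS v2 base equation. Inputs are the published v2 metric
# weights for access vector (av), access complexity (ac), authentication
# (au), and confidentiality/integrity/availability impact (c, i, a).
def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f, 1)

# Network-exploitable, low complexity, no authentication, complete C/I/A
# impact -- the classic worst case:
print(cvss2_base(av=1.0, ac=0.71, au=0.704, c=0.660, i=0.660, a=0.660))  # 10.0
```

The formula only works, of course, when someone actually has the disclosure details to feed into it, which brings me back to my point.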

But I digress. The crux of the matter, and the reason annual vulnerability statistics matter less in a practical sense as time goes by, is that they only reflect ‘disclosures’. In essence, for a vulnerability to be counted (and attribution applied) it must be publicly disclosed, and more people are finding it advantageous not to do that.

Vulnerability markets and bug purchase programs – white, gray, and black – have changed the incentive to disclose publicly, as well as limited the level of information that is made available at the time of disclosure. Furthermore, the growing professionalization of bug hunting has meant that vulnerability discoveries are valuable commercial commodities, opening doors to new consulting engagements and potential employment with the vulnerable vendor. Plus, there are a bunch of other lesser reasons why public disclosures (as a percentage of actual vulnerabilities found by bug hunters and reported to vendors) will go down.

The biggest reason why the vulnerability disclosure numbers matter less and less to an organization (and those charged with protecting it) is that the software landscape has fundamentally changed. The CVSS approach was primarily designed for software that was bought, installed, and operated by multiple organizations – i.e., software packages that could be patched by their many owners.

With today’s ubiquitous cloud-based services, you don’t own the software and you don’t have any capability (or right) to patch it. Whether it’s Twitter, Salesforce.com, Dropbox, Google Docs, or LinkedIn, your data and intellectual property are in the custodial care of a third party who doesn’t need to publicly disclose the nature (full or otherwise) of vulnerabilities lying within their backend systems – in fact most would argue that it’s in their best interest never to make any kind of disclosure.

Why would someone assign a CVE number to the vulnerability? Who’s going to have all the disclosure details to construct a CVSS score, and what would it matter if they did? Why would the service provider issue an advisory? As a bug hunter who responsibly discloses the vulnerability to the cloud service provider you’d be lucky to even get any public recognition for your valuable help.

With all that said and done, what should we take away from the marvelous briefing that NSS Labs has pulled together? In essence: there are a lot of vulnerabilities being disclosed, and the vendors of the software we deploy on our laptops and servers still have a ways to go in improving their security development lifecycle (SDL) – some more than others.

While it would be nice to take some solace in the ranking of vulnerable vendors, I’d be more worried about the cloud vendors and their online services and the fact that they’re not showing up in these annual statistics – after all, that’s where more and more of our critical data is being stored and manipulated.

— Gunter Ollmann, CTO — IOActive, Inc.

INSIGHTS | January 30, 2013

Energy Security: Less Say, More Do

Due to recent attacks on many forms of energy management technology, ranging from supervisory control and data acquisition (SCADA) networks and automation hardware devices to smart meters and grid network management systems, companies in the energy industry are significantly increasing the amount they spend on security. However, I believe these organizations are still spending money in the wrong areas of security. Why? The illusion of security, driven by over-engineered and over-funded policy and control frameworks, and the mindset that energy security must be regulated before making a start, is preventing, not driving, real-world progress.

Sadly, I don’t see organizations in the oil and gas exploration, utility, and consumer energy management sectors taking more visible and proactive approaches to improving the security of their assets in 2013 any more than they did in 2012.
It’s only January, you protest. But let me ask you: on what areas are your security teams going to focus in 2013?
I’ve had the privilege in the past six months of travelling to Asia, the Middle East, Europe, and the U.S. to deliver projects and have seen a number of consistent shortcomings in security programs in almost every energy-related organization that I have dealt with. Specialized security teams within IT departments are commonplace now, which is great. But these teams have been in place for some time, and even though as an industry we spend millions on security products every year, the number of security incidents is also increasing every year. I’m sure this trend will continue in 2013. It is clear to me (and this is a global issue in energy security) that the great majority of organizations do not know where or how to correctly spend their security budgets.
Information security teams focus heavily on compliance, policies, controls, and the paper perception of what good security looks like when in fact there is little or no evidence that this is the case. Energy organizations do very little testing to validate the effectiveness of their security controls, which leaves these companies exposed to attacks and wondering what they are doing wrong.

 

For example, automated malware has been mentioned many times in the press and is a persistent threat, but companies are living under the misapprehension that having endpoint solutions alone will protect them from this threat. Network architectures are still being poorly designed and communication channels are still operating in the clear, leaving critical infrastructure solutions exposed and vulnerable.
I do not mean to detract from technology vendors who are working hard to keep up with all the new malware challenges, and let’s face it, we would be lost without many of their solutions. But organizations that purchase these products need to “trust but verify” them by requiring vendors and solution integrators to prove that the security solutions they are selling are in fact secure. The energy industry as a whole needs to focus on proving the existence of controls, not relying on documents and designs that say how a system should be secure. Policies may make you look good, but how many people read them? And if they did read them, would they follow them? How would you know? And could you place your hand on your heart and swear to the CEO, “I’m confident that our critical systems and data cannot be compromised”?

 

I say, “Less say, more do in 2013.” Energy companies globally need to stop waiting for regulations or for incidents to happen and must do more to secure their systems and supply. We know we have a problem in the industry, and it won’t go away while we wait for more documents that define how we should improve our security defenses. Make a start. The concepts aren’t new, and it’s better to invest money and effort in improved systems than to churn out more policies and paper controls and hope they make you more secure. And it is hope, because without evidence how can you really be sure the controls you design and plan are in place and effective?

 

Start by making improvements in the following areas and your overall security posture will also improve (a lot of this is old news, but sadly is not being done):

 

Recognize that compliance doesn’t guarantee security. You must validate it.
·         Use ISA99 for SCADA and ISO27001/2/5 for security risk management and controls.
·         Use compliance to drive budget conversations.
·         Don’t get lost in a policy framework. Instead focus on implementing, then validating.
·         Always validate paper security by testing internal and external controls!
Understand what you have and who might want to attack it.
·         Define critical assets and processes.
·         Create a list of who could affect these assets and how.
·         Create a layered security architecture to protect these assets.
·         Do this work in stages. Create value to the business incrementally.
·         Test the effectiveness of your plans!
Do the basics internally, including:
·         Authentication for logins and machine-to-machine communications.
·         Access control to ensure that permissions for new hires, job changers, and departing employees are managed appropriately.
·         Auditing to log significant events for critical systems.
·         Availability by ensuring redundancy and that the organization can recover from unplanned incidents.
·         Integrity by validating critical values and ensuring that accuracy is always upheld.
·         Confidentiality by securing or encrypting sensitive communications.
·         Education to make staff aware of good security behaviors. Take a Health & Safety approach.
Trust but verify when working with your suppliers:
·         Ask vendors to validate their security, not just tell you “it’s secure.”
·         Ask suppliers what their security posture is. Do they align to any security standards? When was the last time they performed a penetration test on client-related systems? Do they use a Security Development Lifecycle for their products?
·         Test their controls or ask them to provide evidence that they do this themselves!
Work with agencies who are there to assist you and make them part of your response strategy, such as:
·         Computer Emergency Readiness Team (CERT)
·         Centre for the Protection of National Infrastructure (CPNI)
·         North American Electric Reliability Corporation (NERC)
Trevor Niblock, Director, ICS and Smart Grid Services
INSIGHTS | January 25, 2013

S4x13 Conference

S4 is my favorite conference. This is mainly because it concentrates on industrial control systems security, which I am passionate about. I also enjoy the fact that the presentations cover mostly advanced topics and spend very little time covering novice topics.

Over the past four years, S4 has become more of a bits-and-bytes conference, with presentations that explain, for example, how to upload Trojan firmware to industrial controllers, and exposés that cover vulnerabilities (in the “insecure by design” and “ICS-CERT” senses of the word).

This year’s conference was packed with top talent from the ICS and SCADA worlds and offered a huge amount of technical information. I tended to follow the “red team” track, as these talks showed how to break into various levels of control system networks.

Sergey Gordeychik gave a great talk on the vulnerabilities in various ICS software applications, including the release of a “1,825-day exploit” for WinCC, a vulnerability that Siemens did not patch for five years. (It was finally closed in December 2012.)
 

Alexander Timorin and Dmitry Sklyarov released a new tool for reversing S7 passwords from a packet capture. A common feature of many industrial controllers is homebrew hashing algorithms and authentication mechanisms that simply fall apart under a few days of scrutiny. Their tool is being incorporated into John the Ripper. A trend in the ICS space seems to be the incorporation of ICS-specific attacks into existing attack frameworks. This makes ICS hacking far more available to network security assessors, as well as to the “Bad Guys”. My guess is that this trend will continue in 2013.
Billy Rios and Terry McCorkle talked about medical controllers and showed a Philips Xper controller that they had purchased on eBay. The computer itself had not been wiped; it contained hard-coded accounts as well as trivial buffer overflow vulnerabilities, and it ran Windows XP.
Arthur Gervais released a slew of Schneider Modicon PLC-related vulnerabilities. Schneider’s service for updating Unity (the engineering software for Modicon PLCs) and other utilities used HTTP to download software updates, for example. I was sad that I missed his talk due to a conflict in the speaking schedule.
Luigi Auriemma and his colleague Donato Ferrante demonstrated their own binary patching system, which allows them to patch applications while they are executing. The technology shows a lot of promise. They are currently marketing it as a way to provide patches for unsupported software. I think that the true potential of their system is to patch industrial systems without shutting down the process. It may take a while for any vendor to adopt this technique, but it could be a powerful motivator to get end users to apply patches. Scheduling an outage window is the most-cited reason for not patching industrial systems, and ReVuln is showing that we can work around this limitation.

 

My favorite talk was one that was only semi-technical in nature and more defensive than offensive. It was about implementing an SDL and a fuzz-testing strategy at OSIsoft. OSIsoft’s PI Server is the most frequently used data historian for industrial processes. Since C-level employees want to keep track of how their process is doing, historical data often must be exported from the control systems network to the corporate network in some fashion. In the best-case scenario, this is done by way of a PI Server in the DMZ. In the worst-case scenario, a corporate system reaches into the control network to communicate with the PI Server. Either way, the result is the same: PI is a likely target if an attacker wants to jump from the corporate network to the control network. It is terrific, and still all too rare, to see an ICS software company share its security experiences.

 

Digital Bond provides a nice “by the numbers” look at the conference.
If you are technically minded, internationally minded, and want to talk to actual ICS operators, S4 is a great place to start.
INSIGHTS | January 22, 2013

You cannot trust social media to keep your private data safe: Story of a Twitter vulnerability

I’m always worried about the private information I have online. Maybe this is because I have been hacking for a long time and I know that everything can be hacked, which makes me a bit paranoid. I have never trusted web sites to keep my private information safe, and nowadays it is almost impossible to avoid having private information published on the web, for example on social media sites. Sooner or later you could get hacked; that is a fact.

Currently, many web and mobile applications give users the option to sign in with their Twitter or Facebook account. Keeping in mind that Twitter currently has 200 million active monthly users (http://en.wikipedia.org/wiki/Twitter), it makes a lot of sense for third-party applications to offer users an easy way to log in. Also, since applications can obtain a wealth of information from your Twitter or Facebook account, most of the time you do not even need to register separately. Signing into third-party applications with Twitter or Facebook is convenient and saves time.

Every time I’m asked to sign in using Twitter or Facebook, my first thought is, “No way!”  I don’t want to give access to my Twitter and Facebook accounts regardless of whether I have important information there or not. I always have an uneasy feeling about giving a third-party application access to my accounts due to the security implications.

Last week I had a very interesting experience.

I was testing a web application that is under development. This application had an option to sign in with Twitter. If I selected this option, the application would have access to my public Twitter feed (such as reading Tweets from my timeline and seeing whom I follow). In addition, the application would be able to use Twitter functionality on my behalf (such as following new people, updating my profile, and posting Tweets for me). However, it would not have access to my private Twitter information (such as direct messages and, more importantly, my password). I knew this because of the following information displayed on Twitter’s web page for “Sign in with Twitter”:
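
Under the hood, “Sign in with Twitter” is OAuth 1.0a: the application sends the user to Twitter’s oauth/authenticate endpoint, while the broader “Authorize app” prompt (shown later in Image 2) lives at oauth/authorize. Here is a minimal sketch of the sign-in leg using the requests_oauthlib package; the keys and callback URL are placeholders.

```python
from requests_oauthlib import OAuth1Session

# Placeholder keys issued when the application is registered at
# https://dev.twitter.com/apps
CONSUMER_KEY = "xxxxxxxx"
CONSUMER_SECRET = "yyyyyyyy"

twitter = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                        callback_uri="https://app.example.com/callback")
twitter.fetch_request_token("https://api.twitter.com/oauth/request_token")

# oauth/authenticate drives the "Sign in with Twitter" page (Image 1);
# the wider "Authorize app" prompt (Image 2) uses oauth/authorize. The
# access token the application ends up with should never carry more
# permissions than the page the user actually saw.
print(twitter.authorization_url("https://api.twitter.com/oauth/authenticate"))
```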

[Image 1: Twitter’s “Sign in with Twitter” permissions page]

After viewing the displayed web page, I trusted that Twitter would not give the application access to my password and direct messages. I felt that my account was safe, so I signed in and played with the application. I saw that the application had the functionality to access and display Twitter direct messages. The functionality, however, did not work, since Twitter did not allow the application to access these messages. In order to gain access, the application would have to request proper authorization through the following Twitter web page:

[Image 2: Twitter’s “Authorize app” permissions page]

The web page displayed above is similar to the previous one (Image 1), but it also says the application will be able to access your direct messages, and the blue button is different: it says “Authorize app” instead of “Sign in”. While playing with the application, I never saw this web page (Image 2). I continued using the application for some time, exploring its functionality, logging in and out of the application and Twitter, and so on. Then, after logging in to the application, I suddenly saw something strange: the application was displaying all of my Twitter direct messages. This was a huge and scary surprise. How was this possible? How had the application bypassed Twitter’s security restrictions? I needed to know the answer.

My surprise didn’t end there. I went to https://twitter.com/settings/applications to check the application settings. The page said “Permissions: read, write, and direct messages”. I couldn’t understand how this was possible, since I had never authorized the application to access my “private” direct messages. I realized that this was a huge security hole.

I started to investigate how this could have happened. After some testing, I found that the application obtained access to my private direct messages when I signed in with Twitter a second or third time. The first time I signed in with Twitter on the application, it received only read and write permissions, which grant access to what Twitter displays on its “Sign in with Twitter” web page (see Image 1). Later, however, when I signed in with Twitter again without an active Twitter session (that is, when I had to enter my Twitter username and password), the application obtained access to my private direct messages. It did so without authorization, and Twitter displayed no message about it. It was a simple trick by which a third-party application could bypass Twitter’s restrictions and obtain access to a user’s direct messages.
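
One way to spot this kind of permission drift programmatically is to check the access level Twitter reports on authenticated REST calls; at the time, responses carried an X-Access-Level header with values such as read, read-write, or read-write-directmessages. A short sketch, again with placeholder credentials:

```python
from requests_oauthlib import OAuth1Session

# Placeholder application and user credentials.
session = OAuth1Session("CONSUMER_KEY", client_secret="CONSUMER_SECRET",
                        resource_owner_key="USER_TOKEN",
                        resource_owner_secret="USER_TOKEN_SECRET")

resp = session.get(
    "https://api.twitter.com/1.1/account/verify_credentials.json")

# The header reflects what the token can actually do, regardless of
# what the sign-in page promised the user.
if resp.headers.get("X-Access-Level") == "read-write-directmessages":
    print("This token can read direct messages; was that ever authorized?")
```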

In order for a third-party application to obtain access to Twitter direct messages, it first has to be registered and have its direct message access level configured here: https://dev.twitter.com/apps. This was the case for the application I was testing. In addition, and more importantly, the application has to obtain the user’s authorization on the Twitter web page (see Image 2) to access direct messages. In my case, it never got this: I never authorized the application, and I was never shown a web page requesting authorization to give the application access to my private direct messages.

I tried to quickly determine the root cause, but I had little time and could not pin it down, so I decided to report the vulnerability to Twitter and let them do a deeper investigation. The Twitter security team answered quickly and took care of the issue, fixing it within 24 hours. This was impressive; their team was very fast and responsive. They said the issue occurred due to complex code and incorrect assumptions and validations.

While I think the Twitter security team is great, I do not think the same of Twitter’s vulnerability disclosure policy. The vulnerability was fixed on January 17, 2013, but Twitter has not issued any alert or advisory notifying users.

There must be millions of Twitter users (remember, Twitter has 200 million active users) who have signed in with Twitter on third-party applications. Some of those applications might have gained, and might still have, access to users’ private direct messages (even after the security fix, the application I tested still had access to my direct messages until I revoked it).

Since Twitter has not alerted its users to this issue, I think we all need to spread the word. Please share the following with everyone you know:

Check your third-party application permissions here: https://twitter.com/settings/applications

If you see an application that has access to your direct messages and you never authorized it, then revoke it immediately.

Ironically, we could also use Twitter to help users. We could tweet the following:

Twitter shares your DMs without authorization, check 3rd party application permissions  https://www.ioactive.com/you-can-not-trust-social-media-twitter-vulnerable/ #ProtectYourPrivacy (Please RT)

I love Twitter, and I use it daily. However, I think Twitter still needs a bit of improvement, especially when it comes to alerting its users about security issues that affect their privacy.