INSIGHTS | February 3, 2012

Solving a Little Mystery

Firmware analysis is a fascinating area within the vast world of reverse engineering, although not a very widespread one. Sometimes you end up at an impasse until you notice a minor (or major) detail you initially overlooked. That’s why sharing methods and findings is a great way to advance in this field.

While looking for certain information during a reversing session, I came across this great post. There is little to add except to solve the ‘mystery’ behind that simple filesystem and mention a couple of technical details.
This file system is part of Wind River’s web server architecture for embedded devices, so you will likely find it inside VxWorks-based firmware. It is known as MemFS (watch out, not the common MemFS) or the Wind River management file system, and basically allows devices to serve files via the embedded web server without needing an ‘actual’ file system, since this one lives in non-volatile memory.
VxWorks provides pagepack, a tool used to transform any file intended to be served by a WindWeb server into C code. A developer then simply compiles everything into the same firmware image.
From a reverser’s point of view, what we should find is the following structure:

There are a few things here worth mentioning:

  • The header is not necessarily 12 bytes; it can be 8, so the third field seems to be optional.
  • The first 4 bytes look like a flag field that may indicate, among other things, whether the file data is compressed (1 = Compressed, 2 = Plain).
  • The signature can vary between firmwares since it is defined by the constant ‘HTTP_UNIQUE_SIGNATURE’. In fact, we may find this signature twice inside a firmware: the first occurrence comes from the .h file where it is defined (close to other strings such as the web server banner), and the second is already part of the MemFS.
Hope these additional details help you in your future research.
INSIGHTS | January 17, 2012

A free Windows Vulnerability for the NSA

Some months ago at Black Hat USA 2011 I presented this interesting issue in the workshop “Easy and Quick Vulnerability Hunting in Windows,” and now I’m sharing a more detailed explanation with everyone in this blog post.

In Windows 7 and Windows 2008, the folder C:\Windows\Installer contains many installer files (from already installed applications) with what appear to be random names. When run, some of these installer files (like Microsoft Office Publisher MUI (English) 2007) will automatically elevate privileges and try to install when any Windows user executes them. Since the applications are already installed, there’s no problem, at least in theory.

 

However, an interesting issue arises during the installation process when running this kind of installer: a temporary file is created in C:\Users\username\AppData\Local\Temp, which is the temporary folder for the current user. The created file is named Hx????.tmp (where ???? seem to be random hex numbers), and it appears to be a COM DLL from the Microsoft Help Data Services Module whose original name is HXDS.dll. This DLL is later loaded by the msiexec.exe process running under the System account, which is launched by the Windows Installer service during the installation process.

 

When the DLL file is loaded, the code in the DLL runs as the System user with full privileges. At first sight this looks like an elevation-of-privileges vulnerability: the folder where the DLL file is created is controlled by the current user, and the DLL is then loaded and run under the System account, meaning any user could run code as System by replacing the DLL file with a specially crafted one before the DLL is loaded and executed.

 

Analysis reveals that the issue is not easily exploitable, since the msiexec.exe process generates an MD5 hash of the DLL file and compares it with a known-good MD5 hash value read from a file located in C:\Windows\Installer, which is only readable and writable by the System and Administrators accounts.

 

In order to exploit this issue, an attacker needs to replace the DLL file with a modified DLL containing exploit code that still matches the valid MD5 hash. The attacker’s DLL would then be run under the System account, allowing privilege elevation and operating system compromise. The problem is that this is not a simple attack: it is an attack against the MD5 hashing algorithm known as a second-preimage attack, for which no practical method exists that I know of, so it is infeasible for a regular attacker to generate a file with the same MD5 hash as the existing DLL.

 

The reason for the title of this post comes from the fact that intelligence agencies, which are known for their cracking technologies and power, probably could perform this attack and build a local elevation of privileges 0day exploit for Windows.

 

I don’t know why Microsoft continues using MD5; it has been banned by the Microsoft SDL since 2005, so it seems these components were overlooked or built without following SDL guidance. Who knows in what other functionality MD5 is still used by Microsoft, allowing abuse by intelligence agencies.

 

Note: When installing some Windows updates, the Windows Installer service also creates the same DLL file in the C:\Windows\Temp folder, possibly allowing the same attack.

 

The following YouTube links provide more technical details and video demonstrations about this vulnerability.

References.

INSIGHTS | January 9, 2012

Common Coding Mistakes – Wide Character Arrays

This post contains a few of my thoughts on common coding mistakes we see during code reviews when developers deal with wide character arrays. Manipulating wide character strings is reasonably easy to get right, but plenty of “gotchas” still pop up. Coders should take care, because a few things can slip your mind when dealing with these strings and result in mistakes.

A little bit of background:
The term wide character generally refers to character data types with a width larger than a byte (the width of a normal char). The actual size of a wide character varies between implementations, but the most common sizes are 2 bytes (e.g., Windows) and 4 bytes (e.g., Unix-like OSes). Wide characters usually represent a particular character using one of the Unicode encodings: on Windows this will be UTF-16, and for Unix-like systems, whose wide characters are twice the size, this will usually be UTF-32.

 

Windows seems to love wide character strings and has made them standard. As a result, many Windows APIs have two versions: functionNameA and functionNameW, an ANSI version and a wide char string version, respectively. If you’ve done any development on Windows systems, you’ll definitely be no stranger to wide character strings.

 

There are definite advantages to representing strings as wide char arrays, but there are a lot of mistakes to make, especially if you’re used to developing on Unix-like systems or you forget to consider the fact that one character does not equal one byte.

 

For example, consider the following scenario, where a Windows developer begins to unsuspectingly parse a packet that follows their proprietary network protocol. The code shown takes a UTF-16 string length (unsigned int) from the packet and performs a bounds check. If the check passes, a string of the specified length (assumed to be a UTF-16 string) is copied from the packet buffer to a fresh wide char array on the heap.

 

[ … ]
if(packet->dataLen > 34 || packet->dataLen < sizeof(wchar_t)) bailout_and_exit();
size_t bufLen = packet->dataLen / sizeof(wchar_t);

wchar_t *appData = new wchar_t[bufLen];
memcpy(appData, packet->payload, packet->dataLen);
[ … ]
This might look okay at first glance; after all, we’re just copying a chunk of data to a new wide char array. But consider what would happen if packet->dataLen was an odd number. For example, if packet->dataLen = 11, we end up with size_t bufLen = 11 / 2 = 5 since the remainder of the division will be discarded.

 

So, a five-element–long wide character buffer is allocated into which the memcpy() copies 11 bytes. Since five wide chars on Windows is 10 bytes (and 11 bytes are copied), we have an off-by-one overflow. To avoid this, the modulo operator should be used to check that packet->dataLen is even to begin with; that is:

 

 

if(packet->dataLen % 2) bailout();
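Putting the checks together, a corrected version of the parsing snippet could look like the following sketch. The upper bound of 34 is kept from the original example (scaled to elements here), and using sizeof(wchar_t) in the modulo check keeps the test correct on platforms where wide chars are wider than two bytes:

```cpp
#include <cstring>
#include <wchar.h>

// Sketch of the corrected parse: reject lengths that are not a multiple of
// the wide char size up front, so the division can never silently discard bytes.
wchar_t *parse_payload(const unsigned char *payload, size_t dataLen) {
    if (dataLen % sizeof(wchar_t) != 0) return NULL;   // odd/ragged length: bail out
    if (dataLen < sizeof(wchar_t) ||
        dataLen > 34 * sizeof(wchar_t)) return NULL;   // illustrative bounds check
    size_t bufLen = dataLen / sizeof(wchar_t);          // exact, nothing discarded
    wchar_t *appData = new wchar_t[bufLen + 1];         // +1 *element* for L'\0'
    memcpy(appData, payload, dataLen);
    appData[bufLen] = L'\0';                            // terminate explicitly
    return appData;
}
```

The caller owns the returned buffer and must release it with delete[].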

 

Another common occurrence is to forget that the NULL terminator on the end of a wide character buffer is not a single NULL byte: it’s two NULL bytes (or 4, on a UNIX-like box). This can lead to problems when the usual len + 1 is used instead of the len + 2 that is required to account for the extra NULL byte(s) needed to terminate wide char arrays, for example:

 

int alloc_len = len + 1;
wchar_t *buf = (wchar_t *)malloc(alloc_len);
memset(buf, 0x00, alloc_len);
wcsncpy(buf, srcBuf, len);
If srcBuf had len wide chars in it, all of these would be copied into buf, but wcsncpy() would not NULL terminate buf. With normal character arrays, the added byte (which will be a NULL because of the memset) would be the NULL terminator and everything would be fine. But since wide char strings need either a two- or four-byte NULL terminator (Windows and UNIX, respectively), we now have a non-terminated string that could cause problems later on.
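A corrected version sizes the buffer in wide characters and converts to bytes in exactly one place. The helper name below is mine, but the calls are the same standard ones used in the broken example:

```cpp
#include <cstdlib>
#include <cstring>
#include <wchar.h>

// Room for `len` wide chars plus a full-width terminator: +1 *element*, not +1 byte.
wchar_t *dup_wide(const wchar_t *src, size_t len) {
    size_t alloc_bytes = (len + 1) * sizeof(wchar_t);
    wchar_t *buf = (wchar_t *)malloc(alloc_bytes);
    if (buf == NULL) return NULL;
    memset(buf, 0x00, alloc_bytes);  // zero the whole buffer, terminator included
    wcsncpy(buf, src, len);          // copies at most len wide chars
    return buf;                      // buf[len] is already L'\0' from the memset
}
```
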

 

Some developers also slip up when they wrongly interchange the number of bytes and the number of characters. That is, they use the number of bytes as a copy length when what the function was asking for was the number of characters to copy; for example, something like the following is pretty common:
int destLen = (stringLen * sizeof(wchar_t)) + sizeof(wchar_t);
wchar_t *destBuf = (wchar_t *)malloc(destLen);
MultiByteToWideChar(CP_UTF8, 0, srcBuf, stringLen, destBuf, destLen);
[ do something ]

 

The problem with the sample shown above is that the sixth parameter to MultiByteToWideChar is the length of the destination buffer in wide characters, not in bytes, as in the call above. Our destination length is out by a factor of two here (or four on UNIX-like systems, generally), and ultimately we can end up overrunning the buffer. These sorts of mistakes result in overflows, and they’re surprisingly common.
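Since MultiByteToWideChar() is Windows-only, here is the same fix sketched with the portable mbstowcs(), whose size argument is likewise counted in wide characters, not bytes:

```cpp
#include <cstdlib>
#include <clocale>
#include <wchar.h>

// Pass the destination capacity in *elements*; dividing a byte size by
// sizeof(wchar_t) before the call is the correction the broken example misses.
size_t to_wide(const char *src, wchar_t *dest, size_t destChars) {
    return mbstowcs(dest, src, destChars);  // returns (size_t)-1 on bad input
}
```

The same element-count reasoning applies verbatim to the cchWideChar parameter of MultiByteToWideChar on Windows.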

 

The same sort of mistake can also be made when using “safe” wide char string functions, like wcsncpy(), for example:
unsigned int destLen = (stringLen * sizeof(wchar_t)) + sizeof(wchar_t);
wchar_t destBuf[destLen];
memset(destBuf, 0x00, destLen);
wcsncpy(destBuf, srcBuf, sizeof(destBuf));
Although using sizeof(destBuf) for the maximum destination size would be fine if we were dealing with normal characters, this doesn’t work for wide character buffers. Instead, sizeof(destBuf) will return the number of bytes in destBuf, which means the wcsncpy() call above can end up copying twice as many bytes to destBuf as intended—again, an overflow.
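A fixed version of the stack-buffer example derives the count argument from the element count of the destination rather than from its size in bytes (the array-reference parameter is just a way to keep sizeof meaningful inside this sketch):

```cpp
#include <cstring>
#include <wchar.h>

// The count argument to wcsncpy() is in wide characters, so derive it
// from the destination's element count, never from its byte size.
#define ELEM_COUNT(a) (sizeof(a) / sizeof((a)[0]))

void safe_copy(wchar_t (&destBuf)[64], const wchar_t *srcBuf) {
    memset(destBuf, 0x00, sizeof(destBuf));            // memset *does* take bytes
    wcsncpy(destBuf, srcBuf, ELEM_COUNT(destBuf) - 1); // leave room for L'\0'
}
```
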

 

The other wide char equivalent string manipulation functions are also prone to misuse in the same ways as their normal char counterparts—look for all the wide char equivalents when auditing such functions as swprintf, wcscpy, wcsncpy, etc. There are also a few wide char-specific APIs that are easily misused; take, for example, wcstombs(), which converts a wide char string to a multi-byte string. The prototype looks like this:
size_t wcstombs(char *restrict s, const wchar_t *restrict pwcs, size_t n);

 

It does bounds checking, so the conversion stops when n bytes have been written to s or when a NULL terminator is encountered in pwcs (the source buffer). If an error occurs, i.e., a wide char in pwcs can’t be converted, the conversion stops and the function returns (size_t)-1; otherwise the number of bytes written is returned. MSDN considers wcstombs() deprecated, but there are still a few common ways to mess up when using it, and they all revolve around not checking return values.

 

If a bad wide character is encountered in the conversion and you’re not expecting a negative number to be returned, you could end up under-indexing your array; for example:
int i;
i = wcstombs( … );  // wcstombs() can return (size_t)-1
buf[i] = L'\0';
If a bad wide character is found during conversion, the destination buffer will not be NULL terminated and may contain uninitialized data if you didn’t zero it or otherwise initialize it beforehand.
Additionally, if the return value equals n, the destination buffer won’t be NULL terminated, so any string operations later carried out on or using the destination buffer could run past the end of the buffer. Two possible consequences are a potential page fault if an operation runs off the end of a page, or memory corruption bugs, depending on how destbuf is used later. Developers should avoid wcstombs() and use wcstombs_s() or another, safer alternative. Bottom line: always read the docs before using a new function, since APIs don’t always do what you’d expect (or want) them to do.

 

Another thing to watch out for is accidentally interchanging wide char and normal char functions. A good example would be incorrectly using strlen() on a wide character string instead of wcslen()—since wide char strings are chock full of NULL bytes, strlen() isn’t going to return the length you were after. It’s easy to see how this can end up causing security problems if a memory allocation is done based on a strlen() that was incorrectly performed on a wide char array.
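The mismatch is easy to demonstrate: on a typical machine, strlen() stops at a zero byte embedded inside the very first wide character of an ASCII wide string, so the two length functions disagree wildly:

```cpp
#include <cstring>
#include <wchar.h>

// strlen() counts bytes up to the first zero byte; wcslen() counts wide chars.
// For an ASCII wide string these can never agree on the byte length.
size_t narrow_len(const wchar_t *s) { return strlen((const char *)s); }
size_t wide_len(const wchar_t *s)   { return wcslen(s); }
```
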

 

Mistakes can also be made when trying to develop cross-platform or portable code—don’t hardcode the presumed length of wchars. In the examples above, I have assumed sizeof(wchar_t) = 2; however, as I’ve said a few times, this is NOT necessarily the case at all, since many UNIX-like systems have sizeof(wchar_t) = 4.

 

Making these assumptions about width could easily result in overflows when they are violated. Let’s say someone runs your code on a platform where wide characters aren’t two bytes in length, but are four; consider what would happen here:
wchar_t *destBuf = (wchar_t *)malloc(32 * 2 + 2);
wcsncpy(destBuf, srcBuf, 32);
On Windows, this would be fine since there’s enough room in destBuf for 32 wide chars plus a NULL terminator (66 bytes). But as soon as you run this on a Linux box—where wide chars are four bytes—wcsncpy() can write 4 * 32 = 128 bytes into that 66-byte buffer, resulting in a pretty obvious overflow.
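A portable rewrite of that allocation replaces the hardcoded 2 with sizeof(wchar_t) and terminates explicitly (the helper name is mine):

```cpp
#include <cstdlib>
#include <wchar.h>

// Let the compiler supply the width instead of hardcoding 2 (Windows)
// or 4 (most Unix-likes).
wchar_t *alloc_and_copy(const wchar_t *srcBuf, size_t maxChars) {
    wchar_t *destBuf = (wchar_t *)malloc((maxChars + 1) * sizeof(wchar_t));
    if (destBuf == NULL) return NULL;
    wcsncpy(destBuf, srcBuf, maxChars);
    destBuf[maxChars] = L'\0';   // wcsncpy() may not terminate; do it ourselves
    return destBuf;
}
```
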

 

So don’t make assumptions about how large wide characters are supposed to be, since it can and does vary. Always use sizeof(wchar_t) to find out.

 

When you’re reviewing code, keep your eye out for the use of unsafe wide char functions, and ensure the math is right when allocating and copying memory. Make sure you check your return values properly and, most obviously, read the docs to make absolutely sure you’re not making any mistakes or missing something important.
INSIGHTS | December 7, 2011

Automating Social Engineering: Part Three

 

PHASE 2: Ruses

 

Once we have enough information about the employees and company in question, we can begin to make sense of it and start crafting our ruses. It is worth noting that this stage currently does not have much automation, since it requires a lot of human intuition and information processing. As we continue developing the tool we will certainly be able to automate more and create decision-making systems capable of producing useful ruses, but for now a key part of this phase is looking for key ideas and useful information that help us make our attack as realistic and trustworthy as possible.

 

As previously mentioned, this stage is not fully automated. Still, EMaily provides several examples and template emails that can help automate some ruses. Having said that, if we want high success rates with our attacks, it is clear that testers will still need to craft their own custom-built email ruses based on the data gathered in the previous phase.

 

Before we move on, there are a few things worth discussing before we dig into the technical tools themselves. Social engineering is about abusing people: their needs, senses, ideas, likes, dislikes, fears, etc. It is basically about selling a good lie, one good enough that even you would believe it.
Depending on the feeling or idea we are trying to trigger, we need to use a different ruse or combination of ruses. For example, the list below shows a short list of ruses grouped by the feeling they try to trigger.

 

Ruses Examples:

 

General Templates:
  • Facebook invite
  • Twitter invite
  • LinkedIn invite
  • Mail cannot be delivered
  • Etc.
Fear Oriented Templates:
  • Virus found
  • Compliance updates
  • Popular thread in the news
  • Etc.

Needs or Likes Templates:

  • Win an iPad
  • Internships or newly available internal positions
  • General company party
  • New corporate discounts
  • Etc.
Gossip or Need to Know Templates:
  • Latest financial data
  • Layoffs for next month
  • Internal memorandum
  • Etc.
Let’s look at a couple of real-life examples of such ruses:

 


 

In this case, the person copied a friendship request from Facebook. We can easily change the links and redirect them to a fake Facebook website, among many other things. Furthermore, we could use other techniques, discussed later, to perform internal egress firewall rule scans by adding fake images to the email.

 

PHASE 3: Internal Information Gathering: Software and Physical Networks

Once we have a target list (emails, names, etc.) and a ruse, we need to begin gathering information about the internal network and infrastructure from within. One possible way of doing this is by sending one or more rounds of emails using specially crafted HTML templates consisting of several image tags pointing to different ports, as shown in the figure below.

 

 
EMaily is a command-line tool created to send multiple template emails using several servers at the same time. It contains many templates, but users can create their own and populate them as needed. It is worth noting that EMaily is also an extensible Ruby library that can be called from any other Ruby script or application.

Once we have a list of ports we want to scan from an internal perspective, we do not need to generate the entire list by hand. EMaily will automatically generate it and populate the corresponding email using the template system, as shown in the following code snippet, by simply using the %%payload[port 1, … ,port n]%% tag.

You may ask, “But what is there to gain from generating this random set of images?” The short answer is: lots of information. Not only will we be able to confirm that the firewall of the company we are attacking is not properly filtering a particular port, but whenever an application makes an HTTP request, it also sends lots of useful information back to the attacker.

 

As we can see from the output generated by EMaily, this will test egress rules and obtain information such as the operating system, the email client used, IP addresses, etc.
Basically, EMaily works as a reverse scanner: it lets testers send their “payloads” to victims, whose clients render the images and generate a list of requests served by the EMaily web server process, allowing us to gather all kinds of information, as shown in the figure below.

 

Now we have email addresses and, hopefully, tons of information tied to each one: egress rules, operating system name and version, browser or mail client name and version, mobile phone information, etc. We have enough to start using the more interesting ruses and to target victims with the correct payloads (since we know the OS and mail client version) over the open ports that allow connections back to us. In the next phase we will discuss how to use all that information to successfully compromise the company being tested.

This is part three of a four-part social engineering post. The next and final entry will discuss compromising machines.

 

INSIGHTS | November 8, 2011

Automating Social Engineering: Part Two

 

As with any other type of penetration test, we need to gather information. The only difference here is that instead of looking for operating system types, software versions, and vulnerabilities, we’re searching for information about the company, their employees, their social networking presence, et cetera.

Given that we’re performing an assessment from a corporate perspective, there are some limitations with regard to privacy and employees’ private life, but the truth is that real attackers won’t abide by such limitations. So, you should assume that any information made public or available on the Internet will be considered usable. (Disclosure: consultants/employees should talk to your client/employer and lawyers to define the scope for any penetration test prior to information gathering.)

 

As stated in the comic, information gathering is really simple and there’s only one rule: there is never enough information; the more you have, the better. Everything is relevant in some way or another—everything from company icons, images, and documents all the way down to where an employee went to dinner last week and with whom.

 

Luckily for us, Mark Zuckerberg (creator of Facebook) and corporate America have made people’s lives public and easy to follow by convincing them that they’re supposed to forget about privacy and share as much information as they can with as many people and services as they can, because it is “good” for them.

 

The type of data we need depends on the type of attack we’re performing. Given that we are currently discussing social engineering assessments in a corporate context, we will surely need to gather corporate email accounts and plenty of names. There are many tools capable of performing Open Source Intelligence (OSINT), including theHarvester, Maltego, and, of course, ESearchy.

 

ESearchy is a project that I began a few years ago as a small Ruby library with a proof-of-concept CLI tool capable of searching the Internet for email addresses and people from a specific domain or company. Currently, the supported search plug-ins include but are not limited to:

 

Search Engines
– Google
– Bing
– Yahoo
– AltaVista
Social Engines
– LinkedIn
– Google Profiles
– Naymz
– Classmates
– Spoke
– Google+
Other Engines
– PGP servers
– Usenets
– GoogleGroups Search
– Spider
– LDAP
In addition to that, ESearchy is capable of downloading—upon request—several types of files and searching their contents for emails. Supported file types include but are not limited to:
PDF
DOC
DOCX
ODP
ODS
ODB
XLSX
PPTX
TXT
ODT
ASN

 

With this simple introduction, we’re now going to install the tool and test a few of the information gathering concepts described above. ESearchy is currently hosted as a Ruby gem at https://rubygems.org, so by fetching the gem in any Linux, OSX, or Windows environment, it will install all the necessary dependencies and binaries.
Note: Ubuntu users will need to add the Ruby path to their $PATH in order to run esearchy.

 

$> sudo gem install esearchy

 

Once ESearchy is installed, we are ready to start gathering information. As previously mentioned, the application supports several types of searches using the esearchy CLI command and/or by creating custom scripts using the ESearchy library—that is, scripts that require ‘esearchy’.
Using the tool is straightforward; for example:

 

$> esearchy -q @company.com --enable-google --enable-pgp
$> esearchy -q @company.com -c "Company Inc" --enable-linkedin
For a full description of the supported engines and all other ESearchy features, please refer to the tool’s own help command:
esearchy -h
Despite now having a list of email addresses related to the company in question, it’s a good idea to continue gathering as much data as possible. We should keep performing searches; we may need information about the DNS and mail servers, as well as other data usually collected as part of a standard penetration test. ESearchy currently does not perform these search types, but that functionality will be supported in future versions as a separate, standalone tool.
Last but not least, a good way to confirm (and possibly obtain more) email addresses involves checking the SMTP server for vulnerabilities (such as information disclosures) using VRFY or EXPN, et cetera. If present, this information should allow us to confirm our email addresses and possibly even acquire more.

 

This is part two of a four-part social engineering post. The next entry will discuss using ruses to gather more intrusive information about the internal network.
INSIGHTS | November 1, 2011

Automating Social Engineering: Part One

Since the original conceptualization of computer security, and perhaps even before, social engineering has existed. One could say that social engineering began when societies began, whether it was realized or not. It is now time to hand some of this work over to scripts and applications to make it a little more interesting…

As the years passed in the computer security community, network penetration became more and more necessary, but computers were not the only thing getting compromised. Social engineering was part of the hacker subculture, but it was never a service offered by companies.

In recent years—largely due to the fact that they are doing more business online—companies have become more security aware and networks have become more “secure.” Finding remote vulnerabilities on Internet-facing networks that can be exploited is becoming more and more difficult due, in part, to such realities as the increased safety of operating systems, the standardization of automated patching, and the hiring of security personnel. Having said that, many would argue, “What about corporate networks? Do companies secure their networks the same way they secure production servers?”

The short answer, in my experience, is no. Companies have different approaches to and views about internal and external networks: they often don’t think about internal threats. They fail to understand that internal threats don’t necessarily mean an internal employee going rogue; it could easily be an attacker with access to the corporate network who is attacking it from an internal perspective.

For thousands of reasons and excuses, workstations and internal servers are never kept as secure as external servers: they usually lack up-to-date patching schedules, and are loosely and improperly configured. On top of this already insecure network are the human users, which includes IT admins, engineers, and developers. Your employees.
Employees: A group of people who can perform amazing tasks such as infect their computer in less than two hours, install buggy freeware apps, and open all those links that come with explicit warnings such as DO NOT OPEN – VIRUS FOUND.

To make a long story short: hackers, spammers, botnets, criminal organizations, and all the other “bad guys” constantly take advantage of the weakest link in all types of security: the human factor. The reality is, it doesn’t matter how much you harden a computer; you can rely on a human to find a way to compromise it.

Social engineers are acutely aware of how human psychology operates, and they are well aware of human needs and feelings. Consequently, they will use and abuse these “issues” to craft their ruses and attacks.
Additionally, due to the rise of social networks in personal and corporate environments, people are constantly checking their Facebook, LinkedIn, email, Twitter, Google+, and Gmail—everyone wants to know what is going on within their company. The 21st century human has an addictive need to be informed in real-time. It is human nature to communicate and interact with people, and to be as informed as you can about your environment. Deep down, we all love to gossip.
Before we even start, it’s worth noting that client-side attacks, phishing attacks, social engineering attacks, and social engineering penetration tests have existed for a long time. Due to the ever-tightening security around networking in recent years on one hand, and the expansion and rapid growth of social networks on the other, these attacks have gained strength, and new attack types are appearing daily, abusing the communication channels humans are working so hard to create.

Standard attack types:
• Classic email-driven social engineering attacks
• Website phishing attacks
• Targeted social hacking (Facebook, LinkedIn, Google+, et cetera)
• Physical social engineering
In my next three posts, I will walk through the steps to perform a social engineering attack from a corporate point of view as a security consultant. I’ll begin with information gathering, the indispensable “homework phase” with which every social engineering engagement should begin.

INSIGHTS | October 3, 2011

Windows Vulnerability Paradox

For those who read just the first few lines, this is not a critical vulnerability. It is low impact but interesting, so keep reading.

 

This post describes the Windows vulnerability I showed during my Black Hat USA 2011 workshop “Easy and Quick Vulnerability Hunting in Windows”.

 

The Windows security update for Visual C++ 2005 SP1 Redistributable Package (MS11-025) is a security patch for a binary planting vulnerability. This kind of vulnerability occurs when someone opens or executes a file and this file (or the application used to open the file) has dependencies (like DLL files) that will be loaded and executed from the current folder or other folders than can be attacker controlled. This particular vulnerability allows an attacker to execute arbitrary code by tricking a victim user into opening a file from a network share. When the victim user opens the file, the application associated with the file is executed, and an attacker-crafted DLL file is loaded and executed by the application.

 

It’s either funny or scary (you choose) that the Windows security update meant to fix the above-described vulnerability is also vulnerable to the same kind of vulnerability it fixes, and it can be exploited to elevate privileges.

 

When installing the security update on 64-bit Windows 7, the file vcredist_x64.exe is downloaded and then executed under the System account (the most powerful Windows account, it has full privileges) with some command line options:

 

"C:\Windows\SoftwareDistribution\Download\Install\vcredist_x64.exe" /q:a /c:"msiexec /i vcredist.msi /qn"
After being run, vcredist_x64.exe tries to launch the msiexec.exe process from the C:\Windows\Temp\IXP000.TMP temporary folder, which is where the vcredist.msi used in the command line option is located; but because msiexec.exe doesn’t exist there, vcredist_x64.exe fails to run it. Then vcredist_x64.exe launches msiexec.exe from C:\Windows\SysWOW64, where msiexec.exe is located by default on 64-bit Windows 7.

 

There is an obvious vulnerability here, and it can be exploited by low-privileged Windows users: the DACL of the C:\Windows\Temp\IXP000.TMP temporary folder grants write permission to the Users group, so any Windows user can place a file named msiexec.exe in that folder and execute arbitrary code under the System account when the vulnerable security update is installed.

 

While this is an interesting vulnerability, it’s not critical at all. First, to be vulnerable you have to have the vulnerable package installed without the security update applied. Second, for an attacker to exploit this vulnerability and elevate privileges, the option “Allow all users to install updates on this computer” must be enabled. This option is enabled on some systems, depending on how Windows updates are configured to be installed.

 

This presents an interesting paradox: you're vulnerable if you haven't applied the vulnerable patch, and you're not vulnerable if you have applied it. In other words, the patch for the vulnerable patch is the vulnerable patch itself.

 

The following links provide some more technical details and video demonstrations about this vulnerability and how it can be exploited:
References
INSIGHTS |

Easy and Quick Vulnerability Hunting in Windows

I’m glad to start this new blog for IOA Labs by publishing the video demonstrations and updated slides from my Black Hat USA 2011 workshop. I hope you like them; please send me your feedback, questions, etc. We will continue posting cool material from our researchers very soon. Stay tuned!

INSIGHTS | March 20, 2011

Blackhat TPM Talk Follow-up

Since speaking at Black Hat DC 2009, we have received several inquiries regarding the security of the SLE66PE series smartcard family.

Here are some issues that should be pointed out:

We have heard, “…it took 6 months to succeed…”

The reality is that it took 4 months to tackle obstacles found in any <200nm device, such as:

  1. Capacitance/load of the probe needles while the chip is running.
  2. Powering the device inside the chamber of a FIB workstation.
  3. Level-shifting the 1.8V core voltage, following what we learned in #1 above.
  4. Cutting out metal layers without creating electrical shorts.
  5. Other more minute issues regarding the physical size of the die.

Upon overcoming the points above, the actual analysis required no more than approximately two months.

In addition, the techniques listed above apply to all devices in the <200nm category (SecureAVR, SmartMX, ST21, ST23).

We have heard, “…you said the Infineon SLE66 was the best device out there in the market…”

The Infineon SLE66PE is a very secure device; however, it, like its competitors, has its strengths and weaknesses.

Some examples of weaknesses are:

  1. The layout of all Infineon SLE50/66 ‘P’ or ‘PE’ devices is very modular by design
  2. Lack of penalty if the active shield is opened
  3. Execution begins from a CLEAR (unencrypted) ROM that is ‘invisible’ to the user
  4. The CPU core is based on a microcode/PLA-type implementation
  5. Power-on reset always begins running from the externally supplied clock
  6. The current design is based on a previous 600nm version designed around 1998
  7. A 3-metal-layer design for “areas of interest” (the 4th layer is the active shield)

Some examples of strengths are:

  1. The ‘PE’ family uses bond pads located up the middle of the device.
  2. The ROMKey must be loaded before the device is attacked (otherwise you just see the clear ROM content).
  3. The MED is quite powerful if used properly for EEPROM content.
  4. The mesh is consistent across the device and divided into sections.
  5. Auto-increment of the memory base address.
  6. Mixing of physical vs. virtual address space for MED/memory fetch.

No device is perfect, and all devices have room for improvement. Some things to consider when choosing a smartcard are:

  • Does the CPU ever run on an external clock?
  • What is the penalty for an active-shield breach?
  • What is the fabrication process geometry?
  • How many metal layers does the device have?
  • Which labs might have evaluated this device, and what are their capabilities?

Lastly, just because a device has been Common Criteria certified does not mean much to an attacker armed with current tools. This is a common oversight.

There is an ST23 smartcard device that has recently been certified EAL6+, yet the device has an active shield with almost 1-micron-wide tracks and 1-2 micron spacing! This makes a person scratch their head and say, “WTH????”

We have some new content to post on the blog soon. Be sure to tune in for that; we will tweet an alert as well.

INSIGHTS | August 9, 2010

Atmel ATMEGA2560 Analysis (Blackhat follow-up)

At this year’s Black Hat USA briefings, the ATMEGA2560 was shown as an example of an insecure vs. secure device. We have received a few requests for more information on this research, so here it goes…

The device did not even need to be stripped down, thanks to designer laziness back at Atmel HQ. All we did was look for the metal plates we detailed in our ATMEGA88 teardown last year, and we quickly deduced which outputs were the proper ones in under 20 minutes.

Atmel likes to cover the ‘important’ AVR fuses with metal plating, we assume to prevent the floating gate from getting hit with UV. The debunk to this theory, however, is that UV will SET the fuses, not clear them!

For those who absolutely must know how to unlock the device, just click on the “Money Shot!”