How the latest Facebook hack could have cost its users money

Facebook has recently reported that external actors exploited a bug in its system to gain access to more than 50 million user accounts. Apparently, three different bugs were used together to obtain the access tokens of the affected users, which let the attackers log in to Facebook without needing the users’ passwords. Though Facebook is still investigating what data the attackers could have obtained, considering that the access token is powerful (it carries the permissions of the Facebook mobile app), we can assume they got everything. However, beyond harvesting user data, the attackers could also have performed several (automated or manual) actions with the affected users’ accounts, including ones that cost money. The following are off the top of my head, assuming 50 million user accounts:

Bypass Facebook News Feed algorithm

We know the Facebook News Feed algorithm favors posts that gain initial momentum (reactions, comments and shares). Thus, the attackers could have used the affected users’ accounts to generate fake reactions, comments and shares in order to fool the algorithm.

Sell Facebook Page Likes

To increase a Facebook Page’s Likes, one has to pay Facebook and advertise the Page to an audience that is likely to Like it. Using the 50 million accounts, however, the attackers could bypass Facebook’s advertising system and sell Likes directly. The same applies to the first case, where the attackers could sell traction.

Take over users’ Facebook Pages and Facebook Groups

Imagine having a Page with millions of Likes that you spent money on, and, because of a bug in Facebook’s system, the attackers take control of it and remove you from the admin list. Though Facebook might restore ownership, it is reportedly not that simple to get a Page back.

Use a configured Facebook ad account

This one would literally cost the user money. If a user has a configured ad account, the attackers could use it to promote something, which costs money. Moreover, the attackers could advertise something that violates Facebook’s policy and get the user’s ad account disabled.


According to the notification I received on Facebook, I am one of the 50 million affected users. Though Facebook is still working on it, I went through the places where I could check whether my account had been used to perform any such actions. For now, it seems fine. But I can’t say anything about the data they took.

 

Stay safe on this unsafe platform.

java.lang.VerifyError error after instrumenting/transforming Android apps

You might have encountered the java.lang.VerifyError DEX verification error when developing an Android app. There are several reasons for it, the most common being the IDE messing up the build process, in which case cleaning and rebuilding might solve the problem. In some cases it can also be caused by the tools we use (for example, security tools for obfuscation). There are several resources on the net for these cases, and they are relatively easy to fix.

However, what I want to write about in this post is not the developer’s point of view, but the automated-software-testing point of view (involving instrumentation or code transformation), where you have hundreds or even thousands of Android apps to test and no access to their source code. Here is my experience.

For a security-testing experiment on Android apps, I had to mutate apps that satisfy certain mutation criteria. After a mutation is applied, I have to automatically verify that it didn’t break the app. To achieve this, I apply the mutation (plus all the steps necessary to make the app ready for install), install the app on an emulator, and test whether the mutated component crashes. To tell whether a crash is caused by the applied mutation, I wrap the introduced statements in a try-catch block and log the exception.
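
A minimal sketch of that wrapping idea is below. The names (MutationProbe, MUTATION_TAG, mutatedCall) are hypothetical, and the real instrumentation rewrites DEX bytecode rather than Java source; this only shows the shape of the injected code.

public class MutationProbe {
    private static final String MUTATION_TAG = "MUTATION_PROBE"; // hypothetical log tag

    static void runMutatedComponent() {
        try {
            mutatedCall(); // the statements introduced by the mutation
        } catch (Throwable t) {
            // A tagged log entry later tells us the crash came from the
            // mutation rather than from the app's own logic.
            android.util.Log.e(MUTATION_TAG, "mutation caused a failure", t);
        }
    }

    static void mutatedCall() {
        // mutated statements go here
    }
}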

However, running the mutated apps on the emulator failed with java.lang.VerifyError. The strict Runtime refused to load the “inconsistent” bytecode into the VM because it found a “Bad method” or some other problem. Whether you hit this depends on the level of instrumentation you apply: if you are only introducing, say, a log statement, you may never encounter the problem and the instrumented app might run fine.

Since the mutation is applied automatically to many different real-world apps, addressing the problem app by app is difficult. For example, it is known that the Runtime reports an error if we wrap a synchronized block in a try-catch block (see the sketch below). So, while applying the mutation, it is difficult (though not impossible) to know in advance whether a call I wrap in a try-catch block will eventually execute a synchronized block. And even if I knew this in advance (say, during the static analysis that checks the mutation criteria), it wouldn’t help: I cannot skip the try-catch block, since I need it to tell whether a failure is caused by the mutation, and I cannot remove the synchronized block either, since that would interfere with the design of the app.
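
Here is the shape of the problematic pattern, as a hypothetical example. Note that as Java source this is perfectly legal, because javac and d8 emit the bookkeeping that releases the monitor on every exit path; the verifier failure appears when the try-catch is injected directly at the bytecode level, where the added handler can branch past a monitorexit and leave the monitor pair unbalanced.

public class SyncPattern {
    private static final Object LOCK = new Object();

    static void wrappedCall() {
        try {
            // 'synchronized' compiles to a monitorenter/monitorexit pair.
            synchronized (LOCK) {
                doWork(); // hypothetical app method
            }
        } catch (Throwable t) {
            System.err.println("mutation caused a failure: " + t);
        }
    }

    static void doWork() {
        // original app logic
    }
}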

Cause

This is just one case; the Android Runtime checks for several inconsistencies that were ignored in the Dalvik VM days. To mention some of the inconsistencies caught by the new Runtime:

  • extending a class that was declared final
  • overriding a package-private method
  • invalid control flow
  • unbalanced monitorenter/monitorexit (this might be the reason for the synchronized-block case, but I haven’t checked the final bytecode for the said inconsistency)
  • passing a wrong argument to a method

Solution

A more general solution would be to understand which modifications make verification fail and to improve the instrumentation accordingly. However, that is out of scope for me at the moment.

So, the solution I found is specific to my problem. Considering that ART became the default runtime in Android 5 (API 21), the easiest workaround was to use an emulator running, say, API 20. Since I know what kind of mutations I am applying, and I also monitor the executions, resorting to the less strict Dalvik runtime wouldn’t affect the general behavior of the app under test.

Therefore, if your instrumented Android app isn’t running on the emulator because of the ART java.lang.VerifyError, just use emulators running an API level below 21; it should be an easy workaround.

Cheers!

Chrome Extension “Video Downloader GetThemAll 30.0.2” might contain malware


So I have been using this extension for a while, and all of a sudden it was disabled. When I went to the extensions page to see what had happened, Chrome reported that the extension contains malware and was disabled for that reason.

What could have happened?

As has happened to other popular extensions, it could have been modified to include malicious behavior, transferred to potentially malicious owners, hacked, or updated with a policy-violating feature (e.g., downloading from YouTube); we simply don’t know. There is also a rumor that the extension was mining cryptocurrency. It is time to analyze version 30.0.2 in detail to understand what information, if any, could have leaked.

Though GetThemAll 30.0.3 is already available on the Chrome Web Store, it is probably better to stay away until there are further findings on what happened in the previous version.

A quick look at the diff between versions 30.0.2 and 30.0.3 shows that the former contains a suspicious, obfuscated “background.js” file that accesses the images/video_help.png file, which likewise exists only in version 30.0.2.

Cheers

Sources of dangerous vulnerabilities

There are a lot of whitehat and blackhat security researchers out there. While whitehat researchers inform the target company before disclosing vulnerabilities, blackhats either use the vulnerabilities themselves or sell them on the black market. But both kinds of researchers spend a lot of time stepping through code where potential vulnerabilities might exist (e.g., around input-handling code, hunting for buffer overflow or format-string vulnerabilities).

But there are a couple of other tricks that can facilitate the hunt. One of these is comparing the binaries of critical updates pushed by the vendor. Taking Windows as an example: if Microsoft pushes a critical update, say for one of the services handled by svchost.exe, an attacker can compare the old binary with the updated version to see where Microsoft patched the vulnerability. In some cases, reverse-engineering the patch might also reveal the vulnerability itself. The vulnerabilities found will interest the attacker if they allow remote code execution, local privilege escalation or other arbitrary code execution. Since not everyone updates their system as soon as an update is pushed, the attackers have enough time to craft an exploit and start using it. If I remember correctly, the Sasser worm (2004) exploited the vulnerability patched by MS04-011, which was discovered in this way.

The other potential source of information is Windows’ Dr. Watson. When applications crash, the error-reporting feature of Windows XP and later versions might leak important information about the source of the error. Reverse-engineering around that location might reveal a vulnerability in the target application. The error reports might be gathered on a lab computer or from various compromised computers.

These are definitely not the only ways to identify vulnerabilities from binaries.

X11 based Linux keylogger

As a challenge to the paper “Unprivileged Black-Box Detection of User-Space Keyloggers”, we were asked to write a Linux keylogger that can hide from the tool described in the paper. A friend and I came up with several ideas to keep our keylogger from being detected. We didn’t manage to implement all of them in the keylogger code because of time constraints, but the professor confirmed that if we had included those options, their tool wouldn’t have detected it. Before presenting the ideas we came up with, let me explain in a nutshell how their detection method works.

They assume that, by design, keyloggers capture keystrokes and save them to a file. If, for example, “AbCdeF” is typed on the keyboard, then somewhere some process is writing a file with the same bytes. So they check for this correlation by sending keystrokes to all running processes while monitoring file activity and counting the bytes written. If the same number of bytes is written to a file, the process that wrote the file is flagged as a keylogger.
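
To make this concrete, here is a toy sketch of the byte-count correlation at the heart of the approach. This is not the paper’s actual tool; the numbers and names are made up for illustration.

public class KeyloggerDetectorSketch {

    // Pearson correlation coefficient between two equal-length series.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Bytes injected per interval (known to the detector).
        double[] injected = {750, 120, 900, 300, 640};
        // Bytes a suspicious process wrote per interval (made-up numbers).
        double[] written  = {752, 121, 903, 300, 641};

        // A correlation near 1.0 flags the process as a likely keylogger.
        System.out.printf("correlation = %.3f%n", pearson(injected, written));
    }
}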

Our ideas to circumvent this are presented below. Obviously, they are for educational purposes only.

1. Buffering. We keep a buffer that changes its size every time it writes its contents to the file. Here is how it works.

Let’s say we have buf[1024], and let the first random flush threshold be 750. We keep buffering until 750 bytes are reached and write them to the file. Then the next threshold is chosen randomly, say 900, and the process continues like that.
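
Here is a minimal sketch of that idea (in Java for consistency with the other examples in this post; our actual keylogger was written in C, and the names here are made up):

import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Random;

// Keystrokes accumulate in a buffer and are flushed only when a randomly
// chosen threshold is reached, so the bytes written to the log file never
// line up with the bytes typed in any single detection interval.
public class RandomFlushBuffer {
    private static final int MAX = 1024;

    private final byte[] buf = new byte[MAX];
    private final Random rng = new Random();
    private int threshold = 750; // first random flush size
    private int len = 0;

    synchronized void onKey(byte b, FileOutputStream log) throws IOException {
        buf[len++] = b;
        if (len >= threshold) {
            log.write(buf, 0, len);           // flush the whole batch at once
            log.flush();
            len = 0;
            threshold = 1 + rng.nextInt(MAX); // pick the next random size, e.g. 900
        }
    }
}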

2. If we’re on an Internet-connected computer, we post the logged keystrokes directly to a remote server. That’s it: nothing on file.

3. Being selective about when we capture keys. Let’s face it: when we capture keys, it’s usually a password or something similarly sensitive. Why would we be interested in keys typed into Sublime? So we targeted web browsers: Mozilla Firefox and Google Chrome. What does this mean? Their tool sends our keylogger a key and it is simply ignored, while the keylogger keeps logging like a boss 🙂 A sketch of the filter follows.
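
A minimal sketch of the selectivity check, assuming the focused window’s title has already been obtained (e.g., via an X11 focus query such as XGetInputFocus; that part is omitted and the names here are hypothetical):

import java.util.Set;

public class SelectiveFilter {
    // Only log keys typed into one of the targeted browsers.
    private static final Set<String> TARGETS = Set.of("Mozilla Firefox", "Google Chrome");

    static boolean shouldLog(String focusedWindowTitle) {
        if (focusedWindowTitle == null) return false;
        for (String target : TARGETS) {
            if (focusedWindowTitle.contains(target)) {
                return true; // a browser is focused: record the key
            }
        }
        return false; // keys sent anywhere else (e.g., by the detector) are ignored
    }
}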

There were also other ideas, like producing output on file whose size differs from that of the entered keys by applying cryptography, but the paper says they addressed this issue very well, so we didn’t bother to test it. I will post the code if anybody is interested.

The C source code of X11 based Linux keylogger can be found here.

This work got us full points.