-
Full Disclosure: Heartbleed
I find it impossible to believe that you could find your way to my blog without knowing what the Heartbleed vulnerability is, but just in case, more information can be found here. It has been all over the news; if you read news on the Internet, you HAD to have heard about it.
The CodeWatch site was vulnerable. In an effort to support TLS 1.2 and cryptographic ciphers that utilize PFS, I was running one of the latest versions of OpenSSL, 1.0.1e to be exact. Within a day of the announcement I had OpenSSL patched, and within two days or so I had re-generated my private key and had my CA reissue my certificate based on the new key, but I am just now getting around to posting about it. If you use the CodeWatch web app, then you should change your password(s) just in case.
Here is a list of actions I took to remediate the vulnerability (a rough command-line sketch follows the list):
- Downloaded and compiled the patched version of OpenSSL (1.0.1g).
- Compiled the latest stable version of Nginx against OpenSSL 1.0.1g. This was performed on the externally facing web server; I also rebuilt several internal systems to remediate the vulnerability inside the network.
- Re-generated the web server’s private key.
- Issued a new certificate signing request using the new private key.
- Issued a new certificate using the certificate signing request with my Certificate Authority.
- Revoked the old certificate.
- Installed the new key and certificate on the web server.
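In rough command-line form, the rebuild and re-key steps looked something like the following. Version numbers, paths, and filenames here are illustrative, not my actual values, and your CA will have its own CSR requirements:

```bash
# Build the latest stable Nginx against the patched OpenSSL
wget https://www.openssl.org/source/openssl-1.0.1g.tar.gz
tar xzf openssl-1.0.1g.tar.gz
wget http://nginx.org/download/nginx-1.4.7.tar.gz
tar xzf nginx-1.4.7.tar.gz
cd nginx-1.4.7
./configure --with-http_ssl_module --with-openssl=../openssl-1.0.1g
make && sudo make install

# Re-generate the web server's private key and create a new CSR
# to send to the CA for reissuance
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
```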
What a pain! The good news (I guess, kind of?) is that even if my private key had been compromised and traffic from the site intercepted, an attacker should still not be able to decrypt the data because the CodeWatch site only supports PFS ciphers and has been configured this way from the beginning.
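For anyone wanting the same protection, restricting Nginx to PFS cipher suites boils down to an ssl_ciphers line that only offers (EC)DHE key exchange. A minimal sketch (not my exact cipher string):

```nginx
# Only offer suites with (EC)DHE key exchange, so session keys are
# never derivable from the server's long-term private key
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256;
```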
-
Web Services Penetration Testing with soapUI, Burp, and Macros
I test web services fairly infrequently compared to “standard” web applications or network penetration tests. I guess organizations are still trying to get their arms around general web application security, or are oblivious to the risk of attacks at the web services layer, unaware of the high potential for remote code execution among other security risks. Because I perform so few web services penetration tests, it often takes me a little while to refresh myself on how I like to set up the environment. I thought others might be in the same boat, hence this post.
There are some other good posts and resources out there for web services testing, but I thought I would go into a little more detail on some things that I’ve seen and the steps necessary to automate a portion of the testing. This post is geared towards testing REST-based web services, though some of the steps and approaches apply to SOAP-based web services testing as well. I will be using soapUI and Burp Suite Pro, along with Burp macros, to facilitate the testing.
The first thing that needs to be done is to set up soapUI to use Burp as the proxy for all connections. This can be done by navigating to File->Preferences->Proxy Settings. My setup is currently listening on the loopback on port 8083 (make sure to enable the proxy as well):

Next, I need to create a project by navigating to File->New soapUI Project, providing a project name and the associated WADL file (if you are testing SOAP based web services, you would use a WSDL file). I also want soapUI to create a test suite for me:

The next screen involves configuring the test suite. I like to create a single test case for each method, solely based on how it is laid out in the UI:

Give the test suite an intuitive name. If there are different web services methods, you could consider naming the test suite after that specific resource:

Depending upon how the web services were defined in the description language, you might be prompted about test steps being unique. In this case, I just accept the names that soapUI selects by default. If the web services layer can be accessed over HTTP, then you are pretty much ready to go for testing purposes now; however, if the web services must be accessed over HTTPS (hopefully this is the case), then there are a few more steps.
First, expand the test suite and then expand the “Test Steps.” Double click on one of the steps and at the top you should see the actual host/port to which you are connecting. Click on this drop down and select “[edit current]” if using HTTPS:

This should be edited to use HTTP instead:

Without the above change, the SSL connection from soapUI to the Burp proxy fails because Burp is MitM’ing the connection. There is probably some way to export the keys/certificates used by Burp and import them into soapUI to avoid this, but the method I use here winds up being easier for me. Once you have edited the link to use HTTP, you will see the HTTP version as an option in the drop downs for the other test methods. You can either double click each one, click on the drop down, and select the new HTTP version, or you can click once on each one and change the option in the “REST TestRequest Properties” box below the “Test Steps” (which should be much faster):

Now, you need to setup a proxy in Burp that sends communication received over HTTP from soapUI to the web service over HTTPS. In the Burp “Proxy” tab, go to the “Options” sub tab and click the “Add” button. Once again, for this example I am listening on port 8083 on the loopback:

Click on the “Request handling” tab to ensure communication to the web service is over HTTPS by selecting “Force use of SSL.” In this example, I am using test.example.com and the remote port is 8443:

At this point, we can send requests from soapUI, intercept and modify them in Burp, and then get a response from the web service. If the web service doesn’t require authentication (unlikely and scary if true), then you can now perform all the testing you want with and through Burp. Or, if the web service is using Basic, Digest, or NTLM authentication, you can configure Burp to automatically authenticate within Options->Connections->Platform Authentication. However, if the web service has implemented an alternative way of authenticating and granting access to the web service method calls, then you are going to need to create some macros that automatically check whether the connection is authenticated and, if not, re-authenticate each time before performing the test. This is especially important if you want to use Burp’s Scanner and Intruder tools.
The rest of this post is going to focus on one process flow that I’ve seen and that seems to be common for authenticating access to web services. In this particular case, the authentication service is the Jasig Central Authentication Service (CAS). The flow of the authentication service is basically as follows (a hypothetical request/response trace is provided after the list):
- Access form based web login and receive a Session ID token,
- POST credentials to the web site, including the Session ID token,
- Attempt to access the REST service,
- If the access ticket is current, return the results of the REST service method call; if not, redirect to the login page to generate a new Session ID,
- Login page redirects (without the need for credentials) to the authentication service, including a ticket ID in the redirect URL,
- Authentication service accepts the ticket generated by the login page and grants access to resources (often for a very limited window, typically between 5 and 10 seconds), then redirects to the requested web service method,
- Web service method is finally called and results are obtained.
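To make that concrete, here is a hypothetical trace of the round trips. The hostname, paths, cookie, and ticket values are all placeholders (CAS conventionally issues service tickets with an ST- prefix):

```
1. GET  /login                       -> 200, Set-Cookie: JSESSIONID=ABC123
2. POST /login (creds + JSESSIONID)  -> 302, authenticated to the login service
3. GET  /path/to/resource/about      -> 302 to /login (ticket not current)
4. GET  /login                       -> 302, Location: /auth?ticket=ST-99-xyz
5. GET  /auth?ticket=ST-99-xyz       -> 302 to /path/to/resource/about
                                        (ticket accepted; ~5-10 second window)
6. GET  /path/to/resource/about      -> 200, method results
```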
What we need to build is a macro that first checks to see if we are authenticated and, if not, performs the above steps to re-authenticate. This is all the more important because, as described above, access is typically granted for a very small window of 5 to 10 seconds. The first step is to work through this process within the browser: connect to the login page, enter valid credentials, and then make a REST call. The next step is to create a macro out of the requests/responses produced by this test.
To start, navigate to Options->Sessions, and then click the “Add” button in the “Session Handling Rules” section. Give the macro a name that makes sense:

Now, set the scope for the macro rule in the “Scope” tab. For web services in particular, since I will be performing most of the testing via soapUI, I select all but “Extender” in the “Tools Scope” options. I need the authentication to be automatic no matter what I am doing. I also like to set the scope to custom to make sure the rule only applies in the specific instance in which I want it to apply:

Then I need to actually set the scope. For this example, everything is over HTTPS, the host is test.example.com, the port being used is 8443, and the REST URL is always prefixed with “/path/to/resource.” So, when you click the “Add” button to include a new item in scope, you get:

Now we are ready for the meat of the macro. This macro is intended to check whether the session is valid. So back on the “Details” tab of the main session handling rule editor, navigate to the “Rule Actions” section, click the “Add” button, and add a rule to “Check session is valid.”

Edit this rule and configure it to run a macro to validate the session. This will require you to “Add” a macro for the check. When you click “Add”, select the request that you previously made in the browser after authentication to the test web services method. In several instances, there has been an “/about” method that returns general information about the API or service and that is what I have used in this example. Then provide a meaningful name:

Click on the “Configure item” button to edit actions that are taken by the macro and select the option to “Use cookies…” as we want to use any cookies we already have in the initial request because we might still be authenticated:

Click OK to save this individual macro. Next, we need to tell Burp what to look for in the response to that initial request to determine whether we have valid credentials. There are many options here, but this example assumes that the response body will contain the literal (case insensitive) string “test1.1b” if the session is still valid:

As shown in the screenshot above, if the session is invalid, we want to run another macro. This macro will be used to re-authenticate to the web service. The first step is to click “Add” and create another macro. The next step is to select all of the applicable requests made during the browser test. In this example, it follows the flow listed above: a request is made to the login page, credentials are submitted, a request is made to the REST method, a request is made to the login service to obtain a ticket for calling REST methods, a request is made to the auth service to create the ticket, and then a final request is made to the test REST method:

The first request should be the initial connection to the login form page. Click on the “Configure item” button for this request, and select “Add cookies…” We select this option because we want to add the initial Session ID cookie received in the response to the second request which includes submitting our credentials:

The next request to configure is the actual POSTing of credentials to the login page. Note that the Session ID received in the initial request to the login page is included in the POST:

This request needs to be configured such that the submitted data is NOT URL encoded, received cookies are added to the cookie jar, and previously provided cookies are used in the initial request. This ensures that we will continue to use the Session ID provided, but if a new one is generated for some reason, we will use that one in future requests as it is the most recent:

The next step is to make a request to a sample/test REST method (for which we will add and use cookies):

This is followed up with a request to the login page using a new Session ID generated in the above request to the REST method. In this case, the Session ID is used as part of the URL:

We need to configure this request. We want the Session ID provided in the response to the REST method to be used as the jsessionid parameter in the request to the login page. We want to add any cookies received in the request to the cookie jar and use any cookies already in the cookie jar in the request. In order to use the previously provided Session ID as part of the URL, we need to configure the parameter to be derived from the immediately preceding response and not to be URL encoded:

In addition, as seen in the above screenshot, the response to this request includes a redirect to the auth service, which contains a ticket that has been generated for authentication. The response includes something along the lines of:
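A hypothetical example of such a response (placeholder host, path, and ticket value):

```
HTTP/1.1 302 Moved Temporarily
Location: https://test.example.com:8443/auth?ticket=ST-1234-abcdefg
Content-Length: 0
```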

We need to grab this ticket value so that it can be included in the request to the auth service, ensuring we use a new and valid ticket to obtain access. We do this by clicking the “Add” button in the “Custom parameter locations in response” section. I use the same name as what is sent in the response, and pull out the ticket value by capturing everything between “?ticket=” and the next header, which is “Content-Length”:

Next, we make a request to the auth service using the ticket provided by the login service. This should result in the granting of a ticket, and thus access to all the REST methods for some short period of time:

For this request to succeed, we need to configure it to use the response from the request to the login service as the parameter value within the URL in the request to the auth service:

This should be about it in terms of this re-authentication macro. To recap, we:
- Made a request to the login page and received a Session ID,
- Submitted our credentials and the Session ID to the login page,
- Made a request to a sample/test REST method,
- Made a request to the login page in order to obtain a ticket; part of this request included a new Session ID generated in the request to the sample/test REST method,
- Made a request to the auth service using the ticket provided by the login service, thus granting us access for a limited period.
I actually included a follow-up request to the sample/test method when originally creating this demonstration, but I don’t believe it was necessary.
The macro is complete and it is safe to click “Ok.” Two more check boxes need to be checked as part of the session handling rule: “Update current request with parameters matched from final macro response” and “Update current request with cookies from session handling cookie jar.” The rule is now almost complete, and you can click “Ok” in the rule editor window. You should now be back in the main window for the rule (where the name/scope/etc. are edited). The last step is to make sure that, as part of this rule, cookies from the session handling cookie jar are used in requests:

This rule should be configured to essentially update all cookies:

The session handling rule is ready. Now we need to prime Burp to use the automated Scanner. Go back to soapUI and double click on a “Test Step” (aka REST method/service call). There will probably be options for adding values to parameters, which you should do, and then a green arrow to click and generate a request:

Input data into the parameters and send the request for each REST method. Now, you can go into the Burp “Target” tab, right click on the root resource path, and select “Actively scan this branch.” In addition, I suggest sampling some juicy-looking methods by right clicking the actual full REST URL in the “Target” or “Proxy” tab and sending it to Intruder. Make sure you also manually test all (or at least a good number) of the methods by editing the parameter data within soapUI, clicking the request button, setting Burp’s proxy to intercept and modify the requests, and observing the responses.
I hope this was helpful. Happy hunting!
-
Automate WAF Bypass with Burp
I read an article from a Fortify security researcher earlier this week that provided a very simple and effective way to bypass some Web Application Firewalls (WAFs). The article can be found here. After reading it, I updated my Burp configuration to automatically take advantage of this design flaw and thought I would share the simple approach with my readers (if you have been using Burp for a while, you will likely already know how to do this).
The bypass relies on adding HTTP headers to each request we make to the application. This can be done simply by adding some rules in the proxy options. First, navigate to the “Proxy” tab, and then click on the “Options” sub tab. If you scroll down, you will see the “Match and Replace” section. Click on the “Add” button:

Now, all you have to do is add the match. The “Type” field should be “Request Header,” as you want to add a header. If you leave the “Match” field blank, then instead of looking for a match, Burp will simply add the header you create in the “Replace” field. In the “Replace” field, type in one of the headers that can be used to bypass the WAF.
The list of headers includes, but is probably not limited to:
- x-originating-IP: 127.0.0.1
- x-forwarded-for: 127.0.0.1
- x-remote-IP: 127.0.0.1
- x-remote-addr: 127.0.0.1
Add each of these as matches, and check the box as shown in the image above when you want them enabled and sent in each request. An example is provided below:
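Once the rules are enabled, every request proxied through Burp will carry the spoofed headers. A hypothetical modified request (placeholder host and path) would look something like this:

```
GET /admin HTTP/1.1
Host: target.example.com
x-originating-IP: 127.0.0.1
x-forwarded-for: 127.0.0.1
x-remote-IP: 127.0.0.1
x-remote-addr: 127.0.0.1
```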

That’s it! Crazy simple huh?
-
CodeWatch Update – Two Factor Access
I updated the authentication features of CodeWatch over the weekend to support two-factor authentication (2FA). This is an update I’ve wanted to make for a while but never seemed to have the time due to other commitments or projects. I waited a few days so that I could test before posting. If you have an account, you can now go into the “Account” tab and click the “Two Factor” option to configure this setting.
What this means is that you can add a second factor of authentication to your account. For more information on 2FA, see this Wikipedia entry. The system currently supports two 2FA options: SMS-based phone 2FA and email-based 2FA. All you have to do is enable 2FA for your account, configure email or phone based 2FA, and then enter a PIN. The PIN can really be any value and might be a little overkill (think second password, but without the same complexity and length requirements).
I am using the Time-based One-Time Password (TOTP) algorithm to create the 2FA token. You can read more about it here. I leveraged information from the PHP site here, as well as this GitHub project here, for the implementation in my app; a sketch of the token generation follows the list of properties below.
Additional security properties include:
- The token length is 8 digits.
- I am using the openssl_random_pseudo_bytes function with a 32 byte length value to generate a random key to seed each TOTP token.
- The token is only valid for 180 seconds. I felt this was a reasonable time to receive and enter the token.
- The token is stored in memory and is single use only; it is removed after the first successful usage or after the 180 seconds has expired.
- The PIN used in combination with the token is stored as a bcrypt hash, using a salt unique to each account and a work factor.
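For the curious, here is a minimal sketch of token generation with those parameters (8 digits, 180-second step, 32-byte random key). It follows the standard RFC 6238/4226 construction; the function names are mine, and this is not CodeWatch’s actual code:

```php
<?php
// Minimal TOTP sketch: 8 digits, 180-second step, random 32-byte key.
// Standard RFC 6238/4226 construction; illustrative only.

function generateTotpKey() {
    // 32 bytes of CSPRNG output to seed the token
    return openssl_random_pseudo_bytes(32);
}

function totp($key, $digits = 8, $period = 180) {
    // Counter = number of complete periods since the Unix epoch
    $counter = (int) floor(time() / $period);

    // Pack the counter as a 64-bit big-endian integer and HMAC-SHA1 it
    $binCounter = pack('N', 0) . pack('N', $counter);
    $hash = hash_hmac('sha1', $binCounter, $key, true);

    // RFC 4226 dynamic truncation: low nibble of the last byte is an
    // offset; take 31 bits starting there
    $offset = ord($hash[19]) & 0x0f;
    $code = ((ord($hash[$offset])     & 0x7f) << 24) |
            ((ord($hash[$offset + 1]) & 0xff) << 16) |
            ((ord($hash[$offset + 2]) & 0xff) << 8)  |
            ( ord($hash[$offset + 3]) & 0xff);

    // Reduce to the desired number of digits, left-padded with zeros
    return str_pad($code % pow(10, $digits), $digits, '0', STR_PAD_LEFT);
}

$key = generateTotpKey();
echo totp($key); // e.g. "04829301"
```

Verification simply regenerates the token server-side for the current counter and compares; the single-use and 180-second expiry properties come from discarding the token after use, as described above.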
I decided to use Twilio for SMS based integration. I had never used Twilio before but was happy to find that for my simple purposes it was crazy easy. It took maybe a few minutes to figure out, implement, and test.
-
Introducing Deadrop
I am pleased to release Deadrop, a secure file upload and download utility. I know there are probably already sites/utilities that provide this, but I wanted to build it a) because I could and b) because I trust my own stuff more than any cloud provider. Currently, file uploads are limited to 25 MB due to hard drive space constraints on the server.
Deadrop was built to be secure. To understand the security of the system you must first understand the basics of the architecture, which involves a front-end web server (the only server accessible from the Internet), a backend SOAP-based web services server, and a backend database server. The front-end takes the user-submitted file over SSL, encrypts it, and sends it to the web service over SSL, along with the encryption IV, file hashes, and other information detailed below. The web service server stores this file in a protected directory and records details about the transaction in the database for future access.
The details stored in the database include:
- Timestamp at which point the file was stored. This is for some future potential features like deleting the file after X period of time.
- Original file name.
- A SHA-256 HMAC of the file using the user provided password as the key.
- A SHA-256 hash of the file.
- The IV used in the file encryption process.
- A hashed version of the provided password.
- The IV used to create the bcrypt hashed password.
- The original size of the file.
- Two fields to customize when the file gets deleted.
- Two fields to act as counters to determine when to delete the file.
The web server front end creates an IV for the encryption process using mcrypt. Then, the mcrypt encrypt function is used to encrypt the data (AES-256) with the generated IV and the password provided by the user. This data is then base64 encoded so that it can be sent via SOAP. In addition to the encoded and encrypted data, the web server also sends the web service an HMAC of the file (SHA-256) using the provided password as the key, and a SHA-256 hash of the file. The data, HMAC, hash, IV, password, file name, user-supplied options, and original size of the file are all sent to the web service. The web server then uses shred to securely delete the temporary file created during the upload process. I am using the ext4 filesystem set to data=writeback, but it is on a RAID array, so I realize the file is still recoverable for a short period of time. Unfortunately, this temporary file is not what gets encrypted; the file content data is what is encrypted and sent to the web service. My research into PHP found that you cannot prevent an uploaded file from hitting your upload directory without patching the PHP source itself (see here and here). To combat this limitation, I use shred to securely delete the one place the data is ever stored unencrypted. A rough sketch of these front-end steps follows.
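This is illustrative code, not Deadrop’s source: the variable handling is mine, and the exact cipher mode and key handling may differ (mcrypt’s Rijndael-128 with a 32-byte key is AES-256):

```php
<?php
// Illustrative sketch of the upload path, not Deadrop's actual source.
$tmpFile  = $_FILES['upload']['tmp_name'];
$data     = file_get_contents($tmpFile);
$password = $_POST['password'];

// Random IV for the encryption step
$ivSize = mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC);
$iv     = mcrypt_create_iv($ivSize, MCRYPT_DEV_URANDOM);

// Encrypt with the user-supplied password as the key (mcrypt pads the
// key to a valid length), then base64 encode for the SOAP transport
$encrypted = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $password, $data,
                            MCRYPT_MODE_CBC, $iv);
$payload   = base64_encode($encrypted);

// HMAC of the plaintext keyed with the password; this later doubles
// as the stored filename on the web service side
$hmac = hash_hmac('sha256', $data, $password);

// ... send $payload, $hmac, $iv, filename, options, filesize via SOAP ...

// Securely delete the only unencrypted copy on disk
shell_exec('shred -u ' . escapeshellarg($tmpFile));
```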
The web service receives the data and writes the data for the file to disk using the HMAC as the filename. This ensures that the exact same file can be uploaded to the server with a different password, resulting in a unique file name. The system will prevent uploading an identical file with an identical password.
This is where the hashed version of the file is used: the system queries the database using the HMAC, and if any results are returned, it hashes the newly provided password using the IV of the matched entry; if this matches, the upload is rejected. The password is hashed using bcrypt, an mcrypt-generated IV, and a work factor of 13. For a great description of hashing and storing passwords, see this article; it provides a good overview of what I am doing and of safe password storage in general. The hashed version of the password, along with the associated IV, user-supplied options, file HMAC, file hash, and file IV, is then stored in the database. That concludes the architecture and process for uploading a file. Now, let’s talk about security at the transport layer. The front-end web server is configured like so (a hypothetical Nginx rendering of these settings follows the list):
- TLS is the only supported protocol (TLS 1.0, 1.1, and 1.2).
- I’m using 4096-bit DH parameters and a 2048-bit certificate signed with SHA-256.
- The web server is configured to only support AES-256 ciphers with Perfect Forward Secrecy (PFS).
- The web server is configured to use HTTP Strict Transport Security (HSTS) and automatically redirects all requests to SSL as an added security measure.
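For illustration, those settings might be rendered in Nginx like this (placeholder server name and paths; the cipher string is abbreviated, not my literal config):

```nginx
server {
    listen 443 ssl;
    server_name deadrop.example.com;               # placeholder name
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_dhparam /etc/nginx/ssl/dhparam-4096.pem;   # 4096-bit DH parameters
    ssl_prefer_server_ciphers on;
    # PFS key exchange, AES-256 only (cipher string abbreviated)
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
    # HSTS
    add_header Strict-Transport-Security "max-age=31536000";
}

# Redirect all plain HTTP requests to HTTPS
server {
    listen 80;
    return 301 https://$host$request_uri;
}
```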
You can validate the above information by submitting my site to Qualys SSL Labs if you like. The web service server is also configured to only allow TLS and AES-256 bit algorithms. In addition, database traffic to MySQL is encrypted using SSL as well. The file data is also encrypted when in transit inside the SSL encrypted connection to and from the web service server.
Access to a file requires passing the HMAC as the ‘file’ parameter in the Deadrop link. The user will then be prompted for the password. Once the password is provided, the HMAC and password are sent to the web service (again, all over SSL). The HMAC is used to identify the file in the database, and the password is then used with the stored IV; if the bcrypted version of the provided password combined with the stored IV matches the hashed version of the password stored in the database, then the file is read back and sent as a SOAP message to the web server (along with the file IV, original file size, and file name). The web server uses this to base64 decode the data, decrypt the file, and send it back using the ‘application/octet-stream’ MIME type and the original file name. The original file size is required because mcrypt adds extra padding to the file, which can be removed by using `fputs($outstream, $data, $filesize)`, sending back only the same size file as was provided. When the file is sent back to the browser, the PHP output stream is used, ensuring the data is not written to disk. A sketch of this download path follows.
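Again, this is illustrative rather than Deadrop’s actual code; the `$response` field names are made up for this example:

```php
<?php
// Illustrative sketch of the download path; $response holds the SOAP
// result, and its field names are hypothetical.
$data = base64_decode($response->payload);
$data = mcrypt_decrypt(MCRYPT_RIJNDAEL_128, $password, $data,
                       MCRYPT_MODE_CBC, $response->iv);

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $response->filename . '"');

// php://output streams straight to the client, so the decrypted data
// never touches disk; writing only the original file size strips the
// padding mcrypt added during encryption
$outstream = fopen('php://output', 'wb');
fputs($outstream, $data, $response->filesize);
fclose($outstream);
```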
In addition to the transport layer and disk layer encryption, there are two user options when uploading a file that I thought might be useful for sensitive data. The first option determines how many times the file can be downloaded before it is deleted, and the second determines how many consecutive failed download attempts are allowed before the file is deleted. The second option exists to prevent brute force attacks: after X incorrect passwords, the data is wiped. In either case, once the limit is reached, the database row containing all information about the file is deleted and the file itself is shredded.
So far, I have successfully tested this out using several types of files:
- Adobe PDF
- Text (Notepad compatible)
- Excel (.xls, .xlsx, .xlsm, and .xlsb)
- Word (.doc and .docx)
- PowerPoint (.ppt and .pptx)
- Executable (.exe)
- Compressed (.zip, .tgz, and .bz2)
I welcome any feedback on limitations or flaws in the design (outside of the known issue of the original file being written as a temp file in an unencrypted state, as this is a limitation of PHP itself). Or, if you just have an idea on how to improve it in some way, feel free to comment below.
Note: If you upload a file, make sure you keep track of the link to access it as well as the password itself. All data is either hashed, which is one-way, or encrypted in a way that prevents even me from decrypting it. So if you forget the URL or the password, the file is, for all intents and purposes, no longer accessible.
UPDATE 2013-01-04: This post was updated to reflect the latest version. The separate SHA-256 hash of the file was not needed in combination with the HMAC. I’m not sure what I was thinking there, as the HMAC already let me determine whether an upload was a unique combination of file + password (duh!). The SHA-256 hash is no longer taken, and that field has been removed from the database.