Common Attack Pattern Enumeration and Classification
A Community Resource for Identifying and Understanding Attacks
This attack targets the encoding of the URL combined with the encoding of the slash characters. An attacker can take advantage of the multiple ways of encoding a URL and abuse the interpretation of the URL. A URL may contain special characters that need special syntax handling in order to be interpreted. Special characters are represented using a percent character followed by two hexadecimal digits representing the octet code of the original character (%HEX-CODE). For instance, the US-ASCII space character would be represented as %20. This is often referred to as escaped encoding or percent-encoding. Since the server decodes the URL from the requests it receives, it may restrict access to some URL paths by validating and filtering out the URL requests it received. An attacker will try to craft a URL with a sequence of special characters which, once interpreted by the server, will be equivalent to a forbidden URL. It can be difficult to protect against this attack since the URL can also contain other formats of encoding, such as UTF-8 encoding, Unicode encoding, etc.
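As a minimal sketch of the underlying problem (using Python's standard urllib.parse module; the "/admin" path is a hypothetical forbidden resource, not taken from the attack description above), a traversal sequence can be double percent-encoded so that it survives a single decoding pass:

```python
from urllib.parse import unquote

# "%2E" is the percent-encoded form of "." and "%2F" of "/";
# "%25" encodes the "%" character itself. Double-encoding "../"
# therefore yields "%252E%252E%252F".
crafted = "%252E%252E%252Fadmin"

first_pass = unquote(crafted)       # one decode pass -> "%2E%2E%2Fadmin"
second_pass = unquote(first_pass)   # second pass -> "../admin"

print(first_pass)
print(second_pass)

# A filter that looks for the literal "../" either before decoding or
# after only one decode pass never sees the traversal sequence.
assert "../" not in crafted
assert "../" not in first_pass
assert "../" in second_pass
```

The same idea generalizes to mixing encodings (UTF-8, Unicode) so that different decoders in the request path disagree about what the URL contains.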
Attack Example: Combined Encodings CesarFTP
Alexandre Cesari released a freeware FTP server for Windows that fails to provide proper filtering against multiple encodings. The FTP server, CesarFTP, included a Web server component that could be attacked with a combination of the triple-dot and URL-encoding attacks.
An attacker could provide a URL that included a string like
This is an interesting exploit because it involves an aggregation of several tricks: the escape character, URL encoding, and the triple dot.
Skill or Knowledge Level: Low
An attacker can try special characters in the URL to bypass the URL validation.
Skill or Knowledge Level: Medium
The attacker may write a script to defeat the input filtering mechanism.
An attacker can manually inject special characters into the URL of a request and observe the results of the request.
Custom scripts can also be used. For example, a good script for verifying the correct interpretation of UTF-8 encoded characters can be found at http://www.cl.cam.ac.uk/~mgk25/ucs/examples/UTF-8-test.txt
Automated tools such as fuzzers can be used to test the URL decoding and filtering.
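A fuzz-style probe along these lines could enumerate encoded variants of the same traversal sequence and report which ones a filter accepts (a sketch only; the "/files/" path and request layout are illustrative, and no request is actually sent here):

```python
# Encoded variants of the "../" traversal sequence. Each entry is a
# different representation of the same logical characters; a decoder
# and a filter may disagree about which ones are equivalent.
variants = [
    "../",               # literal
    "%2e%2e%2f",         # single percent-encoding
    "%2e%2e/",           # partially encoded
    "..%2f",             # partially encoded
    "%252e%252e%252f",   # double percent-encoding
    "%c0%ae%c0%ae/",     # overlong UTF-8 form of "." (invalid, but
                         # historically accepted by some decoders)
]

for v in variants:
    # In a real probe each candidate would be placed in a request, e.g.
    #   GET /files/{v}{v}secret.txt HTTP/1.1
    # and the response status observed. Here we only print candidates.
    print(f"GET /files/{v}{v}secret.txt")
```

Variants that return different status codes than the literal form indicate where the filter and the decoder disagree.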
If the first decoding process has left some invalid or blacklisted characters, that may be a sign that the request is malicious.
Traffic filtering with an IDS (or proxy) can detect requests with suspicious URLs. An IDS may use signature-based identification to reveal such URL-based attacks.
Assume all input is malicious. Create a white list that defines all valid input to the software system based on the requirements specifications. Input that does not match against the white list should not be permitted to enter the system. Test your decoding process against malicious input.
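A white-list check of this kind might look as follows (a sketch; the set of permitted paths and the validate_path helper are hypothetical, not a standard API):

```python
from urllib.parse import unquote

# Hypothetical white list derived from the requirements specification:
# only these exact, already-decoded paths may be served.
ALLOWED_PATHS = {"/index.html", "/images/logo.png", "/docs/readme.txt"}

def validate_path(raw_path: str) -> bool:
    """Decode the request path exactly once, then accept it only if the
    decoded result matches the white list exactly."""
    decoded = unquote(raw_path)
    # A '%' surviving one decode pass suggests double encoding; treat
    # the request as suspicious rather than decoding again.
    if "%" in decoded:
        return False
    return decoded in ALLOWED_PATHS

print(validate_path("/index.html"))             # allowed
print(validate_path("/%2E%2E/etc/passwd"))      # decodes to "/../etc/passwd"
print(validate_path("/%252E%252E/etc/passwd"))  # '%' remains after one pass
```

Matching against an exact allowed set sidesteps the equivalence problem entirely: the question is no longer "is this string dangerous?" but "is it one of the few strings we permit?".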
Be aware of the threat of alternative methods of data encoding and obfuscation techniques, such as IP address encoding.
When client input is required from web-based forms, avoid using the "GET" method to submit data, as the method causes the form data to be appended to the URL and is easily manipulated. Instead, use the "POST" method whenever possible.
Any security checks should occur after the data has been decoded and validated as being in the correct data format. Do not repeat the decoding process; if bad characters are left after the decoding process, treat the data as suspicious and fail the validation process.
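The danger of repeating the decode step can be sketched as follows (the "decode until stable" loop is an example of the unsafe pattern, not a recommendation):

```python
from urllib.parse import unquote

payload = "%252E%252E%252F"  # double-encoded "../"

# Unsafe pattern: decode in a loop until the value stops changing.
# The traversal sequence only appears after the second pass, past the
# point where a single-pass filter would have checked it.
value = payload
while "%" in value:
    new = unquote(value)
    if new == value:
        break
    value = new
print(value)  # fully decoded traversal sequence

# Safer pattern: decode exactly once, then fail validation if escape
# characters survive the single pass.
decoded_once = unquote(payload)
is_suspicious = "%" in decoded_once
print(decoded_once, is_suspicious)
```

With the single-pass rule, the double-encoded payload is rejected outright instead of being normalized into the very sequence the filter was meant to block.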
Refer to the RFCs to safely decode URLs.
Regular expressions can be used to match safe URL patterns. However, this may discard valid URL requests if the regular expression is too restrictive.
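For example, a restrictive pattern along these lines (a sketch; the permitted character set is an assumption and, as noted above, may reject URLs that are legitimate for a given application) accepts only plain path segments with no percent escapes:

```python
import re

# Accept an absolute path of one or more segments built from unreserved
# characters only: no "%" escapes, no backslashes, no query strings.
SAFE_URL = re.compile(r"^(/[A-Za-z0-9._~-]+)+/?$")

def is_safe(path: str) -> bool:
    # The character class allows ".", so ".." segments must be excluded
    # explicitly in addition to the pattern match.
    return bool(SAFE_URL.match(path)) and ".." not in path

print(is_safe("/docs/readme.txt"))  # plain path: accepted
print(is_safe("/%2E%2E/secret"))    # percent escapes: rejected
print(is_safe("/../secret"))        # traversal: rejected
```

Because the pattern forbids "%" entirely, encoded and double-encoded variants are rejected before any decoding question arises.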
There are tools to scan HTTP requests to the server for valid URL such as URLScan from Microsoft (http://www.microsoft.com/technet/security/tools/urlscan.mspx).
[R.64.1] [REF-2] G. Hoglund and G. McGraw. "Exploiting Software: How to Break Code". Addison-Wesley. February 2004.
[R.64.2] [REF-3] "Common Weakness Enumeration (CWE)". CWE-20 - Input Validation. Draft. The MITRE Corporation. 2007. <http://cwe.mitre.org/data/definitions/20.html>.
[R.64.3] [REF-35] Gunter Ollmann. "URL Encoded Attacks - Attacks using the common web browser". CGISecurity.com. <http://www.cgisecurity.com/lib/URLEmbeddedAttacks.html>.
[R.64.4] [REF-36] T. Berners-Lee, R. Fielding and L. Masinter. "RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax". January 2005. <http://www.ietf.org/rfc/rfc3986.txt>.
[R.64.5] [REF-37] T. Berners-Lee, L. Masinter and M. McCahill. "RFC 1738 - Uniform Resource Locators (URL)". December 1994. <http://www.ietf.org/rfc/rfc1738.txt>.
[R.64.6] [REF-38] "HTML URL Encoding Reference". URL Encoding Reference. W3Schools.com. Refsnes Data. <http://www.w3schools.com/tags/ref_urlencode.asp>.
[R.64.7] [REF-39] "The URLEncode and URLDecode Page". Albion Research Ltd. <http://www.albionresearch.com/misc/urlencode.php>.
[R.64.8] [REF-18] David Wheeler. "Secure Programming for Linux and Unix HOWTO". 5.11.4. Validating Hypertext Links (URIs/URLs). <http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/filter-html.html#VALIDATING-URIS>.