Fuzzing PHP’s unserialize Function

Recently, the PHP development team have decided that they will no longer consider bugs in the implementation of the unserialize function to be security relevant. In this post I’d like to outline why I think this is a bad idea, and provide an easy set up for fuzzing/ongoing QA of unserialize, as the fact that it is a bottomless pit of bugs seems to be part of the motivation for this move.

The argument for the change in categorisation is twofold. Firstly, they state that unserialize is inherently an unsafe function to use on data from an untrusted source. Essentially, they are saying that there is no point in treating bugs in the implementation of unserialize as security-relevant, because the mere fact that an application passes untrusted data to it means that the security of that application has already been compromised. Secondly, they argue that treating unserialize bugs as security relevant encourages developers to use it on untrusted data. In my opinion this is quite a weak point. It is prominently documented that unserialize is unsafe, and if that warning doesn’t get the message across then I very much doubt that a change in categorisation in the PHP bug tracker will.

Let's briefly discuss the first point in a little more detail, as the reality is a bit more nuanced. Often, it is relatively easy to leverage an unserialize of untrusted data without resorting to attacking the unserialize implementation itself. There are some requirements for this though. In particular, the attacker must be able to find a class containing a PHP magic method, e.g. a __destruct function, in the target application or one of the libraries that it makes use of. Also, if the allowed_classes argument to unserialize is provided then the class containing the magic method must be included in this list. These conditions don't always hold. If the application is closed source then the attacker will have no direct way to figure out the name of such classes, and if allowed_classes is the empty list then they won't be able to make use of them anyway.
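
To make the object-injection mechanics concrete, here is a quick sketch (in Python, since that's what the tooling later in this post uses) of what such a payload looks like on the wire. The class name SomeClass and its property are purely hypothetical; in a real attack this would have to be a class from the target application that defines a magic method.

def php_serialized_object(cls, props):
    # PHP's object notation: O:<name len>:"<name>":<prop count>:{<props>}
    # where each property is a serialized key followed by a serialized value.
    body = ""
    for name, value in props.items():
        body += 's:%d:"%s";' % (len(name), name)
        body += 's:%d:"%s";' % (len(value), value)
    return 'O:%d:"%s":%d:{%s}' % (len(cls), cls, len(props), body)

print(php_serialized_object("SomeClass", {"cmd": "id"}))
# O:9:"SomeClass":1:{s:3:"cmd";s:2:"id";}

If SomeClass has a __destruct that does something dangerous with its cmd property, unserializing this string is game over, with no memory corruption required. The point in the write-up linked below is that when the attacker cannot learn a suitable class name, attacking the implementation itself is what remains.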

The above are not merely hypothetical scenarios.  This rather nice write-up documents the exploitation of a vulnerability in PornHub which required the attackers to target the unserialize implementation. As mentioned in the write-up, the fact that the application was closed source prevented them from discovering a class which could be used for object injection.

I think most people will agree with the PHP development team that if an application contains a call to unserialize with untrusted data then it is a bad idea and should be fixed. However, it is also true that treating unserialize bugs like normal bugs unnecessarily exposes PHP application developers, and the users of their applications, to further risk.

The Fun Part

One reason the PHP development team are drowning in unserialize bugs appears to be that they either don't fuzz the function at all after making changes, or that their mechanism for fuzzing it is ineffective. I fully recognize that the functionality that unserialize implements is complex and that adding to it, or modifying the existing code, is always going to come with the risk of introducing bugs. However, I also believe that with a small amount of effort it should be possible to significantly decrease the number of low-hanging fruit that make it to release builds.

Anyway, in the interests of putting my shell scripts where my mouth is, I’ve uploaded a repository of scripts and auxiliary files for building PHP and fuzzing unserialize via AFL. There’s no secret sauce. If anything that’s kind of the point; ongoing sanity checking of the existing unserialize implementation and changes to it could be trivially automated.

The README file explains how to get everything up and running, but in short it should just be a matter of running ./get.sh && ./build.sh && ./fuzz.sh output N. A GNU screen session will be running in the background containing the output of an AFL master instance and N slaves, which you can attach to via screen -r fuzz.

The scripts are straightforward, but there are a few things worth noting for the curious:

  • The driver PHP script loads a string representing the data to be unserialized from a file and passes it directly to unserialize. This should be sufficient to find a significant portion of the bugs that I know of that have previously been fixed in unserialize, e.g. things like this. However, if some extra PHP code is required to trigger the bug then it will not be found, e.g. things like this. It should be relatively easy to extend the fuzzing setup with ideas from other blog posts on the topic, e.g. there are some good ideas in this post by the people who wrote the PornHub exploit mentioned above.
  • One can either build all of PHP with AFL's instrumentation, or just the components that one thinks will be relevant to the unserialize functionality. The advantage of the former is that you're guaranteed AFL can intelligently explore all functionality which may be triggered via unserialize, with the disadvantage that it is slower. The build.sh script currently compiles everything with AFL's instrumentation. If you would like a faster binary you can compile PHP without AFL/ASAN, delete the object files that relate to unserialize, reconfigure the build to use AFL/ASAN and then recompile those object files. I've used this approach in the past and it works quite well, but you need to make sure you recompile all the object files which relate to code that unserialize might trigger, otherwise AFL won't be able to track coverage there.
  • AFL can be provided with a dictionary of tokens which it will use when fuzzing. The dictionary in the repository contains tokens pulled from the unserialize parser and I think it is fairly complete, but if I’ve missed anything let me know and I’ll be sure to include it.
  • The seed files one provides to AFL for fuzzing can have a significant impact on its ability to find bugs, and on how quickly one finds a particular bug. The seed files I have provided have been constructed from examples of valid unserialize usage I've found in the PHP documentation and triggers for previous vulnerabilities (a sketch of how one might generate more is shown after this list). Feel free to let me know if you find a seed file which would be useful to add.
  • I have created this repository by pulling files and configurations from my fuzzing machines as I write this blog post. The goal is to give a faithful representation of the setup I’ve used in the past to find multitudes of unserialize bugs. However, I may have forgotten something, or broken something in the process of pulling it all together for release. As I’m writing this I have started up the fuzz.sh script to ensure everything works, but it’s possible I’ve messed something up and won’t notice. If that does turn out to be the case, let me know! If you’d like to confirm everything is functional then compile PHP version 5.6.11 and you should find this bug within 40 million executions or so.
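
On the seed-file point above, the following sketch generates a tiny corpus covering the main type tags in the serialization format. The directory and file names are arbitrary choices of mine; the payloads themselves are valid serialize() outputs.

import os

# One seed per type tag: integer, float, boolean, null, string, array,
# object and a back-reference, which has been fertile bug territory before.
seeds = {
    "int": b'i:123;',
    "float": b'd:1.5;',
    "bool": b'b:1;',
    "null": b'N;',
    "string": b's:3:"abc";',
    "array": b'a:2:{i:0;i:1;i:1;s:1:"a";}',
    "object": b'O:8:"stdClass":1:{s:1:"a";i:1;}',
    "reference": b'a:2:{i:0;s:1:"a";i:1;R:2;}',
}

os.makedirs("input", exist_ok=True)
for name, data in seeds.items():
    with open(os.path.join("input", name), "wb") as f:
        f.write(data)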

Happy hunting!

(Unrelated side note: I’ll be teaching a 4 day version of ‘Advanced Tool Development with SMT Solvers’ in London in early November. See here for details if you’re interested!)


Fuzzing Language Interpreters Using Regression Tests

At INFILTRATE ’14 I gave a talk on the topic of fuzzing language interpreters. The slides are now available here. The results generated by the system presented, and by subsequent related work, were sufficiently good that my bottleneck quite soon moved from bug discovery to crash triage, which ended up forming the basis for my talk at INFILTRATE ’16.

The question addressed in the talk is ‘How can we fuzz stateful APIs in an efficient manner?’, where ‘efficient’ in this case means we want to reduce the number of wasted tests. For a stateful API, it’s probable that a given API call requires a preceding set of API calls to have set the environment into a particular state before it does anything ‘interesting’. If we don’t make an attempt to set up the environment correctly then we will likely end up with a lot of our tests discarded without exercising new functionality.
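
To make that concrete, consider a stateful file API (illustrative only, and nothing to do with any particular interpreter's internals): the interesting call only does real work once earlier calls have built up the right state, and a regression test hands us that whole recipe for free.

# Fuzzing f.seek or f.read in isolation achieves little; the preceding
# calls are what put the object into a state where they do real work.
f = open("/tmp/example", "w+")  # state: an open, writable file object
f.write("some data")            # state: a non-empty buffer behind it
f.seek(1)                       # the "interesting" call now has something to act on
print(f.read())                 # => "ome data"
f.close()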

As you’ve probably guessed, and as I would assume many other people have concluded before me: a good source of such information is existing code written using the API. In particular, regression tests (and other test types) often provide self-contained examples of how to exercise a particular API call.

The talk itself is in two parts: firstly, a minimal system which works versus ‘easy’ targets, such as PHP and Ruby, and secondly, a more complex system which works versus more difficult targets, such as the JS interpreters found in web browsers.

Given that this talk was two years ago, the ideas in it have evolved somewhat and if you are interested in fuzzing either category of software mentioned above I would offer the following advice:

  1. For the easier targets, compile them with ASAN, extract the tests as mentioned and mutate them using something like radamsa. ASAN plus some batch scripts to automate execution and crash detection (a sketch of such a loop follows this list) is sufficient to reproduce the results mentioned, and also to find bugs in the latest versions of these interpreters. The system mentioned in the slides ended up being overkill for these targets.
  2. For the harder targets, a reimplementation of LangFuzz by Holler et al. is probably a good first port of call. Also, rather than going down the somewhat insane route of hand-coding JavaScript parsing and AST traversal in Go, the esprima and estraverse libraries make the process a lot less painful. My only excuse for not doing this to begin with is along the lines of 'I've just learned Go and I must use it for everything!'. I have too much free time.
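
For point 1, a minimal version of that batch loop might look something like the following. The radamsa invocation is real, but the paths, the ASAN-built ./php binary and the crash-detection-by-grepping-stderr are all assumptions to adapt to your own setup.

import glob
import subprocess

# Mutate extracted regression tests with radamsa, run each mutant under
# an ASAN-instrumented interpreter and save anything that trips ASAN.
tests = glob.glob("extracted_tests/*.php")
crashes = 0
while True:
    for t in tests:
        mutated = subprocess.run(["radamsa", t], capture_output=True).stdout
        with open("testcase.php", "wb") as f:
            f.write(mutated)
        try:
            r = subprocess.run(["./php", "testcase.php"],
                               capture_output=True, timeout=10)
        except subprocess.TimeoutExpired:
            continue
        if b"AddressSanitizer" in r.stderr:
            crashes += 1
            with open("crash_%d.php" % crashes, "wb") as f:
                f.write(mutated)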

On the topic of program analysis: if you are interested in learning about SMT-based analysis engines the early-bird rate for the public editions of ‘Advanced Tool Development with SMT Solvers’ is available for another week. The details, including a syllabus, are here, and if you are interested drop me an email via contact@vertex.re.

Better Interpreter Fuzzing with Clang

Last weekend I decided to finally put some effort into investigating Clang, the C/C++/Objective-C frontend for LLVM. Clang is interesting as it is not only designed to provide efficient parsing and processing during compilation, but also as a platform for program analysis tools. Other options for static analysis of C/C++ exist, e.g. GCC extensions, cscope hackery, and the various things built on top of CIL (e.g. Frama-C). However, Clang is attractive to me as it supports both C and C++, has a very permissive license, includes a well-designed, extensive and documented API, and is itself written in C++ (which might be a negative point depending on your view of the world but I like it :P). It also appears to have the backing of both Apple and Google, with a number of their engineers contributing regularly to both LLVM and Clang. All in all, it feels like a maturing project with a skilled and active community behind it.

Clang supports three approaches to writing analysis tools. Using libclang is the encouraged method for most tasks and is probably what you’re looking for if all you want to do is iterate over ASTs and pull out data. The next option is to write a plugin for Clang to be run during the build phase of a project. This is slightly more involved and doesn’t have the benefit of allowing you to use a Python/whatever wrapper around the API. However, if you want to do things like automated source rewriting, or more involved static analysis as part of a build, then this is where you want to be. The final approach is using libtooling. I haven’t looked into writing anything based on this yet but it appears to offer the benefits of writing a standalone tool provided by libclang with the power of full AST control offered by a plugin. The only potential downsides, in comparison to libclang, that I can see are that you cannot write your code in a higher level language and it does not have access to as convenient an API.

After a bit of digging around in documentation and source, I think libclang offers the most gentle introduction to Clang as a program analysis platform. It’s not as powerful as the other two methods but has a stable API (check out include/clang-c/Index.h for the C interface and bindings/python/clang/cindex.py for the Python bindings) and supports a nice visitor based approach for writing tools that traverse an AST. From reading over the above code, and following Eli Bendersky’s introduction to using the Python bindings, you should be able to get up and running fairly easily. As a side note, libclang has progressed somewhat since that blog post so the limitations related to missing cursor kinds are now largely alleviated. There are still some missing features but we’ll get to that later.

A relevant question about now is probably “What the hell has all of that got to do with interpreter fuzzing?”. In general, not a whole lot =) However, when trying to think of some small problem I could tackle to get a feel for libclang I recalled an issue I previously encountered when fuzzing PHP. When fuzzing an interpreter there are typically two high level stages that are somewhat unique to interpreter fuzzing; 1) Discover the functions you can call and 2) Discover how to correctly call them e.g. the number and types of their arguments.

In this regard, PHP is a slightly easier target than say, Python or Ruby, because on the first point it has a lot of global functions that can be called without instantiating a class or figuring out what module to import. These functions can be easily discovered using get_defined_functions. Using this approach it is simple to enumerate a fairly large attack surface that calls directly into the C backend (somewhere upwards of 500 functions in a default configuration) and get fuzzing.
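
For illustration, pulling that list out is a one-liner; here it is wrapped in Python (assuming a php binary on the PATH) so the result can feed straight into a fuzzer:

import subprocess

# Ask the interpreter itself for its built-in (C-backed) functions
out = subprocess.check_output([
    "php", "-r",
    'echo implode(PHP_EOL, get_defined_functions()["internal"]);',
])
functions = out.decode().splitlines()
print("%d built-in functions to fuzz" % len(functions))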

Point 2 remains a problem however. Each PHP function implemented in the C backend takes a number of typed parameters. Some type conversion can and does take place, which means you can specify a boolean when an int is required or other such things. Naturally, there are some types that are completely incompatible, e.g. a function callback is required and you provide an integer type. If you get this wrong the function will bail out without performing any further processing. As well as incorrect argument types, specifying an incorrect number of arguments will also often result in the function execution terminating before anything interesting happens. Precise implementation details can be found in the zend_parse_va_args and zend_parse_arg_impl functions of Zend/zend_API.c.
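
For reference, the most common format specifiers are summarised below. This is my own summary from reading zend_parse_arg_impl, so double-check it against the source for your PHP version:

# Common zend_parse_parameters() format characters and what they expect
SPEC = {
    "s": "string",
    "l": "long",
    "d": "double",
    "b": "boolean",
    "a": "array",
    "o": "object",
    "z": "zval (any type)",
    "f": "function/callable",
    "|": "marker: remaining arguments are optional",
}
# e.g. "ss|l" means: two strings, optionally followed by a long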

When fuzzing PHP we are then left with a bit of a conundrum. We can easily discover a substantial attack surface but we still need to figure out the types and number of arguments that each function expects. One solution is to just play a numbers game. Fuzzing an interpreter is fast… really fast. It is also easily distributed and instrumented. So, if we just guess at the types and parameters we’ll probably be correct enough of the time [1]. This has a slight issue, in that there is going to be a bias towards functions that take a smaller number of arguments for purely probabilistic reasons. An improvement is to just parse the error messages from PHP which give an indication as to why the function bailed out e.g. incorrect types or incorrect number of arguments. This is all kind of ugly however and it would be nice if we could just know up front the correct argument types and the exact number required.

Enter Clang! An efficient solution for this problem is simply to parse every C file included in the PHP build, find all calls to zend_parse_parameters, extract the format string that specifies the arguments and their types, and then relate this information back to the function names discovered via get_defined_functions. It's a fairly straightforward task on the PHP code base and could probably be achieved via regular expressions and some scripting. That said, using libclang we can come up with a much tidier and less fragile solution. Before I delve into the solution I should mention that I've uploaded the code to GitHub under the name InterParser, minus the fuzzer, should you want to try it out for yourself, extend it or whatever.

I’ll explain the code using questions for which the answers were not immediately obvious from the Clang documentation or are things that were interesting/surprising to me.

How do I run my libclang based tool on all files in a build?

The solution to this is also a solution to the next problem so I’ll deal with it there!

How do I know the correct compiler arguments to use when parsing a file?

The clang_parseTranslationUnit function of Index.h allows you to specify the arguments that would be passed to the compiler when parsing a particular source file. In some cases it is necessary that you specify these arguments correctly or the code that is parsed will not accurately reflect the code that is included when you run make. For example, if a chunk of code is enclosed within #if BLA / #endif tags, and BLA is supposed to be defined (or not) via the compiler arguments, then unless you provide the same argument when creating the translation unit the code will not be parsed.

This problem, and the previous one, could easily be solved if we could tell 1) every file processed when make is run and 2) the command line arguments passed to the compiler on each invocation. After browsing the Clang documentation for a solution, a friend of mine suggested the obvious answer of just wrapping the compiler in a script that logs the relevant information during the build. The script creplace.py provides this functionality. By pointing the CC and CXX environment variables at it you should get a log of all files processed by the compiler when make is run, as well as the arguments passed.
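
The repository's creplace.py is the authoritative version, but the idea is simple enough to sketch here. Note the log format below is my own invention, not the one the repo's ccargparse.py actually parses:

#!/usr/bin/env python
import json
import os
import subprocess
import sys

# Log every compiler invocation (cwd + argv), then forward to the real
# compiler so the build proceeds as normal. Point CC/CXX at this script.
LOG = os.environ.get("CC_LOG", "/tmp/cc_invocations.jsonl")
REAL_CC = os.environ.get("REAL_CC", "clang")

with open(LOG, "a") as f:
    json.dump({"cwd": os.getcwd(), "args": sys.argv[1:]}, f)
    f.write("\n")

sys.exit(subprocess.call([REAL_CC] + sys.argv[1:]))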

The ccargparse.py module provides the load_project_data function which you can then use to build a Python dictionary of the above information. If there’s an inbuilt Clang API to do all this I didn’t find it [2], but if not you should be able to just drop in my code and work from there! See the main function of parse_php.py for exact usage details.

Once the compiler arguments have been extracted and converted to a list they can just be provided to the index.parse method when constructing the translation unit for a given file. e.g.

import clang.cindex as clang  # the libclang Python bindings

comp_args = load_project_data(cc_file)  # maps source file -> compiler args
args = comp_args[file_path]
index = clang.Index.create()
tu = index.parse(file_path, args=args)

How do I find all functions within a file?

Our first task is to find all calls to zend_parse_parameters and note the name of the top level backend API function, so that we can later relate this to a user-space PHP function call. To do this we first need to find all function declarations within a translation unit for a source file, e.g. the process_all_functions method:

for c in tu.cursor.get_children():
    if c.kind == clang.CursorKind.FUNCTION_DECL:
        # Skip declarations dragged in from #include'd files
        f = c.location.file
        if file_filter and f.name != file_filter:
            continue
        fmt_str = process_function(c)

This is a fairly simple task: we iterate over all children of the top level translation unit for the given source file and compare the cursor kind against FUNCTION_DECL, for a function declaration. It's worth noting that the children of a translation unit can include various things from files that are #include'd. As a result, you may want to check that the file name associated with the child's location is the name of the file passed to index.parse.

How do I find all calls to a particular function?

For each function declaration we can then iterate over all of its children and search for those with the kind CALL_EXPR. This is performed in the process_function method. Firstly, we need to check if the function being called has the correct name. The first child of a CALL_EXPR AST node will represent the function itself, with the rest representing the arguments to this function.

The last statement isn’t quite correct. The C and C++ standards specify a number of conversions that can take place on a function call, so the direct children of a CALL_EXPR AST node will represent these conversions and then the direct child of these conversion nodes will contain the details we require. Clang does have a mode to dump the AST for a given source file so if you’re curious what the AST actually looks like try passing -ast-dump or -ast-dump-xml as parameters to the clang compiler.
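
Alternatively, a few lines of Python against the bindings will print the same structure, which is handy for checking exactly which conversion nodes wrap a call before writing traversal code against them:

def dump(node, depth=0):
    # Recursively print the cursor kind and name of every AST node
    print("%s%s %s" % ("  " * depth, node.kind, node.displayname))
    for child in node.get_children():
        dump(child, depth + 1)

# Reusing the translation unit built earlier:
dump(tu.cursor)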

As an example, on a function call we encounter a CALL_EXPR node and its first child represents the decay of the function to a pointer as specified by the C/C++ standards. This node will have the kind UNEXPOSED_EXPR. It will have a single child of type DECL_REF_EXPR from which we can retrieve the function name via the displayname attribute. The following code in process_function performs the function name extraction and comparison.

if n.kind == clang.CursorKind.CALL_EXPR:
    # First child: the function expression (wrapped in a conversion node);
    # remaining children: the call's arguments
    unexposed_exprs = list(n.get_children())
    func_name_node = get_child(unexposed_exprs[0], 0)
    if func_name_node.displayname == "zend_parse_parameters":
        ...

How do I extract the parameters to a function call?

Once we know we have the correct function call we then need to extract the format string parameter that provides the type specification for the user-space API call. As with the node representing the function, arguments may also undergo conversions. The second argument to zend_parse_parameters is the one we are interested in, and it is usually specified as a string literal. However, string literals undergo an implicit conversion (array-to-pointer decay) when passed as arguments. To account for this, we again go through an UNEXPOSED_EXPR node representing the conversion to access the node for the string literal itself.

Things get a little inconvenient at this point. Once we have the node representing the string literal we ideally want some way to access the actual value specified in the source code. It turns out that this functionality hasn’t been added to the Python bindings around libclang yet. Fortunately, a guy by the name of Gregory Szorc has been working on a branch of libclang and the associated Python bindings that adds the get_tokens method to AST nodes. His code is great but also contains a bug when get_tokens is called on an empty string literal. Until that gets fixed I’ve mirrored his cindex.py and enumerations.py files. Drop both into your clang/bindings/python/clang directory and you should be good to go!

The actual code to extract the string literal given the node representing it is then straightforward.

if tkn_container.kind != clang.CursorKind.STRING_LITERAL:
    # The format string was passed via a variable, not a literal
    raise VariableArgumentError()

tokens = list(tkn_container.get_tokens())
if tokens[0] is None:
    # get_tokens yields None for an empty string literal
    return ""

The check on the kind of the node is required as in a minuscule number of cases the format string parameter is passed via a variable rather than directly specified. This situation is so rare that it isn't worth handling. The only other slightly odd thing is that when get_tokens is called on an AST node representing a string literal, like the first parameter in func("xyz", a), it will return a generator for two elements. The first will be the string literal itself, including the double quotes, while the second will represent the comma separator. I'm not sure if this is a bug or not, but it's not a big deal. If the string literal is empty then the generator will return None.

Conclusion

At this point we can associate the functions calling zend_parse_parameters with the format string passed to that call, and thus the types and number of arguments that the function expects. In a fairly small amount of Python we have a robust solution to problem 2, discussed earlier. A question you may have is How do you then associate the function name/argument specification from the C code to the corresponding user-space PHP function?. Well, PHP is nice in that it doesn't do a massive amount of name mangling: if we have a user-space function called func_one then the backend C function will consistently be called zif_func_one (in the code it will probably look like PHP_FUNCTION(func_one), but Clang will expand the macro for you).
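
So the mapping back to user-space names is a one-line string operation, something like:

def userspace_name(c_name):
    # PHP_FUNCTION(foo) expands to a C function named zif_foo, so
    # stripping the prefix recovers the user-space function name
    return c_name[len("zif_"):] if c_name.startswith("zif_") else c_name

assert userspace_name("zif_levenshtein") == "levenshtein"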

When parse_php.py terminates it will output something like the following:

INFO:main:API info for 518 functions in 68 files written to php_func_args.txt

Inspecting the output file we’ll see function to type specification mappings:

# /Users/sean/Documents/git/php-src/ext/standard/levenshtein.c
zif_levenshtein ss sss sslll
# /Users/sean/Documents/git/php-src/main/main.c
zif_set_time_limit l
# /Users/sean/Documents/git/php-src/ext/standard/versioning.c
zif_version_compare ss|s
...

This tells us that a user-space PHP script can correctly call levenshtein with two strings, three strings or two strings and three longs. It can call set_time_limit with a single long only. version_compare can be called with two or three strings and so on and so forth. So, with this information at hand we can construct a PHP fuzzer that no longer has to guess at the number of arguments to a function or their types.
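
To close the loop, here is a sketch of how a fuzzer might consume these mappings. The generators are deliberately dumb and only cover the two specifiers in the example output above; a real fuzzer would want generators for every type, handling of the optional-argument marker and much smarter value selection:

import random

# Very dumb per-type value generators; 's' and 'l' only, for brevity
GEN = {
    "s": lambda: '"%s"' % ("A" * random.randint(0, 64)),
    "l": lambda: str(random.randint(-2**31, 2**31 - 1)),
}

def call_for(func, spec):
    # Pick one of the allowed argument layouts, e.g. "ss" from "ss sss sslll",
    # and emit a syntactically valid call with arguments of the right types
    layout = random.choice(spec.split())
    args = ", ".join(GEN[c]() for c in layout if c in GEN)
    return "%s(%s);" % (func.replace("zif_", "", 1), args)

print(call_for("zif_levenshtein", "ss sss sslll"))
# e.g. levenshtein("AAAA", "AA");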

Extensions

The most obvious extension of this code that comes to mind is pulling out numeric values and string literals from comparison operators. For example, say we're analyzing a function with the following code:

if (x == 0xc0)
    ...
else
    ...

It might be an idea to pull out the constant 0xc0 and add it to a list of possible values for numeric input types when fuzzing the function in question. Naturally, you’ll end up with extra constants that are not compared with the function inputs but I would guess that this approach would lead to some gain in code coverage during fuzzing.
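
A rough cut of that harvesting pass, reusing the traversal machinery from above. Note that libclang doesn't directly expose the operator of a BINARY_OPERATOR node, so this grabs literals under any binary operator rather than just comparisons:

def harvest_constants(node, found):
    # Collect the source text of integer literals appearing under
    # binary operators, e.g. the 0xc0 in 'x == 0xc0'
    if node.kind == clang.CursorKind.BINARY_OPERATOR:
        for child in node.get_children():
            if child.kind == clang.CursorKind.INTEGER_LITERAL:
                tokens = list(child.get_tokens())
                if tokens and tokens[0] is not None:
                    found.add(tokens[0].spelling)
    for child in node.get_children():
        harvest_constants(child, found)

constants = set()
harvest_constants(tu.cursor, constants)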

Resources

Introduction to writing Clang tools
Eli Bendersky’s blog post on parsing C with libclang
Unofficial Clang tutorials

[1] This is what I did when I first fuzzed PHP a long time ago. You hit a lot of incorrect type/argument count errors but you can get so many executions per second that in general it works out just fine =)

[2] Two days after I wrote this it seems that support for the compilation database functionality was added to the Python bindings. You can read about this feature here. Using this feature is probably a more portable way of gathering compiler arguments and pumping them into whatever APIs require them.