SMT Solvers for Software Security (USENIX WOOT’12)

At WOOT’12 a paper co-written by Julien Vanegue, Rolf Rolles and me will be presented under the title “SMT Solvers for Software Security”. An up-to-date version can be found in the Articles/Presentation section of this site.

In short, the message of this paper is “SMT solvers are well capable of handling decision problems from security properties. However, specific problem domains usually require domain specific modeling approaches. Important limitations, challenges, and research opportunities remain in developing appropriate models for the three areas we discuss – vulnerability discovery, exploit development, and bypassing of copy protection”. The motivation for writing this paper is to discuss these limitations, why they exist, and hopefully encourage more work on the modeling and constraint generation sides of these problems.

A quick review of the publication lists from major academic conferences focused on software security will show a massive number of papers discussing solutions based on SMT technology. There are good reasons for this: 1) SMT-backed approaches such as symbolic/concolic execution have proved to be powerful tools on certain problems, and 2) there is an increasing number of freely available frameworks.

The primary domain where SMT solvers have shone, in my opinion, is in the discovery of bugs related to unsafe integer arithmetic using symbolic/concolic execution. There’s a fairly obvious reason why this is the case: the quantifier-free, fixed-size bitvector logic supported by SMT solvers provides direct support for the precise representation of arithmetic at the assembly level. In other words, one does not have to do an excessive amount of work when modeling the semantics of a program to produce a representation suitable for the detection of unsafe arithmetic. It suffices to perform a near direct translation from the executed instructions to the primitives provided by SMT solvers.
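As an illustration of just how direct that translation can be, here is the sort of check involved, sketched with Z3’s Python bindings (a simplification for this post, not the tooling from any particular paper). Bitvector arithmetic in the solver wraps modulo 2^32 exactly like the underlying x86 operation, so asking whether an add can wrap is a one-line constraint:

from z3 import *

# 32-bit registers as bitvector variables
eax = BitVec('eax', 32)
ebx = BitVec('ebx', 32)
result = eax + ebx            # add eax, ebx: wraps modulo 2**32

s = Solver()
s.add(ULT(result, eax))       # unsigned wraparound: result < first operand
if s.check() == sat:
    print(s.model())          # e.g. eax = 0xffffffff, ebx = 0x1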

The exploit generation part of the paper deals with what happens when one takes the technology for solving the above problem and applies it to a new problem domain. In particular, a domain in which the model produced simply by tracking transformations and constraints on input data no longer contains enough information to inform a solution. In the case of exploit generation, for example, such models do not account for things like the relationship between user input and memory layout. Obviously enough, when reasoning about a formula produced from such a model a solver cannot account for information that is not present. Thus, no amount of computational capacity or solver improvement can produce an effective solution.

SMT solvers are powerful tools and symbolic/concolic execution can be an effective technique. However, one thing I’ve learned over the past few years is that they don’t remove the obligation and effort required to accurately model the problem you’re trying to solve. You can throw generic symbolic execution frameworks at a problem but if you’re interested in anything more complex than low level arithmetic relationships you’ve got work to do!

Anatomy of a Symbolic Emulator, Part 3: Processing Symbolic Data & Generating New Inputs

In this final video in the series we go over how to generate new inputs for a program once we detect a user-influenced conditional branch. At the end there is also an example of the type of condition/resulting formula that we get from working on a real file parser, in this case libwebp.
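For those who prefer text to video, the core of the input generation step can be sketched with Z3’s Python bindings (a simplification, not the tool’s actual API): take the constraints gathered along the path to a user-influenced branch, negate the branch condition, and ask the solver for an assignment:

from z3 import *

# a symbolic input byte and the constraints the executed path has
# already placed on it (illustrative values)
b0 = BitVec('b0', 8)
path_cond = [UGT(b0, 10)]

branch = (b0 == 42)           # the user-influenced conditional we hit

s = Solver()
s.add(*path_cond)
s.add(Not(branch))            # force execution down the other side
if s.check() == sat:
    print(s.model()[b0])      # a new input byte exercising the new path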

(You probably want to click the “Watch on YouTube” option on the bottom right of the video and set the quality to 720p)

Conclusion
This type of emulation, input generation and formula checking does not need to be limited to conditional jumps. As I discussed in a previous post, you can use a similar approach to discover variable ranges, check for variable relationships and assist in figuring out complex algorithms. For example, one could generate a query to the solver every time an argument to malloc is found to be influenced by the user, or export a list of all functions that operate on user-influenced data to IDA for manual review. (In fact, a light-weight version of this approach in combination with fuzzing and an IDA importer is possibly more generally useful to an individual auditor than going down the route of full-on whitebox fuzzing. More on that later =))
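To give a rough idea of what the malloc example might look like in code (the helper and its arguments are hypothetical, for illustration only), the query is just a satisfiability check over the path condition plus a “size is huge” constraint:

from z3 import *

def malloc_arg_can_be_huge(path_cond, size_expr, threshold=0x7fffffff):
    # path_cond: list of constraints on user input along this path
    # size_expr: 32-bit symbolic expression passed to malloc as the size
    s = Solver()
    s.add(*path_cond)
    s.add(UGT(size_expr, BitVecVal(threshold, 32)))
    return s.check() == sat   # sat => the user can force a huge allocation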

Anyway, I hope these videos provide some insight into how a whitebox fuzzer might work as well as one approach to building a symbolic emulator. To give an idea of the effort involved – the combined whitebox fuzzing, trace parsing and emulation code (along with supporting libraries) comes to around 10,000 lines of Python. Of this, the emulator itself is only 3000 lines or so. The PIN tracer is just under 1000 lines of C++.

Tracing is currently fairly unoptimised and parsing something like a video or image while tracing can result in a factor of 10-100 increase in running time. This usually means a wait of 30 seconds, which isn’t too bad for whitebox fuzzing as tracing is not performed too often, but for other uses of a symbolic emulator (like tracing while fuzzing normally) this will require some work. The emulator itself is Python based and as such is not lightning fast. In the default run-mode it emulates ~5000 instructions per second, which translates to about 30-40 minutes per trace of an average file parser. This isn’t as bad as you might think however, as the test cases generated tend to be much more effective at hitting new code than those you would get from dumb fuzzing. Despite this we still need performance gains and I’m working on a few different solutions for that. Somewhere around 30,000+ instructions per second is what I would consider approaching acceptable =)

To preempt the inevitable questions – for now JESTER is not publicly available but that may change in the future. It’s very much a research prototype at the moment where we’re testing out several approaches to improving performance and general usefulness. However, if you are interested in working on this type of research post a comment with a contact address (it won’t appear publicly) as I’m fairly sure we are currently hiring.

Anatomy of a Symbolic Emulator, Part 2: Introducing Symbolic Data

In the previous post I discussed one way to go about gathering a trace for emulation. In this one I’m going to talk about how we go about emulating such a trace, how and why we hook functions as they are emulated, and how symbolic operations are performed.

As before, this post is accompanied by a video which demonstrates the code in action. Unlike the previous post I’ve decided to skip the paragraphs of rambling and instead most of the info is in the actual video itself =)

Topics covered:
– Introducing symbolic data via function hooks
– Performing computations on symbolic data
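To give a flavour of the first topic in code, here is a minimal sketch of what a hook on read() might do; the emulator interface shown is hypothetical and purely illustrative:

# hypothetical emulator interface, for illustration only
class ReadHook(object):
    """Hook for read(fd, buf, count). On return, mark the bytes the
    kernel wrote into buf as symbolic so that later instructions
    operating on them build expression trees rather than concrete
    values."""

    def on_return(self, emu, args, ret_val):
        fd, buf, count = args
        if emu.is_user_input(fd):            # e.g. the fd of the parsed file
            for i in range(ret_val):         # ret_val = bytes actually read
                emu.mem.store_symbolic(buf + i, 'input_%d' % i)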

(You probably want to click the “Watch on YouTube” option on the bottom right of the video and set the quality to 720p. Btw, near the end of the video I said something along the lines of “one of the advantages of whitebox fuzzing over symbolic emulation”. That makes no sense =) What I meant to say was “one of the advantages of whitebox fuzzing over normal symbolic execution”.)

Anatomy of a Symbolic Emulator, Part 1: Trace Generation

A couple of months ago there was an ACM article on the SAGE whitebox fuzzing system from Microsoft Research. SAGE is one of the most interesting products of research on automated program testing in recent years and, according to Microsoft, has been used to find a massive number of bugs in their various file parsers.

At its core, SAGE contains a symbolic emulator for executing instruction traces over symbolic data. As well as whitebox fuzzing, symbolic emulators are fairly useful things for a variety of reverse engineering, vulnerability discovery and program analysis tasks. Essentially, a symbolic emulator is a CPU emulator that not only supports operations on concrete numeric values but also on abstract values that may represent a range of concrete values.

In this series of posts I’m going to give an overview of a Python-based symbolic emulator for x86 that I’ve been working on (called JESTER) and show how it can be applied to the problem of whitebox fuzzing. Hopefully this will give an idea of how symbolic emulation works (it’s fairly simple) and also provide some insight into how systems like SAGE operate.

Consider the x86 instruction add eax, ebx. Operating over concrete values an emulator will do the obvious thing of taking the value in EAX, adding it to EBX and then storing the result back in EAX. It will also update the various flags that are affected. Over symbolic values however the result is a bit more interesting. Let’s assume that EAX contains the abstract value V1, which represents an unconstrained 32-bit variable, and EBX contains the concrete value 0x10. In this case the emulator will create a new abstract value V2, which represents the addition of V1 and 0x10, and store that back in EAX. Diagrammatically, we can see that EAX now contains something that is a function rather than a single value.

      v1   10
       \   /
    EAX: +    

A slightly more complex diagram shows what the Zero Flag would hold after the above instruction.

      v1   10
       \   /
         +    0
          \  /
           ==   1    0
            \   |   /
        ZF: if-then-else

I purposefully used the word ‘function’ because what we end up with, in registers and memory, are expression trees that map from a given set of inputs to an output. As more instructions are emulated these trees get bigger and more difficult to reason about so people usually take the approach of exporting them to a SMT solver and querying their models that way. The obvious applications being input crafting, tracking user-influenced data and checking security properties. This is fairly well documented in previous posts and in a decade worth of academic literature so I won’t delve into the details.
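To make that concrete, here is roughly what the ZF tree above looks like when handed to a solver; this uses Z3’s Python bindings directly rather than the emulator’s own expression classes:

from z3 import *

v1 = BitVec('v1', 32)
eax = v1 + 0x10                     # the expression tree now in EAX

# the if-then-else tree held by the Zero Flag after the add
zf = If(eax == 0, BitVecVal(1, 1), BitVecVal(0, 1))

s = Solver()
s.add(zf == 1)                      # which input makes the add produce 0?
if s.check() == sat:
    print(s.model()[v1])            # 0xfffffff0: v1 + 0x10 wraps to 0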

The point of this post is instead to look at the overall architecture of a symbolic emulator with the aim of illuminating some of the components involved, more directly than is typically done in formal descriptions. I also want to give people an idea of how much or how little effort is involved in building these tools. In order to demonstrate the use of a symbolic emulator I’ll apply it to the problem of whitebox fuzzing, i.e. using a symbolic emulator in combination with an SMT solver to generate inputs for a program guaranteed to force execution down a new path.

While writing this series of posts Rolf Rolles posted a great video/blog entry on the topic of input crafting using an SMT solver. Taking a leaf out of his book I’ve decided to accompany these with a video that demonstrates the tools described in operation and should ideally give some insight into their construction. The video is linked at the end but the following wall of text will give some context and might be worth glancing over. This isn’t the most entertaining of entries in the series and is mostly for completeness so if you’re bored out of your mind I accept minimal responsibility =)

1. Trace generation

An emulator needs some way to know what instructions execute and it also needs a starting memory and thread context. There are a few different approaches to getting this information. The BitBlaze/BAP research groups modified QEMU and hook in directly there, the guys working on S2E do something similar, and I previously wrote a C++ library that was used as part of a Pintool at run time. There are a couple of problems with tying your emulation directly into the run-time environment of the tool however. Firstly, it’s a lot more annoying to debug an extension to QEMU or PIN than a separate emulator and secondly, it prevents you from doing the emulation on a separate machine from the tracing. The second issue is probably the more important in the long run, as really scaling whitebox fuzzing to the point where it is useful requires parallelism.

The approach I took this time around is directly inspired by the work of MSR on their Nirvana/iDNA tool, but much simpler. Instead of using the Pintool to do the emulation I use a lightweight one to just trace the instructions executed and other interesting events, like image loads/unloads, system calls and debugging info. If you’ve used PIN before then most of what I’m about to describe will be obvious and fairly boring so you might want to skip on to part 2 of this series of entries.

The trace format is uncompressed and unoptimised and to date I’ve not had any problems with that. A typical segment just looks as follows (L denotes an image load, I an instruction execution and C provides debugging information as discussed below):

L;4;/lib32/libc.so.6;f5c54000;157244
L;5;/lib32/libm.so.6;f5c2e000;24790
C;0;EAX:ffb292a4;EBX:f5da9ff4;ECX:53f78923;EDX:5;ESP:ffb291f8;EBP:ffb291f8 ... 
I;0;8048fc5
C;0;EAX:ffb292a4;EBX:f5da9ff4;ECX:53f78923;EDX:5;ESP:ffb291f0;EBP:ffb291f8 ... 
I;0;8048fc8
C;0;EAX:ffb292a4;EBX:f5da9ff4;ECX:53f78923;EDX:5;ESP:ffb291ec;EBP:ffb291f8 ... 

In the early stages of the project I worried about this and thought I’d have to come up with some compression method but that hasn’t been the case. Most file parsers generate traces that can be measured in 10s of millions of instructions and things of that scale easily fit in a few gigabytes of storage.
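Parsing the format back out for the emulator is correspondingly trivial. The following sketch infers the field meanings from the snippet above (the exact fields vary by event type, so treat it as illustrative):

def parse_trace(path):
    # L: image load, I: instruction executed, C: register check record
    for line in open(path):
        fields = line.rstrip().split(';')
        if fields[0] == 'I':
            # thread id, instruction address (hex)
            yield 'ins', int(fields[1]), int(fields[2], 16)
        elif fields[0] == 'L':
            # image path, load address (hex), size
            yield 'load', fields[2], int(fields[3], 16), fields[4]
        elif fields[0] == 'C':
            # thread id plus REG:value pairs used for debugging
            regs = dict(f.split(':', 1) for f in fields[2:] if ':' in f)
            yield 'check', int(fields[1]), regs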

1.1 Debugging Assistance

Writing an emulator of any kind can be tedious work. It’s easy to make mistakes, getting the semantics of an instruction slightly wrong or setting a flag incorrectly. Initially I tried to counter this by writing unit tests, but it quickly became obvious that 1) these were never going to be exhaustive and 2) they were as likely to have mistakes as the emulator code itself. Instead, I added a debug mode to the tracer that logs the register values after each instruction (the lines starting with a “C” above). This allows the emulator to compare its register values to the ones we know it should have and highlight any discrepancies. Tracing everything in /usr/bin/ and checking these values is a hell of a lot more exhaustive than any unit testing I would have done! The only reason I’m mentioning this is that I’d recommend it to anyone writing something of this nature. The best tests are by far those you can extract from real binaries.
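The comparison itself is as simple as you would expect, something along these lines (illustrative, using the register dictionary parsed from a ‘C’ record):

def check_registers(addr, emu_regs, trace_regs):
    # emu_regs: the emulator's register file, name -> integer value
    # trace_regs: name -> hex string, parsed from a 'C' record
    for reg, expected in trace_regs.items():
        if reg in emu_regs and emu_regs[reg] != int(expected, 16):
            print('Discrepancy at %08x: %s = %08x, trace has %s' % (
                addr, reg, emu_regs[reg], expected))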

1.2 Handling system calls

One of the disadvantages of using a user-land tracer is that you miss out on any updates to memory that happen within the kernel. The only real way to handle this correctly is to define per-system-call handlers that know which memory addresses a system call will update based on its arguments or return value. In PIN this is fairly straightforward: you register syscall entry and exit handlers, get the syscall arguments or return value, and then log whatever you need on exit.

#include <sys/syscall.h>   // SYS_read and friends

int main(int argc, char *argv[])
{
        ...

        PIN_AddSyscallEntryFunction(SyscallEntry, 0);
        PIN_AddSyscallExitFunction(SyscallExit, 0);

        ...
}

VOID SyscallEntry(THREADID tid, CONTEXT *ctxt, SYSCALL_STANDARD std, VOID *v)
{
        ADDRINT syscall_num = PIN_GetSyscallNumber(ctxt, std);
        ...
#ifdef LINUX
        // Handle Linux syscalls
        switch (syscall_num) {
        case SYS_read:
        {
                // args 1 and 2 of read() are the destination buffer and
                // the requested size
                ADDRINT read_buf_ptr = PIN_GetSyscallArgument(ctxt, std, 1);
                size_t read_sz = PIN_GetSyscallArgument(ctxt, std, 2);

                // t_data is our per-thread state; record the range the
                // kernel may write so the exit handler can log its contents
                t_data->log_addrs[read_buf_ptr] = read_sz;
                break;
        }
        ...
        }
#endif
}

Handling each and every system call might seem like an onerous task but if you’re working on particular types of software (e.g. file parsers) then you can get away with a minimal subset, e.g. open, read, lseek, mmap and a few others. My general approach is to just add them as necessary. You’ll encounter many more along the way but it turns out that not a whole lot of them end up having any interaction with the user-controlled data you’re interested in.

In the trace log format I included support for events other than those shown in the above snippet. For syscalls, as just discussed, there is the M event, which looks as follows and tells the emulator to update the given memory address with the contents of a file:

M;0;f5f97000:syscall_c0_f5f97000_1000_1

There is also the ‘R’ event which tells the emulator to update a register with a particular value. This is useful for instructions you can’t handle for whatever reason. Other than that there isn’t really much to capturing a trace. The only thing I haven’t mentioned is that on starting tracing, either at a given address or the program’s entry point, you also need to log the program’s memory and thread contexts at that point in order to give your emulator its starting values. This is fairly straightforward though and PIN provides all the required API calls.

(You probably want to click the “Watch on YouTube” option on the bottom right of the video and set the quality to 720p. The tools discussed are not publicly available but that may change in the future.)

Finding Optimal Solutions to Arithmetic Constraints

This post is a follow-on to my previous ones on automatically determining variable ranges and on uses for solvers in code auditing sessions. In the first of those posts I showed how we can use the symbolic execution engine of Immunity Debugger (ID) to automatically model code and then add extra constraints to determine how it restricts the state space for certain variables. In the second I looked at one use case for manual modelling of code and proving properties about it as part of C++ auditing.

In this post I’m going to talk about a problem that lies between the previous two cases. That is, manually modelling code, but using Python classes provided by ID in a much more natural way than with the SMT-LIB language, and looking for optimal solutions to a problem rather than a single one or all possible solutions.

Consider the following code, produced by the Hex-Rays decompiler from an x86 binary. It was used frequently throughout the binary in question to limit the ranges allowed by particular variables. The first task is to verify that it does restrict the ranges of width and height as it is designed to. Its purpose is to ensure that v3 * height is less than 0x7300000, where v3 is derived from width.


int __usercall check_ovf(int width, int height,
    int res_struct)
{
  int v3; // ecx@1

  v3 = ((width + 31) >> 3) & 0xFFFFFFFC;
  *(_DWORD *)(res_struct + 12) = width;
  *(_DWORD *)(res_struct + 16) = height;
  *(_DWORD *)(res_struct + 20) = v3;
  if ( width <= 0 || height <= 0 ) // 1
  {
    *(_DWORD *)(res_struct + 24) = 0;
    *(_DWORD *)(res_struct + 28) = 0;
  }
  else
  {
    if ( height * v3 <= 0 || 120586240 / v3 <= height ) // 2
      *(_DWORD *)(res_struct + 24) = 0;
    else
      *(_DWORD *)(res_struct + 24) = malloc_wrapper(res_struct,
                                       120586240 % v3,
                                       height * v3); // 3
    *(_DWORD *)(res_struct + 28) = 1;
  }
  return res_struct;
}

If the above code reaches the line marked as 3 a malloc call will occur with height * v3 as the size argument. Can this overflow? Given the checks at 1 and 2 it’s relatively clear that this cannot occur but for the purposes of later tasks we will model and verify the code.

One of the things that becomes clear when using the SMT-LIB language (even version 2, which is considerably nicer than version 1) is that using it directly is still quite cumbersome. This is why in recent versions of Immunity Debugger we have added wrappers around the CVC3 solver that allow one to build a model of code using Python expressions (credit for this goes to Pablo, who did an awesome job). This was one of the things we covered during the recent Master Class at Infiltrate and people found it far easier than using the SMT-LIB language directly.

Essentially, we have Expression objects that represent variables or concrete values, and the operators on these expressions (+, -, %, >> etc.) are overridden so that they make assertions on the solver’s state. For example, if x and y are Expression objects then x + y is also an Expression object representing the addition of x and y in the current solver context. Calling the assertIt() function of any Expression object then asserts that the condition holds.

With this in mind, we can model the decompiled code in Python as follows:


import sys
import time

sys.path.append('C:\\Program Files\\Immunity Inc\\Immunity Debugger\\Libs\\x86smt')

from prettysolver import Expression
from smtlib2exporter import SmtLib2Exporter

def check_sat():
    img_width = Expression("img_width", signed=True)
    img_height = Expression("img_height", signed=True)

    const = Expression("const_val")
    (const == 0x7300000).assertIt()

    (img_width > 0).assertIt()
    (img_height > 0).assertIt()

    tmp_var = ((img_width + 31) >> 3) & 0xfffffffc
    (img_height * tmp_var > 0).assertIt()
    (const / tmp_var > img_height).assertIt()

    expr = (((tmp_var * img_height) &
            0xffffffff00000000) != 0)  # 1
    expr.assertIt()

    s = SmtLib2Exporter()
    s.dump_to_file(expr, 'test.smt2')  # 2

    # After this we can check with z3 /smt2 /m test.smt2
    # Alternatively we can use expr.isSAT which calls CVC3 but it
    # is a much slower solver

    start_time = time.time()
    if expr.isSAT():
        print 'SAT'
        print expr.getConcreteModel()
    else:
        print 'UNSAT'

    print 'Total run time: %d seconds' % (time.time() - start_time)

if __name__ == '__main__':
    check_sat()


The above code (which can be run from the command line completely independently of Immunity Debugger) models the parts of the decompiled version that we care about. The added condition, marked as 1, checks for integer overflow by performing a 64-bit multiplication and then checking whether the upper 32 bits are 0 or not. The first thing to note about this code is that it models the decompiled version quite naturally and is far easier to write and understand than the SMT-LIB alternative. This makes this kind of approach to analysing code much more tractable and means that once you are familiar with the API you can model quite large functions in very little time. For example, asserting that the condition if ( height * v3 <= 0 || 120586240 / v3 <= height ) must be false translates to the following, which is syntactically quite close to the C code:


tmp_var = ((img_width + 31) >> 3) & 0xfffffffc
(img_height * tmp_var > 0).assertIt()
(const / tmp_var > img_height).assertIt()


Checking if the function does in fact prevent integer overflow is then simple.

Using the solver to check if an overflow is possible on the argument to malloc

So, modulo modelling errors on our behalf, the check is safe and prevents an overflow on the size argument to malloc*. So what now? Well, in the case of this particular code-base an interesting behaviour appeared later in the code if the product of width and height was sufficiently large and the above function succeeded in allocating memory. That is, height and width were small enough that height * v3 was less than 0x7300000, but multiplication with other non-constants later in the code could then overflow. The question we then want to answer is: what is the maximum value of width * height that can be achieved while still passing the above check?

Solving Optimisation Problems with Universal Quantification**

This problem is essentially one of optimisation. There are many assignments to the input variables that will pass the overflow check, but we are interested in those that maximise the resulting product of width and height. Naturally this problem can be solved on paper with relative ease for small code fragments, but with longer, more complex code that approach quickly becomes impractical and a solver-based one far more attractive.

The first thing to note is that at the line marked as 2 in the above Python code we used a useful new feature of ID, the SmtLib2Exporter***, to dump the model constructed in CVC3 out to a file in SMT-LIB v2 syntax. This is useful for two reasons: firstly, we can use a solver other than CVC3 (e.g. Z3, which is much faster for most problems), and secondly, we can manually modify the formula to include things that our Python wrapper currently doesn’t support, such as universal quantification.

Universal quantification, normally denoted by the symbol ∀ and the dual of existential quantification, is used to apply a predicate to all members of a set, e.g. ∀x ∈ N. P(x) states that some predicate P holds for all elements x of the natural numbers. Assume that the conditions of the integer overflow check are embodied in a function called sat_inputs and that M is the set of natural numbers modulo 2^32. The formula we want to check is then sat_inputs(x, y) ∧ (∀ a, b ∈ M. sat_inputs(a, b) ⇒ x * y ≥ a * b); that is, x and y are a solution if they satisfy sat_inputs and their product is greater than or equal to the product of any other pair of values that also satisfies sat_inputs. This property is encoded in the function is_largest in the following SMT-LIB v2 code. The rest of the code is dumped by the previous Python script, so checking this extra condition was less than 5 lines of work for us. The details of sat_inputs have been excluded for brevity; it simply encodes the semantics of the integer overflow checking code.


(declare-fun img_width () (_ BitVec 32))
(declare-fun img_height () (_ BitVec 32))

(define-fun sat_inputs ((img_width (_ BitVec 32)) (img_height (_ BitVec 32))) Bool
    (and
         ; Model of the code goes here
    )
)

(define-fun is_largest ((i (_ BitVec 32)) (j (_ BitVec 32))) Bool
    (forall ((a (_ BitVec 32)) (b (_ BitVec 32)))
        (=> (sat_inputs a b)
            (bvsge (bvmul i j) (bvmul a b))
        )
    )
)

(assert (and
    (sat_inputs img_width img_height)
    (is_largest img_width img_height)
    )
)

(check-sat)
(get-model)

Finding the maximum product of height and width

Running this through Z3 takes 270 seconds (using universal quantification results in a significant increase in the problem size) and we are provided with an assignment to the height and width variables that not only passes the checks in the code but is guaranteed to provide a maximal product. The end result is that height * width is 0x397fffe0, which is guaranteed to be the maximal product, while height * (((width + 31) >> 3) & 0xfffffffc) is 0x72ffffc, which, as you would expect, is less than 0x7300000 and therefore satisfies the conditions imposed by the code. Maximising or minimising other variables or products is similarly trivial, although for such a small code snippet not particularly interesting (even maximising the product of height and width can be done in your head pretty easily without a solver, but instructive examples aren’t meant to be rocket science).

This capability becomes far more interesting on larger or more complex functions and code paths. In such cases the ability to use a solver as a vehicle for precisely exploring the state space of a program can mean the difference between spotting a subtle bug and missing out.

Conclusion
By its nature code auditing is about tracking state spaces. The task is to discover those states implied by the code but not considered by the developer. In the same way that one may look at a painting and discover meaning not intended by the artist, an exploit developer will look at a program and discover a shadow-program: not designed or purposefully created, but in existence nonetheless. In places this shadow-program is thick; it is easily discovered, has many entry points and can be easily leveraged to provide exploit primitives. In other places it clings to the intended program and is barely noticeable. An accidental decrement here, an off-by-one bound there. Easy to miss and perhaps adding few states to the true program. It is from these cases that some of the most entertaining exploits derive, from state spaces that are barren and lacking in easily leveraged primitives. Discovering such gateway states, those that move us from the intended program to its more enjoyable twin, is an exercise in precision. This is why it continues to surprise me that we have so little tool support for truly extending our capacity to deal with massive state spaces in a precise fashion.

Of course we have some very useful features for making a program easier to manually analyse, among them the Hex-Rays decompiler and IDA’s various features for annotating and shaping a disassembly, as well as plugin architectures for writing your own tools for Immunity Debugger, IDA and others. What we lack is real, machine-driven assistance in determining the state space of a program and, dually, in providing reverse engineers with function- and basic-block-level information on how a given chunk of code affects this state space.

While efforts still need to be made to develop and integrate automation technologies into our workflows I hope this post, and the others, have provided some motivation to build tools that not only let us analyse code but that help us deal with the large state spaces underneath.


* As a side note, Z3 solves these constraints in about half a second. We hope to make it our solving backend pretty soon for obvious reasons.
** True optimisation problems in the domain of satisfiability are different and usually fall under the heading of MaxSAT and OptSAT. The former deals with maximising the number of satisfied clauses while the latter assigns weights to clauses and looks for solutions that minimise or maximise the sum of these weights. We are instead dealing with optimisation within the variables of the problem domain. OptSAT might provide interesting solutions for automatic gadget chaining though and is a fun research area.
*** This will be in the next release which should be out soon. If you want it now just drop me a mail.

Thanks to Rolf Rolles for initially pointing out the usefulness of Z3’s support for universal/existential quantification for similar problems.

Augment your Auditing with a Theorem Prover

A better post title may have been ‘Outsourcing your thinking when lack of sleep makes basic arithmetic difficult’. Anyway, consider the following code from the ArrayBuffer::tryAllocate function found in WebCore/html/canvas/ArrayBuffer.cpp.

85	void* ArrayBuffer::tryAllocate(unsigned numElements, unsigned elementByteSize)
86	{
87	    void* result;
88	    // Do not allow 32-bit overflow of the total size
89	    if (numElements) {
90	        unsigned totalSize = numElements * elementByteSize;
91	        if (totalSize / numElements != elementByteSize)
92	            return 0;
93	    }
94	    if (WTF::tryFastCalloc(numElements, elementByteSize).getValue(result))
95	        return result;
96	    return 0;
97	}


Let’s ignore for now whether or not you know that the check on line 91 is valid. For the sake of argument, assume you’re unsure whether this code does in fact prevent a situation where totalSize can overflow and the allocation function on line 94 can still get called. In such a situation you could try to reason through on paper whether the check is safe, but this is potentially error prone. The other option is to model the code and throw a theorem prover at it. The advantage of this approach is that if the code is safe then you end up with a proof of that fact, while if it’s unsafe you end up with a set of inputs that violate the safety check.

The following is my model of the above code:

[sean@sean-laptop bin]$ cat test.smt
(benchmark uint_ovf
:status unknown
:logic QF_BV

:extrafuns ((totalSize BitVec[32])(numElements BitVec[32])(elSize BitVec[32]))
:extrafuns ((a BitVec[64])(b BitVec[64])(big32 BitVec[64]))

; if (numElements) {
:assumption (bvugt numElements bv0[32])

; unsigned totalSize = numElements * elementByteSize;
:assumption (= totalSize (bvmul numElements elSize))

; totalSize / numElements != elementByteSize
:assumption (= elSize (bvudiv totalSize numElements))

; Check if an overflow is possible in the presence of the 
; above conditions
:assumption (= big32 bv4294967295[64])
:assumption (= a (zero_extend[32] numElements))
:assumption (= b (zero_extend[32] elSize))
:formula (bvugt (bvmul a b) big32)
)


(The above .smt file is in SMT-LIB format. Further information can be found at [1], [2] and [3].)
The above models tryAllocate pretty much exactly. The final three assumptions and the formula are used to check whether the integer overflow can occur. Mixing bitvectors of different sizes isn’t allowed for most operations, so it is necessary to first extend numElements and elSize into 64-bit variables. We then check for overflow by multiplying these 64-bit extensions and checking whether the result can be greater than 0xffffffff (big32) while also satisfying the conditions imposed in modelling the function.

[sean@sean-laptop bin]$ ./yices -V
Yices 2.0 prototype. Copyright SRI International, 2009
GMP 4.3.1. Copyright Free Software Foundation, Inc.
Build date: Fri Apr 23 11:15:16 PDT 2010
Platform: x86_64-unknown-linux-gnu (static)
[sean@sean-laptop bin]$ time ./yices -f < test.smt 
unsat

real	0m0.587s
user	0m0.576s
sys	0m0.008s


And there we have it, our proof of safety (modulo modelling errors on my behalf and implementation errors in yices). Given the assumptions specified it is not possible for the multiplication at line 90 to overflow and still satisfy the condition at line 91. Total time from starting modelling to a proof: about 10 minutes.
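As an aside, if you prefer Z3’s Python bindings to raw SMT-LIB, the same model can be sketched as follows; this is a translation I would expect to behave identically, not part of the workflow above:

from z3 import *

numElements, elSize = BitVecs('numElements elSize', 32)
totalSize = numElements * elSize                  # wraps modulo 2**32, as in C

s = Solver()
s.add(UGT(numElements, 0))                        # if (numElements) {
s.add(UDiv(totalSize, numElements) == elSize)     # the line 91 check passes

# can the full-width product still exceed 0xffffffff?
a = ZeroExt(32, numElements)
b = ZeroExt(32, elSize)
s.add(UGT(a * b, BitVecVal(0xffffffff, 64)))

print(s.check())                                  # unsat: the check is safe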

[1] Logic for quantifier free bit-vector logic
[2] Theory for fixed size bit-vectors
[3] SMT-LIB v2 reference

Edit: I modified the above formula and some of the text after noticing an error. Hence any comments below may refer to an older version of the post.

Code Analysis Carpentry (Ruxcon 2010)

Ruxcon is next month and I’ll be giving a talk titled Code Analysis Carpentry (Or how not to brain yourself when handed an SMT solving hammer). Here’s the abstract:


This talk will be one part “Oh look what we can do when we have a Python API for converting code into equations and solving them” and one part “Here’s why the world falls apart when we try to attack every problem in this way”.

One popular method of automated reasoning in the past few years has been to build equational representations of code paths and then use an SMT solver to resolve queries about their semantics. In this talk we will look at a number of problems that seem amenable to this type of analysis, including finding ROP gadgets, discovering variable ranges, searching for bugs resulting from arithmetic flaws, filtering valid paths, generating program inputs to trigger code and so on.

At their core many of these problems appear similar when looked at down the barrel of an SMT solver. On closer examination certain quirks divide them into those which are perfectly suited to such an approach and those that have to be beaten into submission, often with only a certain subset of the problem being solvable. Our goal will be to discover what problem attributes place them in each class by walking through implemented solutions for many of the tasks. Along the way the capabilities and limitations of the modern crop of SMT solvers will become apparent. We will conclude by mentioning some other techniques from static analysis that can be used alongside an SMT solver to complement its capabilities and alleviate some of the difficulties encountered.

The schedule is full of talks that look like fun. I’m really looking forward to seeing a few in particular, especially those by Silvio Cesare, Ben Nagy and kuza55. Looks like it’ll be just as entertaining as REcon (with hopefully not quite as much sun-burn)! Mostly I’m just looking forward to watching 30 people get on stage and try to out do each other with sheep related innuendo. If there isn’t at least one drunken presenter abusing the crowd I’m calling it a failure!