SAT/SMT Summer School 2011 Summary (Days 5 & 6)

Day 5

Sketching: Program Synthesis using SAT Solvers (Armando Solar-Lezama)

Armando started his talk by demonstrating the automatic synthesis of a program for swapping two integer variables without using a third. It’s a standard algorithm and quite small but was still cool to see. He then demonstrated a few more algorithms involving bit-level arithmetic. The implementation of this tool, called Sketch, can be found here. The demonstrations given were for a C-like language and apparently synthesis works quite well for algorithms based around bit-twiddling.

These programs were generated from program ‘sketches’, essentially algorithmic skeletons, and a test harness, similar to unit tests, that describes the desired semantics of the program. The sketches express the high level structure of the program and then the details are synthesized using a SAT solver and a refinement loop driven by the tests. The idea of the sketches is to make the problem tractable. The intuition for this was given by the example of curve fitting: that can be a difficult problem if you have nothing to go on but data points, whereas if you are told the curve is Gaussian, for example, the problem becomes much more feasible.

The synthesis algorithm first uses the sketched fragment to generate a candidate program and then a SAT solver is invoked to check whether the candidate satisfies the semantics described by the tests for all inputs. If not, a counter-example is generated and used to refine the next iteration of program generation. This description is heavily simplified; the full details can be found in Armando’s thesis.
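To make the refinement loop concrete, here is a minimal counterexample-guided synthesis sketch using Z3's Python API; it is my own illustration, not Sketch's implementation. The 'sketch' here is a program with a single hole, a 32-bit constant c, and the test-harness role is played by the spec prog(x) == x + x.

from z3 import *

# The sketch: a program with one hole, the unknown 32-bit constant c
def prog(x, c):
    return c * x

c = BitVec('c', 32)
x = BitVec('x', 32)
synth = Solver()   # proposes candidates consistent with the examples so far
synth.add(prog(BitVecVal(1, 32), c) == 2)   # seed example: prog(1) == 2

while synth.check() == sat:
    cand = synth.model().eval(c)
    # Verification: search for any input where the candidate breaks the spec
    verify = Solver()
    verify.add(prog(x, cand) != x + x)
    if verify.check() == unsat:
        print('synthesized c =', cand)
        break
    cex = verify.model().eval(x)          # counter-example refines the search
    synth.add(prog(cex, c) == cex + cex)
else:
    print('no constant satisfies the spec')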

This was yet another talk where there was an emphasis on pre-processing formulae before they get to the solver. The phrase ‘aggressive simplification’ was used over and over throughout the conference and for synthesis this involved dataflow analysis and expression reduction (e.g. y AND 1 reduces to y) as well as more standard common sub-expression elimination.

Day 6

Harnessing SMT power using the verification engine Boogie (Rustan Leino)

This talk began with some coding demonstrations in a language called Dafny that has support for function pre-conditions, post-conditions and loop invariants. As these features are added to a code-base they are checked in real time. Dafny is translated into an intermediate verification language (IVL) called Boogie (the verification system for it is open source under the MS public license) which can be converted into SMT form and then checked using the Z3 SMT solver. While this was fun to watch, most languages don’t have these built-in constructs for pre/post-conditions and invariants. Fortunately, Boogie is designed to be a generic IVL and translation tools exist for C, C++, C#, x86 and a variety of other languages (although from what I gather only some of these are publicly available and none are open source). As such, Boogie separates the verification of programs in a given language from the effort of converting them into a form that is amenable to checking.

The high level, take-away message from this talk was “Don’t go directly to the SMT solver”. It relates to the separation of concerns I just mentioned. This lets you share infrastructure and code for verification tasks that will be common between many languages and also means you have an intermediate form to perform simplification on before passing any formulae to a solver.

HAVOC: SMT solvers for precise and scalable reasoning of programs (Shuvendu Lahiri & Shaz Qadeer)

HAVOC is one such verification tool for C that makes use of Boogie. It adds support for user-defined contracts on C code that can then be checked. Based on the Houdini algorithm, HAVOC can also perform contract inference with the aim of alleviating much of the burden on the user.

I really wish we had something similar to HAVOC for code auditing (this was actually one of the use cases mentioned during the talk). I’m not sure how others work, but my approach to auditing source code essentially involves manually coming up with pre-conditions, post-conditions and invariants and then trying to verify them across the entire code-base by hand. This works, but with a tool-set of vim, ctags and cscope it’s also incredibly manual and seems like something that could at least be partially automated. It was mentioned that a more up-to-date version of HAVOC might be released soon so maybe this will become a possibility.

Non-DPLL Approaches to Boolean SAT Solving (Bart Selman & Carla Gomes)

This talk was on probabilistic approaches to SAT solving. These techniques still lag far behind DPLL based algorithms on industrial benchmarks but are apparently quite good on random instances with large numbers of variables.
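For a flavour of what the probabilistic techniques look like, below is a toy WalkSAT-style local search (the classic local search algorithm associated with Selman and colleagues) over DIMACS-style clauses. It can report SAT when it stumbles on a model but can never conclude UNSAT.

import random

def walksat(clauses, n_vars, p=0.5, max_flips=100000):
    # clauses: list of clauses, each a list of non-zero ints in DIMACS
    # style (positive = variable, negative = its negation)
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    sat_lit = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [cl for cl in clauses if not any(sat_lit(l) for l in cl)]
        if not unsat:
            return assign                        # found a model
        clause = random.choice(unsat)
        if random.random() < p:
            flip = abs(random.choice(clause))    # random walk move
        else:
            def cost(v):
                # number of unsatisfied clauses if v were flipped
                assign[v] = not assign[v]
                n = sum(1 for cl in clauses
                        if not any(sat_lit(l) for l in cl))
                assign[v] = not assign[v]
                return n
            flip = min((abs(l) for l in clause), key=cost)   # greedy move
        assign[flip] = not assign[flip]
    return None   # gave up: this says nothing about unsatisfiability

print(walksat([[1, 2], [-1, 2], [1, -2]], 2))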

Symbolic Execution and Automated Exploit Generation (David Brumley)

While I previously ranted about the paper this talk was based on, the talk itself was a far better portrayal of the research. Effectively, we’re in the very, very early stages of exploit generation research; while there have been some cool demos of how solvers might come into play we’re still targeting the most basic of vulnerabilities and in toy environments. All research has to start somewhere though (my own thesis was no more advanced), so it was good to see this presented with an honest reflection on how much work is left.

One interesting feature of the CMU work is preconditioned symbolic execution, which attaches preconditions to paths that must be satisfied before a path will be explored. This feature is missing from KLEE but would be just as useful for bug finding as for exploit generation. Efficient ways to come up with these preconditions remain an open research question.
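The mechanics are easy to sketch with a solver: conjoin the analyst-supplied precondition with each candidate path condition and prune any path the solver reports as infeasible. Below is a toy illustration using Z3's Python API; the input layout, path condition and precondition are all invented for illustration and are not from the CMU tools.

from z3 import *

# A 4-byte symbolic input (names are illustrative)
inp = [BitVec('in%d' % i, 8) for i in range(4)]

# Path condition collected by the symbolic executor for one candidate path
path_cond = And(inp[0] == ord('G'), inp[1] == ord('E'))

# Analyst-supplied precondition: the input is all uppercase alphabetic.
# Paths that contradict it are pruned before any expensive exploration.
precond = And([And(b >= ord('A'), b <= ord('Z')) for b in inp])

s = Solver()
s.add(precond, path_cond)
if s.check() == sat:
    print('explore this path; seed input:', s.model())
else:
    print('prune: no input satisfying the precondition reaches this path')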


The summer school was a great event and renewed my enthusiasm for formal methods as a feasible and cost effective basis for bug finding and exploit development. The best talks were those that presented an idea, gave extensive, concrete data to back it up and explained the core concepts and limitations with real world examples. I hope to see more papers and talks like this in the future.

A generic conclusion for the six days would be difficult, so instead here are the recurring themes that stood out to me across the talks and that may be relevant to someone implementing these systems:

– Focus on one thing and do it well. For example, separate instrumentation from symbolic execution from solving formulae.
– Aggressively simplify before invoking a solver. Simplification strategies varied from domain-specific (e.g. data-flow analysis) to generic logical reductions, but all of them greatly reduced the complexity of the problems that solvers had to deal with and thus increased the range of problems the tools could handle.
– Abstract, refine and repeat. The concept of a counter-example guided abstraction refinement loop seemed to be core to algorithms from hardware model checking, to program synthesis, to bug finding. In each, CEGAR was used to scale algorithms to more complex and more numerous problems by abstracting complexity and then reintroducing it as necessary.
– Nothing beats hard data for justifying conclusions and driving new research. This point was made in the earliest talks on comparing SAT solver algorithms and reiterated through SAGAN, the information collection/organisation system of SAGE. Designing up front to gather data lets you know where things are going wrong, keeps a record of improvements and makes for some pretty cool slides when you need to convince other people you’re not insane =)

SAT/SMT Summer School 2011 Summary (Days 3 & 4)

The slides for the summer school have started to go online so for the remaining days I’ll just give a quick summary of parts I thought were particularly interesting or comments that were made but not in the slides.

Day 3

BitBlaze & WebBlaze: Tools for computer security using SMT Solvers (Dawn Song & Prateek Saxena)

The first thing of note from this talk was a brief discussion on selecting paths for analysis during symbolic/concrete execution. Anyone who has used KLEE knows the lack of sane path selection mechanisms is a significant drawback so it was good to see one of the talks on this type of system discuss it. The methods used were dataflow and control flow distances to functions of interest.

An interesting problem tackled later in the talk was how to distinguish due from undue influence of tainted data over control flow. For example, it might be acceptable for user data to taint a value used in a switch statement that selects a function to run but it’s unlikely to be very good if it can taint a function pointer directly. Four different methods were presented for distinguishing these cases, the simplest being point-by-point exhaustion, using a solver to enumerate the possible target addresses in the address space. More complex probabilistic and information theoretic approaches were also discussed and are elaborated on in their paper. It would be nice to see some more experimental data on these more advanced methods though, as the evaluation is limited to 3 vulnerabilities and 3 benign cases.
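The point-by-point approach is simple to sketch: repeatedly ask the solver for a model of the tainted jump target and block each answer, counting the distinct feasible values. A toy version in Z3's Python API follows; the constraints are invented for illustration.

from z3 import *

# fptr stands in for the tainted value that reaches an indirect call;
# path_cond stands in for whatever constraints taint analysis collected
fptr = BitVec('fptr', 32)
path_cond = And(UGE(fptr, 0x1000), ULT(fptr, 0x1010), fptr & 3 == 0)

s = Solver()
s.add(path_cond)
count = 0
while s.check() == sat and count < 256:    # cap for the toy example
    v = s.model().eval(fptr, model_completion=True)
    count += 1
    s.add(fptr != v)                       # block this target, re-query
print('attacker-influenced targets:', count)

A handful of feasible targets suggests the acceptable, switch-like case; a huge count suggests the attacker controls the pointer outright.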

SAT-based Model-Checking (Armin Biere)

Armin is the developer of one of the best SAT solvers, Lingeling, and his talk discussed advances in using SAT technology for model checking. During the talk he mentioned a paper by Aaron Bradley on SAT-based model checking without unrolling which might be worth checking out, but I haven’t had a chance to yet.

CryptoMiniSat — A Rough Guide (Mate Soos)

This was a great talk by Mate Soos on CryptoMiniSat, which won last year's SAT Race and is open source, and on SAT solver design generally. Mate started with a discussion of the software design philosophy behind the project and put forward that it’s better to have less optimised and complex code if you can more easily implement better ideas. Given that his solver is faster than Lingeling, which is far more difficult to comprehend, it seems that he is correct. He had some other interesting things to say on SAT solver features, emphasising regular simplification of expressions and maintaining a cache of results from unit propagation even if they are not immediately useful.

SAGE: Automated Whitebox Fuzzing using SMT solvers (Patrice Godefroid & David Molnar)

In my opinion, this was the best talk of the summer school so far. Patrice and David discussed SAGE and presented a lot of data to encourage the development of tools for this kind of testing. SAGE is built on top of previously developed MS tools for detecting crashes (AppVerifier), recording traces (Nirvana), generating constraints (TruScan) and solving constraints (Z3).

Unlike KLEE and the Bitblaze tools, the symbolic execution part of SAGE only accounts for a small fraction of the time cost. Only 1/4 of the total time is spent on symbolic execution with the remainder of their 3 week fuzzing runs spent on tracing, generating constraints and running the application under test on the fuzz files.

One interesting thing mentioned was that while most queries to the solver take only a tenth of a second, all queries are capped at 5 seconds; after that the solver is killed and the result is presumed UNSAT. This is based on the observation that they get more code coverage this way than by waiting hours for a single query to return. They backed this up with statistics showing that longer run times only very rarely led to more bugs.
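The policy is easy to reproduce with any solver that supports a time limit. A sketch using Z3's Python API (Z3 is indeed SAGE's solver per the talk, though this code is my own; the 'timeout' option is given in milliseconds):

from z3 import *

def solve_capped(constraints, cap_ms=5000):
    s = Solver()
    s.set("timeout", cap_ms)   # z3 returns 'unknown' once the cap is hit
    s.add(constraints)
    if s.check() == sat:
        return s.model()       # a new input that drives the target branch
    return None                # unsat or timed out: presume UNSAT, move on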

Some other points of note were:
– From the start SAGE was engineered to provide enough information and statistics on every part of its system that determining what it is doing and where it is succeeding/failing is possible. This is facilitated through a system called SAGAN that allows them to focus on areas needing work.
– SAGE is primarily deployed against file parsers. This is a use case where the majority of non-determinism is from the input. In other environments with different sources of non-determinism it might be more difficult to direct the application through constraint solving.
– Most OOM conditions are a result of trying to store the constraints in memory while analysing the trace, not in the solver as I would have expected. As a result, simplification and expression elimination can be necessary even before that stage.
– Most crashes seemed to be concentrated within the first 6 generations of constructed fuzz files but crashes were seen in all generations up to the mid-to-late teens. I don’t think they’ve run for any longer than that.
– SAGE was responsible for 30% of bugs found in a certain class of file parsers on Windows 7. These were bugs missed by all other testing mechanisms. I wonder how long it will be before those of us interested in bug finding will have to start looking at tools like SAGE from the point of view of discovering where they are weak as a starting point for auditing.

All in all, this presentation had hard data to support some very exciting conclusions.

Day 4

Approaches to Parallel SAT Solving (Youssef Hamadi)

I had recently been wondering what the state of the art in parallel solving is, so this was good to see. Youssef started by proposing that we are unlikely to see order of magnitude speed-ups in SAT solving based on advances in the current CDCL architecture of SAT solvers. I guess there are two ways to deal with this: one is to look at different approaches to sequential SAT solving and the other is to look at parallelism.

From 1996 to 2008 most of the approaches to parallel SAT proceeded by splitting the problem space into chunks and solving these instances in parallel with sharing of clauses under a certain size. Another approach is the portfolio approach used by parallel Z3, PLingeling and ManySAT. In this approach the same problem is attacked by several different solver instances each using a different configuration. The solver configuration is parameterized on the restart policy, polarity selection, clause learning and branching heuristics. This led to super-linear speed-ups, with combinations of these solvers performing better than the sum of their parts.
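The skeleton of a portfolio is little more than a process pool where the first configuration to produce an answer wins. The sketch below fakes the solver call with a seeded random delay; a real portfolio would launch actual solver instances varying restart policy, polarity selection, clause learning and branching heuristics as described above.

import multiprocessing as mp
import random, time

def run_config(args):
    # Stand-in worker: 'config' is just a seed here; a real worker would
    # invoke a solver on the instance with this configuration
    instance, config = args
    time.sleep(random.Random(config).uniform(0.1, 2.0))  # fake solve time
    return config, 'SAT'

def portfolio(instance, configs):
    with mp.Pool(len(configs)) as pool:
        # imap_unordered yields results as they complete: take the first
        # answer and terminate the remaining workers
        for config, result in pool.imap_unordered(
                run_config, [(instance, c) for c in configs]):
            pool.terminate()
            return config, result

if __name__ == '__main__':
    print(portfolio('problem.cnf', list(range(8))))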

One particular issue for this strategy though is how best to do clause sharing between solvers. Sharing of large clauses can be costly so one needs a strategy to figure out the upper limit on the size of clauses to share. This is complicated by the fact that the size of learned clauses changes as the solver advances through the problem. Two algorithms were discussed in this context: the first, based on TCP bandwidth-estimation algorithms, slowly increases the size of shared clauses and quickly backs off once a problem is detected; the second uses a heuristic based on variable activity to set a quality threshold on the clauses accepted from other solvers.

This portfolio mechanism works well for between 4 and 8 cores but beyond that the effect of additional cores diminishes greatly.

SAT Solving and Complexity Theory (Ryan Williams)

This was a theoretical talk that discussed some of the difficulties people have run into, not only in proving P != NP but in many problems concerning the relationships between complexity classes. Much like the talk by Shai Ben David, I’m pretty sure I’d butcher the mathematical details were I to summarise, but the take-away message was similar: the logics and proof techniques we have today for problems like this are quite insufficient; as a result people have struggled even with far weaker goals, like establishing lower bounds for NP-complete problems. It was a really interesting talk but I’ll definitely need to rewatch the video when it comes out =D

SAT/SMT Summer School 2011 Summary (Day 2)

Independence Results for the P vs. NP Question (Shai Ben David)

This was a really fun talk that was very much on the theoretical side of logic and satisfiability but with potentially very important implications. Ever since learning about early 20th century work by Hilbert, Godel, Turing etc. on the foundations of logical systems and proofs I’ve been fascinated by anything that discusses the universal limitations and capabilities of logical systems. This was the first talk I’ve seen where this kind of purely theoretical work was linked to an implication for solving technologies. The fundamental question approached in the talk was whether P != NP is an irresolvable question given the logics we have available. That is, can we prove that it is unprovable?

I would do it an injustice to try and summarise the talk (and I’d get it wrong!) but the main result was that if it were true that P is nearly equal to NP then we would not be able to prove P != NP using current lines of reasoning and tools. The interesting result for SAT solvers is that if this were the case then many of the problems we want to solve may be solvable in almost-polynomial time. The downside is that even if we could prove this, the proof probably wouldn’t help at all in building a solver that can exploit this closeness.

I’ve totally butchered the details of this talk but you can find an earlier/shorter version of it here and a paper here.

HAMPI: A Solver for String Theories (Vijay Ganesh)

Vijay’s talk was on the HAMPI solver. HAMPI contains a theory for bounded (that’s important) character strings that allows it to reason about things like whether a regular expression matches against a particular string or not. From what I gathered, it operates by converting a regular expression into a context-free grammar and then converting that grammar, along with any constraints we may wish to check, into a formula over bitvectors and checking the satisfiability of this with STP. The main target application was detecting oversights in regexes designed to catch SQL injection attempts, but Vijay also mentioned they got a 2-5x speed-up when using this solver with KLEE on applications that do a lot of string manipulation. KLEE tends to perform quite poorly on things like XML parsers so I’d love to see if specialised solvers can help out here.
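The flavour of the bounded encoding is easy to show directly over bitvectors with Z3's Python API (a hand-rolled toy that skips HAMPI's grammar pipeline): a fixed-length string becomes one 8-bit variable per character, the 'regex' becomes per-character constraints and the attack condition is just another constraint.

from z3 import *

# A bounded, 4-character string: one 8-bit bitvector per character
cs = [BitVec('c%d' % i, 8) for i in range(4)]
s = Solver()

# The sanitizer: every character must be lowercase alphanumeric
for c in cs:
    s.add(Or(And(c >= ord('a'), c <= ord('z')),
             And(c >= ord('0'), c <= ord('9'))))

# The attack condition: the string still contains a single-quote
s.add(Or([c == ord("'") for c in cs]))
print(s.check())   # unsat: no such string slips through the filter

The unsat result is the good outcome here: no alphanumeric-only string of this length can smuggle a quote past the filter.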

Modern SMT Solver Implementation (Leonardo De Moura & Nikolaj Bjorner)

This was a good talk by some of the best guys building SMT solvers, Leonardo De Moura and Nikolaj Bjorner. Both of their publication pages are worth checking out for details on building SMT solvers as well as the theoretical aspects.

They first highlighted some of the core problems in SMT solvers that affect performance: combining engines, unfairness between theory solvers and quantifiers. The most interesting part of the talk for me was on the use of abstraction/relaxation and then refinement when dealing with complex problems. For example, abstracting problems using uninterpreted functions and then checking satisfiability may reduce the complexity of the original problem. If the abstraction turns out to be UNSAT then the original is UNSAT, and if you get a SAT result you can refine the abstraction if necessary and check again. This idea of abstraction/refinement (CEGAR I guess) loops came up a lot in many different talks.
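A small example of the direction in which this works, using Z3's Python API (my own illustration): replacing bitvector multiplication with an uninterpreted function keeps only congruence (equal arguments imply equal results), which over-approximates the set of models, so an UNSAT answer on the abstraction transfers to the original without the solver ever reasoning about multiplication.

from z3 import *

x, y, a, b = BitVecs('x y a b', 32)

# Original query: two identical multiplications claimed to differ.
# Deciding this directly drags in expensive bitvector multiplication.
original = And(a == x * y, b == x * y, a != b)

# Abstraction: replace * with an uninterpreted function f
f = Function('f', BitVecSort(32), BitVecSort(32), BitVecSort(32))
abstracted = And(a == f(x, y), b == f(x, y), a != b)

s = Solver()
s.add(abstracted)
print(s.check())   # unsat by congruence, hence the original is unsat too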

Also interesting was the mention of their verifying compiler projects that do function-level verification and use contracts for called functions in the analysis rather than analysing down into them. I know the idea of contracts is used in HAVOC and discussed extensively in Thomas Ball’s publications but I’m not sure if this was the project they were referring to.

Scalable Testing/Reverse Engineering/Performance Profiling with Parallel and Selective Symbolic Execution (George Candea & Stefan Bucur)

The next talk was by the guys behind S2E and Cloud9. Cloud9 is cool in that it’s a big cluster of nodes each exploring different parts of a tree in symbolic execution. They found run times for reaching a particular code-coverage percentile dropped dramatically when going from 1 to 8 nodes and then dropped even further as they went up to 48 nodes, the total effect being a drop from 6 hours to minutes for the particular example.

S2E caught my attention a few weeks ago after reading their paper as it is designed to be a platform for writing analysis tools that leverage symbolic execution. To my knowledge it is the first system of this kind that allows a callback/event based mechanism for analysis tools and can target an entire operating system stack (it’s built on QEMU). They have some good documentation as well, which is crucial for getting users involved. When I grabbed the code a few weeks back I did notice some dramatic slowdown in execution times even when not doing symbolic execution, so that’s an issue that will have to be addressed, but this looks like it could be a great project. With the combination of docs and well thought out design I’m hoping it becomes the PIN of symbolic execution tools.

In the later part of their talk they gave some feedback to the SMT developer community with suggestions for improvements. For example, 30% of the time spent within their solver (STP) was spent in memory allocation routines. It’s something I haven’t seen a whole lot written on, but the type of work that SMT engines do is probably specific enough to require carefully crafted memory allocation algorithms. It’ll be interesting to see what comes of this in the future.

CVC3 and Applications (Clark Barrett)

Clark Barrett has been involved in SMT solver development for probably as long as anyone else and as CVC3 is the solver used internally in Immunity Debugger this talk was of particular interest. Clark mentioned that CVC4 is in development and should be seeing a release sometime this year so that’s good news. We’ve had some issues with CVC3 dealing with large array constraints and as this is being redone it should hopefully fare a bit better.

Unrelated to CVC3 really, but one of the comments at the end was striking: the commenter said they often found the theory of linear integer arithmetic, with explicit constraints to represent the bounded nature of 32-bit values, faster than the theory of bitvectors. I guess that has something to do with their application area and the kinds of constraints involved, if they’re not heavy on bit-level operations, but it was something I’ve never thought to do before.
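The encoding swap the commenter described looks roughly like the following toy comparison in Z3's Python API. Note the two encodings only coincide while no intermediate value wraps around; the bitvector version admits wrap-around solutions the integer version excludes.

from z3 import *

# Bitvector encoding: the 32-bit bound is implicit in the sort
xb, yb = BitVecs('xb yb', 32)
sb = Solver()
sb.add(xb + yb == 10, UGT(xb, 3))

# Linear integer arithmetic encoding: unbounded Ints plus explicit
# range constraints standing in for the 32-bit bound
xi, yi = Ints('xi yi')
si = Solver()
for v in (xi, yi):
    si.add(v >= 0, v < 2**32)
si.add(xi + yi == 10, xi > 3)

print(sb.check(), si.check())   # both sat, via different theory solvers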

CEGAR+SMT: Formal Verification of Control Logic in the Reveal System (Karem Sakallah)

Karem Sakallah was one of the most entertaining speakers of the day and also presented some interesting ideas behind a verification system based on model checking and Counter Example Guided Abstraction Refinement (CEGAR) that is currently being used to verify real hardware. This was the second talk of the day in which abstraction and refinement using uninterpreted functions were discussed as a way to make difficult problems more tractable (the first being the one by the MSR guys). In this talk Karem also mentioned that naive refinement is not sufficient. Typically, when a SAT result turns out to be a false positive, a constraint is generated that blocks that result from being given again and this is added to the global state; blocking one spurious state at a time refines slowly, so they post-process the generated formula and weaken the condition so that it entails an entire family of states. For example, if the condition was (a == 5 AND b == 6) they weaken it to (a < b). I have no idea how they prevent this weakening from excluding valid states so I’ll need to follow up on that tomorrow =D

A final point made was that throughout their development process they built a number of optimisations, but discovering the best combination of these optimisations was a trial-and-error process. The graph shown for this varied from combinations of optimisations that had no effect at all to some that cut the execution time to minuscule fractions of the base case.

SAT/SMT Summer School 2011 Summary (Day 1)

This week I’m attending the first SAT/SMT Summer School, organised by Vijay Ganesh and hosted at MIT. There are plenty of interesting talks, organised into three categories, so I figured it might be useful to do a brief summary of each day with some links to relevant material. I’ll update this post as the week progresses.

Introduction to Satisfiability Solving with Practical Applications (Niklas Een)

The first day of talks was a preliminary day, providing introductions to the foundations of the SAT and SMT problems and a quick history of how the research areas have progressed. Niklas Een, one of the MiniSAT developers, opened the technical part of the conference discussing the history of automated approaches to SAT. He then moved on to an overview of the algorithms that form the core of most SAT solvers. In Niklas’ opinion, the algorithms with the biggest impact on the power of SAT solvers have been those for conflict clause analysis and variable activity tracking. Perhaps an obvious point, but one worth making, was that industrial SAT problems are often trivial once you know what part of the graph to explore, which is why algorithms for locating that part are so important. One idea that was reiterated multiple times was that the development of SAT solvers is largely an experimental science and that even though we can give intuitions as to why certain algorithms are useful, in many cases nobody has a clue what the true reason is, not even the algorithms' inventors.

SMT Theory and DPLL(T) (Albert Oliveras)

The second talk was by Albert Oliveras and provided an introduction to SMT solving. After a quick discussion of the motivation for SMT solvers, Albert gave an overview of both the lazy and eager approaches. The eager approach, embodied in solvers like STP, involves directly encoding the expressions over different theories into a SAT problem and then using a SAT solver on that problem. The lazy approach, found in Z3 among others, has two distinct systems, a SAT solver and one or more theory-specific solvers, and proceeds by finding SAT solutions to a propositional skeleton and using the theory-specific solvers to check the consistency of the returned model in their domains. Albert also provided a good high level summary of how these theory-specific solvers share information. The slides from this talk can be found here and are definitely worth a look for an introduction to the topic.
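The lazy approach can be caricatured in a few lines with Z3's Python API, using one Solver instance as the 'SAT solver' over the propositional skeleton and another as the 'theory solver'; real implementations integrate the two far more tightly, but the blocking-clause loop is the essence of it.

from z3 import *

# Theory atoms abstracted to Booleans: a ~ (x < y), b ~ (y < z), c ~ (x >= z)
a, b, c = Bools('a b c')
x, y, z = Ints('x y z')
atoms = {a: x < y, b: y < z, c: x >= z}

prop = Solver()            # plays the role of the SAT solver
prop.add(And(a, b, c))     # the propositional skeleton

while prop.check() == sat:
    m = prop.model()
    # Theory literals implied by this propositional assignment
    lits = [atoms[v] if is_true(m[v]) else Not(atoms[v]) for v in atoms]
    theory = Solver()      # plays the role of the theory solver
    theory.add(lits)
    if theory.check() == sat:
        print('SAT in the theory:', theory.model())
        break
    # Theory conflict: block this assignment at the propositional level
    prop.add(Not(And([v if is_true(m[v]) else Not(v) for v in atoms])))
else:
    print('UNSAT')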

SAT Solvers for Formal Verification (Ed Clarke)

After lunch Ed Clarke continued the introductions with a basic summary of bounded model checking (BMC) and linear temporal logic (LTL). BMC has progressed over the years from using basic data structures to explicitly represent states, to binary decision diagrams, and on to SAT encodings. It’s an interesting topic and there are many parallels and crossovers between modern versions of these algorithms (and CEGAR) and symbolic execution based approaches to verification. The second part of Ed’s talk was on current research his students are doing into the use of bounded model checking on systems that have properties modelled by differential equations. I apparently didn’t pay enough attention in calculus classes because most of this went over my head =) The work is being led by Sicun Gao so if you’re interested his research page is probably helpful.

SMT-LIB Initiative (Cesare Tinelli)

Following this, Cesare Tinelli talked about the SMT-LIB and SMT-COMP initiatives. It was interesting to hear the story behind how they started. One thing of note that Cesare said is that back when this was started it was incredibly hard to tell which algorithms and extensions to SAT solvers were truly useful. This was because of unstandardised reporting and the tendency of developers to report on the benchmarks that their tools performed particularly well on. This is relevant, I think, because we are at a similar stage with symbolic execution and security-focused tools. It’s hard to tell what really works, as sometimes the tools aren’t released and when they are it can be quite difficult to recreate the authors’ results. It might be useful to have some sort of standardised benchmark/testing suite with a focus on modern problems, especially as people move into other areas like automatic exploit generation.

An interesting discussion broke out near the end on the usefulness of the SMT-LIB standard as an input mechanism for tools given that it is not designed for efficient storage and so writing large problems to disk to read into a solver isn’t feasible for many cases in symbolic execution. The solution here seems to be to simply embed a solver and use its C/OCaml/etc API but that does somewhat nullify the usefulness of the SMT-LIB language as anything more than a standard and teaching tool. It might be interesting to see a version of the language developed specifically with the goal of efficient storage and parsing though.

Constraint Solving Challenges in Dynamic Symbolic Execution (Cristian Cadar)

The final talk of the day was by Cristian Cadar on the technologies behind EXE and KLEE, with a focus on the problems of solving the types of constraints generated. Using the example of constraints over arrays, Cristian made the point that it can be useful to modify a solver’s algorithms for a particular theory in domain-specific cases. The graph presented showed a reduction from something near exponential slowdown to linear slowdown by removing the array axioms typically added for every array-based operation (and one other thing that escapes me right now!) and thus checking an under-approximation of the original formula.
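The under-approximation idea can be illustrated with Z3's Python API (my toy, not the actual EXE/KLEE transformation): pinning a symbolic index to a concrete value sidesteps the array axioms for that read, so the query gets cheaper, any SAT answer is still genuine, but UNSAT no longer proves absence.

from z3 import *

mem = Array('mem', BitVecSort(32), BitVecSort(8))
i = BitVec('i', 32)

s = Solver()
s.add(Select(mem, i) == 0x41)   # full theory-of-arrays query: i is symbolic
s.add(ULT(i, 16))
s.add(i == 4)                   # under-approximation: pin the index
print(s.check())                # sat answers remain sound; unsat would not be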

These tools are essentially mixed symbolic/concrete execution tools with a focus on bug finding, equivalence checking and so forth. KLEE is open source, which is a definite plus and worth checking out. I’d love to hear some feedback from people on its performance, as I’ve run it a few times and haven’t really gotten results that are as useful as fuzzing or manual auditing for bug finding. I think for tools such as this a large set of standardised benchmarks in modern applications could be very useful for gauging progress and focusing research efforts. Apparently Microsoft Research have found this approach very useful in their SAGE framework but it’s hard to tell if it can be used generally when you don’t have a data-center and a dedicated research group.

And that was it! Fun day and tomorrow looks awesome =)

Infiltrate 2011 Slides

The slides for most of the talks from this year's Infiltrate have gone online! Among them you can find the slide deck for Attacking the WebKit Heap, which Agustin and I gave.

The talks were awesome and I’d recommend grabbing them all. Halvar’s, titled State Spaces and Exploitation, was one of my favourites. I generally believe that the reason we in industry, as well as university-based research groups, sometimes fail to build better tools is that we don’t spend enough time reflecting on the nature of exploitation and bug finding, and as a result end up solving the wrong problems. Halvar spent about half of his talk addressing one way to think about exploits: as programming a ‘weird machine’ that lives inside a program and is unlocked by a bug trigger. It’s an interesting way to look at things and, as he mentioned, similar to how many of us think of exploit development when it comes down to it.

As far as I know, we’ll only be releasing audio for one of the talks, Nico’s keynote on Strategic Surprise, which can be found here. Also worth checking out, educational, funny and just a little bit troll-y … what more can you ask =D

“We don’t care about nulls because this ain’t no strcpy shit” – Ryan Austin, by consensus the best quote of the conference.

Finding Optimal Solutions to Arithmetic Constraints

This post is a follow on to my previous ones on automatically determining variable ranges and on uses for solvers in code auditing sessions. In the first of those posts I showed how we can use the symbolic execution engine of ID to automatically model code and then add extra constraints to determine how it restricts the state space for certain variables. In the second I looked at one use case for manual modelling of code and proving properties about it as part of C++ auditing.

In this post I’m going to talk about a problem that lies between the previous two cases. That is, manually modelling code, but using Python classes provided by ID in a much more natural way than with the SMT-LIB language, and looking for optimal solutions to a problem rather than a single one or all possible solutions.

Consider the following code, produced by the HexRays decompiler from an x86 binary. It was used frequently throughout the binary in question to limit the ranges allowed by particular variables. The first task is to verify that it does restrict the ranges of width and height as it is designed to. Its purpose is to ensure that v3 * height is less than 0x7300000, where v3 is derived from width.


int __usercall check_ovf(int width, int height,
    int res_struct)
{
  int v3; // ecx@1

  v3 = ((width + 31) >> 3) & 0xFFFFFFFC;
  *(_DWORD *)(res_struct + 12) = width;
  *(_DWORD *)(res_struct + 16) = height;
  *(_DWORD *)(res_struct + 20) = v3;
  if ( width <= 0 || height <= 0 ) // 1
  {
    *(_DWORD *)(res_struct + 24) = 0;
    *(_DWORD *)(res_struct + 28) = 0;
  }
  else
  {
    if ( height * v3 <= 0 || 120586240 / v3 <= height ) // 2
      *(_DWORD *)(res_struct + 24) = 0;
    else
      *(_DWORD *)(res_struct + 24) = malloc_wrapper(res_struct,
                                       120586240 % v3,
                                       height * v3); // 3
    *(_DWORD *)(res_struct + 28) = 1;
  }
  return res_struct;
}


If the above code reaches the line marked as 3 a malloc call will occur with height * v3 as the size argument. Can this overflow? Given the checks at 1 and 2 it’s relatively clear that this cannot occur but for the purposes of later tasks we will model and verify the code.

One of the things that becomes clear when using the SMT-LIB language (even version 2 which is considerably nicer than version 1) is that using it directly is still quite cumbersome. This is why in recent versions of Immunity Debugger we have added wrappers around the CVC3 solver that allow one to build a model of code using Python expressions (credit for this goes to Pablo who did an awesome job). This was one of the things we covered during the recent Master Class at Infiltrate and people found it far easier than using the SMT-LIB language directly.

Essentially, we have Expression objects that represent variables or concrete values, and the operators on these expressions (+, -, %, >> etc.) are overridden so that they make assertions on the solver's state. For example, if x and y are Expression objects then x + y is also an Expression object representing the addition of x and y in the current solver context. Calling the assertIt() function of any Expression object asserts that the condition must hold.

With this in mind, we can model the decompiled code in Python as follows:


import sys
import time

sys.path.append('C:\\Program Files\\Immunity Inc\\Immunity Debugger\\Libs\\x86smt')

from prettysolver import Expression
from smtlib2exporter import SmtLib2Exporter

def check_sat():
    img_width = Expression("img_width", signed=True)
    img_height = Expression("img_height", signed=True)
    tmp_var = Expression("tmp_var", signed=True)

    const = Expression("const_val")
    (const == 0x7300000).assertIt()
    (img_width > 0).assertIt()
    (img_height > 0).assertIt()
    tmp_var = ((img_width + 31) >> 3) & 0xfffffffc
    (img_height * tmp_var > 0).assertIt()
    (const / tmp_var > img_height).assertIt()

    expr = (((tmp_var * img_height) &
            0xffffffff00000000) != 0)  # 1

    s = SmtLib2Exporter()
    s.dump_to_file(expr, 'test.smt2') # 2
    # After this we can check with z3 /smt2 /m test.smt2
    # Alternatively we can use expr.isSAT which calls CVC3 but it
    # is a much slower solver

    start_time = time.time()
    if expr.isSAT():
        print 'SAT'
        print expr.getConcreteModel()
    else:
        print 'UNSAT'

    print 'Total run time: %d seconds' % (time.time() - start_time)

if __name__ == '__main__':
    check_sat()


The above code (which can be run from the command-line completely independently of Immunity Debugger) models the parts of the decompiled version that we care about. The added condition, marked as 1, checks for integer overflow by performing a 64-bit multiplication and then checking if the upper 32 bits are 0 or not. The first thing to note about this code is that it models the decompiled version quite naturally and is far easier to write and understand than the SMT-LIB alternative. This makes this kind of approach to analysing code much more tractable and means that once you are familiar with the API you can model quite large functions in very little time. For example, asserting that the condition if ( height * v3 <= 0 || 120586240 / v3 <= height ) must be false translates to the following, which is syntactically quite close to the C code:


tmp_var = ((img_width + 31) >> 3) & 0xfffffffc
(img_height * tmp_var > 0).assertIt()
(const / tmp_var > img_height).assertIt()


Checking if the function does in fact prevent integer overflow is then simple.

Using the solver to check if an overflow is possible on the argument to malloc

So, modulo modelling errors on our part, the check is safe and prevents an overflow on the size argument to malloc*. So what now? Well, in the case of this particular code-base an interesting behaviour appeared later in the code if the product of width and height was sufficiently large and the above function succeeded in allocating memory. That is, the height and width were small enough that height * v3 was less than 0x7300000, but multiplication with other non-constants later in the code could then overflow. The question we then want to answer is: what is the maximum value of width * height that can be achieved that also passes the above check?

Solving Optimisation Problems with Universal Quantification**

This problem is essentially one of optimisation. There are many assignments to the input variables that will pass the overflow check but we are interested in those that maximise the resulting product of width and height. Naturally this problem can be solved on paper with relative ease for small code fragments, but with longer, more complex code an automated approach quickly becomes more attractive.

The first thing to note is that at the line marked as 2 in the above Python code we used a useful new feature of ID, the SmtLib2Exporter***, to dump the model constructed in CVC3 out to a file in SMT-LIB v2 syntax. This is useful for two reasons, firstly we can use a solver other than CVC3, e.g. Z3 which is much faster for most problems, and secondly we can manually modify the formula to include things that our Python wrapper currently doesn’t have, such as universal quantification.

Universal quantification, normally denoted by the symbol ∀ and the dual to existential quantification, is used to apply a predicate to all members of a set, e.g. ∀x ∈ N.P(x) states that for all elements x of the natural numbers some predicate P holds. Assume that the conditions of the integer overflow check are embodied in a function called sat_inputs and M is the set of natural numbers modulo 2^32; then the formula we want to check is sat_inputs(x, y) ∧ (∀ a, b ∈ M. sat_inputs(a, b) => x * y >= a * b). That is, we consider x and y to be a solution if they satisfy the conditions of sat_inputs and their product x * y is greater than or equal to the product of any other two values a and b that also satisfy sat_inputs. This property is encoded in the function is_largest in the following SMT-LIB v2 code. The rest of the code is dumped by the previous Python script so checking this extra condition was less than 5 lines of work for us. The details of sat_inputs have been excluded for brevity; it simply encodes the semantics of the integer overflow checking code.


(declare-funs ((img_width BitVec[32]) (img_height BitVec[32])))

(define-fun sat_inputs ((img_width BitVec[32]) (img_height BitVec[32])) Bool
    ; Model of the code goes here
)

(define-fun is_largest ((i BitVec[32]) (j BitVec[32])) Bool
    (forall ((a BitVec[32]) (b BitVec[32]))
        (implies (sat_inputs a b)
            (bvsge (bvmul i j) (bvmul a b)))))

(assert (and
    (sat_inputs img_width img_height)
    (is_largest img_width img_height)))

(get-info model)
Finding the maximum product of height and width

Running this through Z3 takes 270 seconds (using universal quantification results in a significant increase in the problem size) and we are provided with an assignment to the height and width variables that not only passes the checks in the code but is guaranteed to provide a maximal product. The end result is that with these two inputs height * width is 0x397fffe0, which is guaranteed to be the maximal product, and height * (((width + 31) >> 3) & 0xfffffffc) is 0x72ffffc which, as you would expect, is less than 0x7300000 and therefore satisfies the conditions imposed by the code. Maximising or minimising other variables or products is similarly trivial, although for such a small code snippet not particularly interesting (even maximising the product of height and width can be done in your head without a solver pretty easily, but instructive examples aren’t meant to be rocket science).

This capability becomes far more interesting on larger or more complex functions and code paths. In such cases the ability to use a solver as a vehicle for precisely exploring the state space of a program can mean the difference between spotting a subtle bug and missing out.

By its nature code auditing is about tracking state spaces. The task is to discover those states implied by the code but not considered by the developer. In the same way that one may look at a painting and discover meaning not intended by the artist, an exploit developer will look at a program and discover a shadow-program, not designed or purposefully created, but in existence nonetheless. In places this shadow-program is thick, it is easily discovered, has many entry points and can be easily leveraged to provide exploit primitives. In other places, this shadow-program clings to the intended program and is barely noticeable. An accidental decrement here, an off-by-one bound there. Easy to miss and perhaps adding few states to the true program. It is from these cases that some of the most entertaining exploits derive. From state spaces that are barren and lacking in easily leveragable primitives. Discovering such gateway states, those that move us from the intended program to its more enjoyable twin, is an exercise in precision. This is why it continues to surprise me that we have such little tool support for truly extending our capacity to deal with massive state spaces in a precise fashion.

Of course we have some very useful features for making a program easier to manually analyse, among them the HexRays decompiler and IDA’s various features for annotating and shaping a disassembly, as well as plugin architectures for writing your own tools with Immunity Debugger, IDA and others. What we lack is real, machine-driven assistance in determining the state space of a program and, dually, in providing reverse engineers with function and basic block level information on how a given chunk of code affects this state space.

While efforts still need to be made to develop and integrate automation technologies into our workflows I hope this post, and the others, have provided some motivation to build tools that not only let us analyse code but that help us deal with the large state spaces underneath.

* As a side note, Z3 solves these constraints in about half a second. We hope to make it our solving backend pretty soon for obvious reasons.
** True optimisation problems in the domain of satisfiability are different and usually fall under the heading of MaxSAT and OptSAT. The former deals with maximising the number of satisfied clauses while the latter assigns weights to clauses and looks for solutions that minimise or maximise the sum of these weights. We are instead dealing with optimisation within the variables of the problem domain. OptSAT might provide interesting solutions for automatic gadget chaining though and is a fun research area.
*** This will be in the next release which should be out soon. If you want it now just drop me a mail.

Thanks to Rolf Rolles for initially pointing out the usefulness of Z3’s support for universal/existential quantification for similar problems.

Exploit Necromancy in TCMalloc – Reviving the 4-to-N Byte Overflow Primitive with Insert to FreeList[X]

A couple of months back while looking into a heap overflow in Chrome* I found myself poking around in the internals of TCMalloc. What I found there was pretty interesting. Perhaps the implementers have been watching Twitter and decided to take a proactive approach to security researchers moaning about the difficulties of modern heap exploitation. Maybe they just got caught up in making a blazing fast allocator and couldn’t bring themselves to slow it down with nasty things like integrity checks. Either way, TCMalloc provides a cosy environment for heap exploits to thrive and is worth fully exploring to discover the possibilities it provides.

At Infiltrate next weekend, Agustin Gianni and I will be discussing (among other things) TCMalloc from the point of view of exploitation. Essentially our research into TCMalloc wasn’t so much ‘research’ as it was vulnerability necromancy. In the name of speed TCMalloc has forsaken almost any type of sanity checks you might think of. There are annoyances of course but they are artefacts of the algorithms rather than coherent attempts to detect corruption and prevent exploitation. As a result we can revive a number of primitives from heap exploits past as well as some unique to TCMalloc.

Like most custom allocators, TCMalloc operates by requesting large chunks of memory from the operating system via mmap, sbrk or VirtualAlloc and then uses its own methods to manage these chunks on calls to malloc, free, new, delete etc. Agustin wrote a high level overview here and Google’s project page is also useful for gaining a general idea of its workings. In this post I’m not going to go into the exact details of how the memory is managed (come see our presentation for that!) but instead briefly discuss how thread local free lists function and one useful exploit primitive we gain from it.

ThreadCache FreeList allocation
The front end allocator for TCMalloc (with TC standing for ThreadCache) consists of kNumClasses** per-thread free lists for allocations of size < 32768. The FreeLists store chunks of the same size. The bucket sizes are generated in SizeMap::Init and allocations that don’t map directly to a bucket size are simply rounded up to the next size. For allocations larger than 32768 the front end allocator is the PageHeap which stores Spans (runs of contiguous pages) and is not thread specific. The following code shows the interface to the allocator through a call to malloc***.

3604 static ALWAYS_INLINE void* do_malloc(size_t size) {
3605   void* ret = NULL;
3611   // The following call forces module initialization
3612   TCMalloc_ThreadCache* heap = TCMalloc_ThreadCache::GetCache();
3621   if (size > kMaxSize) {
3622     // Use page-level allocator
3623     SpinLockHolder h(&pageheap_lock);
3624     Span* span = pageheap->New(pages(size));
3625     if (span != NULL) {
3626       ret = SpanToMallocResult(span);
3627     }
3628   } else {
3629     // The common case, and also the simplest.  This just pops the
3630     // size-appropriate freelist, afer replenishing it if it's empty.
3631     ret = CheckedMallocResult(heap->Allocate(size));
3632   }

At line 3612 the ThreadCache pointer for the current thread is retrieved. This object contains a number of thread specific details but the one we are interested in is the FreeList array. A FreeList object contains some metadata and a singly-linked list of free chunks that are managed by very simple primitives such as SLL_Pop, SLL_Push etc. If the allocation size is less than 32768 then the following code is called to retrieve a chunk of the required size.

2888 ALWAYS_INLINE void* TCMalloc_ThreadCache::Allocate(size_t size) {
2889   ASSERT(size <= kMaxSize);
2890   const size_t cl = SizeClass(size);
2891   FreeList* list = &list_[cl];
2892   size_t allocationSize = ByteSizeForClass(cl);
2893   if (list->empty()) {
2896   }
2897   size_ -= allocationSize;
2898   return list->Pop();
2899 }

The correct FreeList is retrieved at line 2891 and presuming that it is not empty we retrieve a chunk by calling the Pop method. This results in a call to SLL_Pop with the address of the pointer to the head of the free list.

 761 static inline void *SLL_Next(void *t) {
 762   return *(reinterpret_cast<void**>(t));
 763 }
 774 static inline void *SLL_Pop(void **list) {
 775   void *result = *list;
 776   *list = SLL_Next(*list);
 777   return result;
 778 }

Insert to FreeList[X] – Reviving the 4-to-N byte Overflow Primitive
The effect of SLL_Pop is, via SLL_Next, to follow the list head pointer, retrieve the DWORD stored there and then make that the new list head. Notice there are no checks of any kind on the value of the *list pointer. When I first saw a crash within TCMalloc it was at line 762 in the SLL_Next function. The value of t was 0x41414141, which I had previously overflowed an allocated chunk with. What this means is that on that call to SLL_Pop the list head was an address that I controlled. How could this happen? Consider the following FreeList layout:

FreeList example

The above FreeList has 3 chunks. For simplicity assume that the addresses of A and B are contiguous****. On the first call to malloc chunk A is returned to the application.

FreeList allocation

Assume an overflow then occurs when the application writes data to A and B is partially corrupted.

Overflow of Chunk A into Chunk B

The first DWORD of B is the pointer to the next chunk in the FreeList and so we have corrupted the singly-linked list. The first chunk is now B and, as we control the first DWORD of this chunk, we control what the next chunk in the FreeList will be. The chunk C is effectively no longer part of the FreeList. The next allocation of this size will then return chunk B (presuming no free calls on chunks of this size within this thread have occurred; free chunks are prepended to the head of the list) and it will give us control of the list head pointer when *list = SLL_Next(*list) executes.

Allocation from a corrupted FreeList

As we control the DWORD at **list we control what the new list head is. In my initial TCMalloc crash this set the list head to 0x41414141. One more allocation of this size will then call SLL_Pop with a controlled list head pointer. The crash I encountered was when SLL_Next attempted to read the next pointer at 0x41414141.

The important part of all this is that by controlling the list head pointer we control the address of the chunk returned and can give back any usable memory region we want to the application. So after one more allocation the situation is as follows:

A memory region of our choosing is handed back to the application

The result of this allocation is a pointer which we control. It is important to note that the first DWORD at this address then becomes the new list head. For a clean exploit it is desirable for either this DWORD to be 0x0 or the FreeList's length_ attribute to be 0 after returning our controlled pointer. If we cannot force either of these conditions then future allocations of this size will follow this DWORD, which may not be under our control, and may result in instability in the program before we gain code execution. Another point to note is that if a free call occurs on a pointer that has not previously been noted by TCMalloc as part of a memory region under its control then it will raise an exception that will likely terminate the program. This is important to keep in mind if the memory location we are inserting into the free list is not controlled by TCMalloc. If this is the case then we have to prevent this address from being passed to free before we gain control of the program's execution.

To gain code execution from this primitive it is necessary to find the address of some useful structure to hand back to the application and then have it overwrite that structure with data that we control. For modern systems this will require a memory leak of some kind to find such an address reliably (although heap spraying of useful objects might be an interesting alternative if we can write over one of these objects and then trigger a call through a function pointer or something similar). Despite this requirement, we have a functioning exploit primitive from a heap overflow and one that has not required us to deal with any heap integrity checks or protections. This is not a bad place to be in given the effort that has gone into securing the default allocators for most operating systems. The primitive basically gives the same effect as the Insert to Lookaside technique that was popular on Windows XP but has since been killed on Windows Vista and 7.
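The whole sequence is short enough to simulate in a few lines of Python. This is a toy model of the list walk only; the addresses are made up and the 0x41414141 value mirrors the crash described above.

# memory maps an address to the DWORD stored there (the 'next' pointer)
memory = {
    0xA000: 0xB000,    # chunk A -> B
    0xB000: 0xC000,    # chunk B -> C
    0xC000: 0x0,       # chunk C, end of list
}
head = 0xA000

def sll_pop():
    global head
    result = head
    # SLL_Next in TCMalloc: a raw, unchecked read of *head
    head = memory.get(head)
    return result

chunk_a = sll_pop()             # A is handed to the application
memory[0xB000] = 0x41414141     # overflow from A corrupts B's next pointer
sll_pop()                       # returns B; the list head is now controlled
print(hex(sll_pop()))           # 0x41414141 handed back as a 'chunk'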

In this post I’ve given a quick overview of how we can convert a 4-byte overflow into one that overflows N bytes in an application using TCMalloc. This type of technique has been seen in various places in the past, most notably when overflowing chunks in the Windows XP Lookaside list. It has been largely killed elsewhere but due to the lack of integrity checks in TCMalloc it lives on in a relatively easy to use fashion.

At Infiltrate Agustin and I will discuss a variety of topics related to exploiting WebKit and TCMalloc in much greater detail. The Insert to FreeList[X] technique is but one example of a simple and old heap exploitation tactic that has been revived. Many other exploitation vectors exist and with a grasp of how the allocator works the possibilities are interesting and varied. In our presentation we will describe some of these techniques and also the details of how TCMalloc manages the processes of allocation and deallocation such as is relevant to manipulating the heap for exploitation.

* TCMalloc is not the allocator used by the Chrome Javascript engine V8. That has its own allocator that can be found in the src/v8/src/ directory of the Chrome source. Have a look at spaces.[cc|h] and to get started.
** For WebKit in Safari kNumClasses is 68 but for Chrome it is 61. There seem to be some other differences between the various uses of TCMalloc that are worth keeping in mind. For example, in Chrome the maximum free list length is 8192 whereas in Safari it is 256.
*** We’re looking at the TCMalloc code embedded in WebKit revision 79746 Source/JavaScriptCore/wtf/FastMalloc.cpp. It’s effectively the same as that in the Chrome release and elsewhere, modulo the previous point.
**** FreeLists are created from Spans which are contiguous pages in memory that get subdivided to give the FreeList chunks. The chunks are prepended to the list during creation though so they end up in reverse address order. This means that if A, B are two contiguous addresses then the FreeList will initially be ordered as B -> A. In order to get the desired ordering we need to rearrange the fresh FreeList by allocating B, allocating A, free’ing B and then free’ing A. This will result in the ordering A -> B and another allocation from this FreeList will give back the address A.

Heap Scripts for TCMalloc with GDB’s Python API

When writing heap exploits it’s necessary to be able to view the heap state during debugging. As part of our work on TCMalloc, Agustin and I have written up some heap scripts for Immunity Debugger and GDB that make the process of tracking what TCMalloc is up to quite easy. There are quite a few scripts that do similar things for ID and different heap implementations so in this post I’m going to focus on the GDB side of things. Recent versions of GDB contain an embedded Python interpreter that provides easy access to the internals of an application being debugged. Being able to write scripts in Python to automate debugging tasks is really useful and hopefully the GDB guys will maintain the project.

The scripts I will be discussing in this post can be found here. Copy the gdbinit file to your home directory, or to the directory you will be launching gdb from, and rename it to .gdbinit. Modify the pyscripts_dir variable to point to the directory where you extracted the scripts. To load the commands run source /path/to/scripts/ or source /path/to/scripts/. This will make the commands dump_free_lists and search_free_lists available within gdb.

The data structures used by TCMalloc are relatively simple so the scripts have little to do besides reading memory and walking lists. Each thread in an application using TCMalloc has its own TCMalloc_ThreadCache object which contains information on the FreeLists for that specific thread. The FreeLists themselves are ThreadCache_FreeList objects, each of which contains a list_ attribute that points to the head of a singly linked list of free chunks. We can access the TLS for a given thread via pthread_self() in GDB and find the thread's TCMalloc_ThreadCache pointer at offset -4 from there. The file contains abstractions for the ThreadCache_FreeList and TCMalloc_ThreadCache structures. In we can see the initialisation of the ThreadCache abstraction via the following code:

tls = gdb.parse_and_eval("pthread_self()")
# threadlocal_heap is at TLS - 4
threadlocal_heap = buf_to_le(
    self.cur_proc.read_memory(tls - 4, DWORD_SIZE))
tc = ThreadCache(self.cur_proc, threadlocal_heap)

The ThreadCache instance then provides access to the FreeLists for the current thread through getFreeLists. The size classes that TCMalloc uses to bucket chunks together for allocations less than 32768 in size are generated at run-time. To view them run outside of gdb with no arguments.

The number of size classes, and hence the number of FreeLists per thread, depends on a constant that may change between applications embedding TCMalloc. For Chrome it is 61 and hence we have 61 different FreeLists per thread. If we run dump_free_lists within gdb with no arguments it will dump all free lists, their lengths and the chunk size that each list is responsible for.

TCMalloc FreeLists within Chrome on Linux

The getFreeLists function will return each of these FreeLists as a FreeList object. This object contains a list_ptr attribute that corresponds to the list_ pointer found in the TCMalloc source for each FreeList. It points to a singly linked list of free chunks that we can iterate over using the getChunks function of a FreeList object.
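Walking a list from GDB's Python API needs nothing more than a read-DWORD loop. The following condensed sketch of what getChunks does assumes a 32-bit little-endian target, and the gdb module is of course only importable inside gdb's embedded interpreter.

import struct
import gdb

def walk_free_list(list_ptr, max_chunks=4096):
    # Follow the singly linked list: the first DWORD of each free chunk
    # is the address of the next chunk; 0 terminates the list
    inferior = gdb.selected_inferior()
    chunks = []
    cur = int(list_ptr)
    while cur and len(chunks) < max_chunks:
        chunks.append(cur)
        cur = struct.unpack('<I', inferior.read_memory(cur, 4))[0]
    return chunks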

If we run dump_free_lists with one or more space separated addresses it will treat them as pointers to ThreadCache_FreeList structures and dump them accordingly.

The chunks in a given FreeList

The scripts archive also contains the search_free_lists command which will search the FreeLists to see if an address lies within a chunk on any of them. After Infiltrate we will release the rest of the scripts for ID and GDB but these should be enough to get you started with TCMalloc and GDB's Python API.