The slides for the summer school have started to go online, so for the remaining days I’ll just give a quick summary of the parts I thought were particularly interesting, or of comments that were made but are not in the slides.

**Day 3**

**BitBlaze & WebBlaze: Tools for computer security using SMT Solvers (Dawn Song & Prateek Saxena)**

The first thing of note from this talk was a brief discussion on selecting paths for analysis during symbolic/concrete execution. Anyone who has used KLEE knows that the lack of sane path selection mechanisms is a significant drawback, so it was good to see one of the talks on this type of system discuss it. The methods presented used dataflow and control-flow distances to functions of interest.
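
As a rough illustration of the idea, here is a toy sketch in Python of distance-guided path selection; the CFG, the node names and the scoring are my own invention for illustration, not anything from the talk:

```python
# Toy sketch (my own) of distance-guided path selection: score each
# pending execution state by the shortest control-flow distance from its
# current basic block to a function of interest, and always expand the
# closest state first.
import heapq
from collections import deque

def distances_to(cfg, target):
    """BFS over reversed CFG edges: distance from every node to `target`."""
    rev = {}
    for u, vs in cfg.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    dist = {target: 0}
    q = deque([target])
    while q:
        v = q.popleft()
        for u in rev.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

# Hypothetical call/control-flow graph with an interesting function 'target'.
cfg = {'main': ['parse'], 'parse': ['check', 'fail'], 'check': ['target']}
dist = distances_to(cfg, 'target')

# Pending states, prioritised by distance; unreachable states sort last.
worklist = [(dist.get(node, float('inf')), node) for node in cfg]
heapq.heapify(worklist)
print(heapq.heappop(worklist))  # expands the state nearest 'target' first
```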

An interesting problem tackled later in the talk was how to distinguish due from undue influence of tainted data over control flow. For example, it might be acceptable for user data to taint a value used in a switch statement that selects a function to run, but it’s unlikely to be very good if it can taint a function pointer directly. Four different methods were presented for distinguishing these cases, the simplest being point-by-point exhaustion: using a solver to enumerate the number of possible target addresses in the address space (sketched below). More complex probabilistic and information-theoretic approaches were also discussed and are elaborated on in their paper. It would be nice to see some more experimental data for these more advanced methods though, as the evaluation is limited to 3 vulnerabilities and 3 benign cases.
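
To make the point-by-point exhaustion idea concrete, here is a minimal sketch using Z3’s Python bindings. The tainted jump-target expression is a toy stand-in of my own, not an example from the talk:

```python
# Sketch of point-by-point exhaustion: count how many distinct values a
# tainted jump target can take by repeatedly asking the solver for a new
# model and blocking each answer. A small feasible set (e.g. a bounded
# switch table) suggests "due" influence; a huge one suggests the
# attacker can steer control flow almost anywhere.
from z3 import BitVec, Solver, sat

def count_targets(solver, target, limit=256):
    """Enumerate satisfying values of `target`, up to `limit`."""
    found = []
    while len(found) < limit and solver.check() == sat:
        v = solver.model().eval(target, model_completion=True)
        found.append(v)
        solver.add(target != v)   # block this value, ask for another
    return found

tainted = BitVec('tainted', 32)           # attacker-controlled input
target = 0x401000 + (tainted & 0x3) * 8   # toy bounded jump-table index

s = Solver()
targets = count_targets(s, target)
print(f"{len(targets)} feasible jump targets")  # 4 here: bounded => "due"
```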

**SAT-based Model-Checking (Armin Biere)**

Armin is the developer of one of the best SAT solvers, Lingeling, and his talk discussed advances in using SAT technology for model checking. During the talk he mentioned a paper by Aaron Bradley on SAT-based model checking without unrolling, which might be worth checking out, but I haven’t had a chance to yet.

**CryptoMiniSat — A Rough Guide (Mate Soos)**

This was a great talk by Mate Soos on CryptoMiniSat, which won last year’s SAT Race and is open source, and on SAT solver design in general. Mate started with a discussion of the software design philosophy behind the project and put forward that it’s better to have less optimised and less complex code if it means you can more easily implement better ideas. Given that his solver is faster than Lingeling, which is far more difficult to comprehend, it seems that he is correct. He had some other interesting things to say on SAT solver features, emphasising regular simplification of expressions and maintaining a cache of results from unit propagation even if they are not currently useful.

**SAGE: Automated Whitebox Fuzzing using SMT solvers (Patrice Godefroid & David Molnar)**

In my opinion, this was the best talk of the summer school so far. Patrice and David discussed SAGE and presented a lot of data to encourage the development of tools for this kind of testing. SAGE is built on top of previously developed Microsoft tools for detecting crashes (AppVerifier), recording traces (Nirvana), generating constraints (TruScan) and solving constraints (Z3).

Unlike KLEE and the BitBlaze tools, the symbolic execution part of SAGE accounts for only a fraction of the total time cost. Only a quarter of the total time is spent on symbolic execution, with the remainder of their three-week fuzzing runs spent on tracing, generating constraints and running the application under test on the fuzz files.

One interesting thing mentioned was that while most queries to the solver take only a tenth of a second, all queries are capped at 5 seconds; after that the solver is killed and the result is presumed to be UNSAT. This is based on the observation that they get more code coverage this way than by waiting hours for a single query to return. They backed this up with statistics showing that longer run times only very rarely led to more bugs.
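
As a rough illustration, the policy could look something like the following Z3 snippet. The wrapper function and the constraint are my own invention; only the 5-second cap and the treat-timeout-as-UNSAT behaviour come from the talk:

```python
# Minimal sketch (my own, not SAGE's code) of the "cap and presume UNSAT"
# policy: give Z3 a 5-second budget per query and treat a timeout
# ("unknown") the same as UNSAT, i.e. skip that path and move on.
from z3 import Solver, BitVec, sat

def check_with_cap(constraints, timeout_ms=5000):
    s = Solver()
    s.set("timeout", timeout_ms)   # per-query time budget in milliseconds
    s.add(constraints)
    r = s.check()
    # r is one of sat / unsat / unknown; a timeout comes back as unknown
    # and is presumed UNSAT so the fuzzer keeps making progress.
    return r == sat

x = BitVec('x', 32)
if check_with_cap([x * 3 == 42]):
    print("path feasible, generate a new test input")
```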

Some other points of note were:

– From the start SAGE was engineered to provide enough information and statistics on every part of its system that it is possible to determine what it is doing and where it is succeeding or failing. This is facilitated through a system called SAGAN that allows them to focus on areas needing work.

– SAGE is primarily deployed against file parsers. This is a use case where the majority of non-determinism comes from the input. In other environments, with different sources of non-determinism, it might be more difficult to direct the application through constraint solving.

– Most OOM conditions are a result of trying to store the constraints in memory while analysing the trace, not in the solver as I would have expected. As a result, simplification and expression elimination can be necessary even before that stage.

– Most crashes seemed to be concentrated within the first 6 generations of constructed fuzz files, but crashes were seen in all generations up to the mid-to-late teens. I don’t think they’ve run for any longer than that.

– SAGE was responsible for 30% of the bugs found in a certain class of file parsers on Windows 7. These were bugs missed by all other testing mechanisms. I wonder how long it will be before those of us interested in bug finding will have to start looking at tools like SAGE to discover where they are weak, as a starting point for auditing.

All in all, this presentation had hard data to support some very exciting conclusions.

**Day 4**

**Approaches to Parallel SAT Solving (Youssef Hamadi)**

I had recently been wondering what the state of the art in parallel solving is, so this was good to see. Youssef started by proposing that we are unlikely to see order-of-magnitude speed-ups in SAT solving from further advances in the current CDCL architecture of SAT solvers. I guess there are two ways to deal with this: one is to look at different approaches to sequential SAT solving and the other is to look at parallelism.

From 1996 to 2008 most approaches to parallel SAT proceeded by splitting the problem space into chunks and solving these instances in parallel, with sharing of learned clauses under a certain size. Another approach is the portfolio approach used by parallel Z3, PLingeling and ManySAT. Here the same problem is attacked by several different solver instances, each using a different configuration, parameterized on the restart policy, polarity selection, clause learning and branching heuristics. This has led to super-linear speed-ups, with combinations of these solvers performing better than the sum of their parts.
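
Here is a toy sketch of the portfolio idea in Python, with Z3’s random_seed parameter standing in for the richer configurations the real portfolios vary; the input file name is hypothetical and this is my own illustration, not code from any of those solvers:

```python
# Toy portfolio (my sketch, not ManySAT/PLingeling internals): run the
# same formula under several solver configurations in parallel and take
# whichever finishes first. Here the "configuration" is just Z3's
# random_seed, standing in for the restart/polarity/branching/learning
# parameters the real portfolios vary.
import multiprocessing as mp
from z3 import Solver, parse_smt2_file

def worker(smt2_path, seed, results):
    s = Solver()
    s.set("random_seed", seed)          # vary the configuration per instance
    s.add(parse_smt2_file(smt2_path))   # each process re-reads the problem
    results.put((seed, str(s.check())))

def portfolio(smt2_path, seeds=(1, 2, 3, 4)):
    results = mp.Queue()
    procs = [mp.Process(target=worker, args=(smt2_path, sd, results))
             for sd in seeds]
    for p in procs:
        p.start()
    seed, answer = results.get()   # first configuration to finish wins
    for p in procs:
        p.terminate()              # the rest are now redundant
    return seed, answer

if __name__ == "__main__":
    print(portfolio("problem.smt2"))   # hypothetical input file
```

Note the sketch omits the clause sharing between instances that the real portfolio solvers rely on; that is the subject of the next paragraph.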

One particular issue for this strategy though is how best to do clause sharing between solvers. Sharing large clauses can be costly, so one needs a strategy to figure out the upper limit on the size of clauses to share. This is complicated by the fact that the size of learned clauses changes as the solver advances through the problem. Two algorithms were discussed in this context: the first was based on TCP bandwidth calculation algorithms, slowly increasing the size of shared clauses and then quickly backing off once a problem is detected; the second used a heuristic based on variable activity to set a quality threshold on clauses accepted from other solvers.
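
A minimal sketch of how such a TCP-style control loop might look; this is my own reconstruction of the idea, not code from the talk, and the trigger conditions are placeholders:

```python
# Sketch of a TCP-style (additive increase, multiplicative decrease)
# control loop for clause sharing: grow the size limit on clauses we
# export to other solvers while sharing stays cheap, and back off
# quickly when a problem (e.g. too much sharing traffic) is detected.
class ShareLimit:
    def __init__(self, start=8, step=1, backoff=0.5, floor=2):
        self.limit = start
        self.step, self.backoff, self.floor = step, backoff, floor

    def on_quiet_period(self):
        # No problem observed: slowly admit larger clauses.
        self.limit += self.step

    def on_problem(self):
        # Sharing became too costly: back off quickly.
        self.limit = max(self.floor, int(self.limit * self.backoff))

    def should_share(self, clause):
        # Export only clauses at or under the current size limit.
        return len(clause) <= self.limit
```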

This portfolio mechanism works well for between 4 and 8 cores, but beyond that the effect of additional cores is greatly diminished.

**SAT Solving and Complexity Theory (Ryan Williams)**

This was a theoretical talk that discussed some of the difficulties people have run into, not only in proving P != NP but in many problems concerning the relationships between complexity classes. Much like the talk by Shai Ben David, I’m pretty sure I’d butcher the mathematical details were I to summarise, but the take-away message was similar: the logics and proof techniques we have today for reasoning about problems like this are quite insufficient; as a result, people have struggled to do even far weaker reasoning, like establishing lower bounds on NP-complete problems. It was a really interesting talk but I’ll definitely need to rewatch the video when it comes out =D