When automatically generating an exploit, one of the core tasks is to find a hospitable buffer in which we can store our shellcode, so that at the vulnerability point we have somewhere to jump to. Some tools (like Byakugen) do this by scanning process memory and searching for sequences of bytes from the input (or slight variations thereof, such as the upper/lower-case conversion of said input). There are a number of problems with this. First of all, you can easily miss perfectly usable buffers: if all user input is passed through a function that adds 4 to every byte, then the buffer will not match the raw input and will be missed, even though it is still usable for shellcode. The second major problem is that if you do find a suitable buffer this way, you have no guarantee that the values you’re changing weren’t used in a path condition on the way to the vulnerability point; replacing this input with your shellcode could result in the vulnerability not being triggered.
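To make the first weakness concrete, the following is a minimal sketch (in Python, with illustrative names of my own, not any particular tool’s code) of this naive scanning approach, and of how a buffer transformed by something as simple as adding 4 to every byte slips past it:

def find_copies(memory: bytes, user_input: bytes):
    # Return offsets where user_input, or a simple case-swapped
    # variant of it, appears verbatim in a memory snapshot.
    variants = {user_input, user_input.upper(), user_input.lower()}
    hits = []
    for offset in range(len(memory) - len(user_input) + 1):
        if memory[offset:offset + len(user_input)] in variants:
            hits.append(offset)
    return hits

user_input = b"AAAABBBB"
transformed = bytes((b + 4) & 0xFF for b in user_input)  # add 4 to every byte
memory = b"\x00" * 16 + user_input + b"\x00" * 16 + transformed

print(find_copies(memory, user_input))  # [16] -- the transformed copy at
                                        # offset 40 is missed, though it is
                                        # still a usable shellcode buffer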
I’m addressing the first problem by using dynamic data flow analysis to track all user input as it moves through registers and process memory, and the second problem by noting all path conditions that operate on such ‘tainted’ data. Already we can see a division between potential shellcode buffers: there are bytes that are used in path conditions and bytes that are not, and in both cases there are bytes tainted by simple assignment instructions, like mov, versus bytes tainted as a result of arithmetic operations, like add.
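To illustrate the book-keeping involved, here is a toy sketch of the shadow-map idea: a map from registers and memory addresses to the sets of input offsets that taint them. The structure and names are my own illustration, not the actual implementation, which instruments real instructions:

taint = {}  # register name or memory address -> set of tainting input offsets

def taint_input(addr, input_offset):
    # A byte read directly from user input taints its destination.
    taint[addr] = {input_offset}

def propagate(dst, *srcs):
    # mov/add/etc.: the destination is tainted by the union of its sources.
    combined = set()
    for src in srcs:
        combined |= taint.get(src, set())
    if combined:
        taint[dst] = combined
    else:
        taint.pop(dst, None)  # overwriting with untainted data clears taint

taint_input(0x1000, 10)   # byte 10 of user input lands at 0x1000
propagate("eax", 0x1000)  # mov eax, [0x1000]
propagate("ecx", "eax")   # mov ecx, eax
propagate(0x1234, "ecx")  # mov [0x1234], ecx
print(taint[0x1234])      # {10}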
In the case of a buffer consisting only of bytes that are not used in path conditions and are tainted via simple assignment, we can simply swap out those bytes for our shellcode. But what about the other cases? The approach I am taking is to record all tainting operations on a byte, as well as all path conditions, so that at a given program location we can express the potential values of a given byte via an algebraic formula. For example, assume that in the following eax is tainted by the 10th byte of user input:
mov ecx, eax
add ecx, 4
mov [1234], ecx
At the end of this block we can represent the constraints on location ‘1234’ by the formula t1 := in10 & t2 := t1 + 4 & mem1234 := t2. If we then want to discover whether there exists an assignment to the 10th input byte that would result in location ‘1234’ being equal to 0x41, we can conjoin this additional constraint with the previous formula to give t1 := in10 & t2 := t1 + 4 & mem1234 := t2 & mem1234 == 0x41. A SAT/SMT solver will take this and return an assignment to the literal ‘in10’ that satisfies the formula, i.e. 0x3D.
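As a concrete illustration, here is a minimal sketch of the same query posed to a solver via the z3 Python bindings (z3 is simply my choice for illustration; any SMT solver with bit-vector support would do, and the variable names mirror the formula above):

from z3 import BitVec, Solver, sat

in10 = BitVec("in10", 8)  # the 10th input byte
t1 = BitVec("t1", 8)
t2 = BitVec("t2", 8)
mem1234 = BitVec("mem1234", 8)

s = Solver()
s.add(t1 == in10)        # mov ecx, eax
s.add(t2 == t1 + 4)      # add ecx, 4
s.add(mem1234 == t2)     # mov [1234], ecx
s.add(mem1234 == 0x41)   # the value we want at location ‘1234’

if s.check() == sat:
    print(s.model()[in10])  # 61, i.e. 0x3D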
Now, obviously, real-world constraint formulae will be much more complex than this; they could have hundreds of thousands of literals and conditions. A modern SAT/SMT solver will be able to handle such an input, but it could take a number of hours depending on the complexity. Because of this, it would be an advantage to be able to select those buffers that have the least complex constraint formulae (preferably one no more complex than a sequence of assignments, in which case a solver is not needed at all). The basic classification scheme I’m using is represented by the following lattice:
[Lattice figure: DIRECT COPY at the bottom, with formula complexity increasing towards the top along two axes, arithmetic complexity and the presence of path conditions]
Each formula is associated with a single point in the lattice, with more complex formulae towards the top and less complex ones towards the bottom. We can classify based on the complexity of the arithmetic in the formula, as well as on whether or not it contains a path condition, such as a jump conditioned on user input.
When an object is created, representing the constraints on a given byte, it starts at ‘DIRECT COPY’ and its position in the lattice is updated as operations are performed on it. A byte can only move up in the lattice (i.e. there is only a join function, no meet), with the lattice position of a byte resulting from the interaction of two tainted bytes being the join of those two source bytes. The join function is defined as the least upper bound of its two inputs. We can then use this lattice to represent the constraint complexity of a single byte, or of a sequence of bytes by taking the join of all bytes in the sequence. Currently, I have classified a path-constrained direct copy and non-linear arithmetic as being of equivalent complexity but, given the arithmetic support in modern SMT solvers, it might turn out that a formula made up entirely of arithmetic operations, linear or non-linear, is easier to solve than one containing predicates that must be satisfied.
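As a minimal sketch of how such a lattice can be encoded, assume (per the two classification axes above) a product of an arithmetic-complexity chain and a path-condition flag, with join as the componentwise maximum; the point names here are my own illustration, not necessarily those in the implementation:

from dataclasses import dataclass

DIRECT, LINEAR, NONLINEAR = 0, 1, 2  # arithmetic-complexity chain

@dataclass(frozen=True)
class Complexity:
    arith: int        # DIRECT, LINEAR or NONLINEAR
    path_cond: bool   # does the formula contain a path condition?

    def join(self, other: "Complexity") -> "Complexity":
        # Least upper bound: componentwise maximum of the two points.
        return Complexity(max(self.arith, other.arith),
                          self.path_cond or other.path_cond)

b1 = Complexity(DIRECT, False)  # every byte starts as a direct copy
b2 = Complexity(LINEAR, False)  # e.g. after ‘add ecx, 4’
b3 = Complexity(DIRECT, True)   # e.g. after a jump conditioned on the byte

# The complexity of a candidate buffer is the join over all of its bytes.
print(b1.join(b2).join(b3))     # Complexity(arith=1, path_cond=True)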
The advantage of the above scheme is that we now have a way to classify potential shellcode buffers by the amount of effort it will take to use them as such. By doing this we can potentially avoid calling a SAT/SMT solver altogether, and in cases where we must, we can select the easier formulae to solve.