I submitted an abstract etc. for a Blackhat talk a few days ago. The title is “Automatic exploit generation for complex programs” and the following is the abstract:
The topic of this presentation is the automatic generation of control flow hijacking exploits. I will explain how we can generate functional exploits that execute shellcode when provided with a known 'bad' input, such as the crashing input from a fuzzing session, and sample shellcode. The theories presented are derived from software verification and I will explain their relevance to the problem at hand and the benefits of using them compared to approaches based on ad-hoc pattern matching in memory.
The novel aspect of this approach is the combination of techniques from data flow analysis and symbolic execution for the purpose of exploit generation. We track input data as it passes through a running program, tainting other variables; in parallel we also track all constraints and modifications imposed on such data. As a result, we can precisely locate all memory regions influenced by the tainted input. We can then apply a constraint solver to generate an exploit.
This technique is effective in environments where the input data is subjected to complex, low-level manipulations that may be difficult and time-consuming for a human to unravel. I will demonstrate that this approach can be used in the presence of ASLR, non-executable regions and other protections for which known work-arounds exist.
During the presentation I will show functioning exploits generated by this technique and describe their creation in detail. I will also discuss a number of auxiliary benefits of the tool and possible extensions. These include the ability to identify the sections of a given input that are used in determining the path taken, in memory allocation routines and in length constraints. Possible uses of this information are in generating more reliable versions of known exploits and in guiding a fuzzer.
So, in a nutshell, I’m using dynamic data flow analysis in combination with path constraint gathering and SAT/SMT solving to generate an input for a program that will result in shellcode execution… assuming it works 😉 I should know by June 1st if it was accepted or not.
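To make the core idea above concrete, here is a minimal, self-contained sketch of the "invert the input manipulations with a solver" step. Everything in it is invented for illustration: the `target_transform` function stands in for whatever low-level processing the target program applies to its input before copying it into a vulnerable buffer, and the buffer address, padding length and shellcode are placeholder values. In this toy the transformation is trivially invertible, so a hand-written inverse plays the role that an SMT solver plays against real gathered path constraints.

```python
def target_transform(data: bytes) -> bytes:
    # Stand-in for the program's input processing: each byte is XORed
    # with 0x55 and rotated left by one bit before being copied into
    # the vulnerable stack buffer.
    out = []
    for b in data:
        x = b ^ 0x55
        out.append(((x << 1) | (x >> 7)) & 0xFF)
    return bytes(out)

def solve_preimage(desired: bytes) -> bytes:
    # "Constraint solving" for this toy: apply the inverse operations
    # in reverse order. A real tool would instead hand the constraints
    # recorded during taint tracking to an SMT solver.
    out = []
    for b in desired:
        x = ((b >> 1) | (b << 7)) & 0xFF   # undo the left rotate
        out.append(x ^ 0x55)               # undo the XOR
    return bytes(out)

SHELLCODE   = b"\xcc" * 8                         # placeholder: INT3 bytes
RET_ADDR    = (0xBFFFF000).to_bytes(4, "little")  # assumed buffer address
PADDING_LEN = 64                                  # assumed offset to saved EIP

# Payload we want to appear in memory: shellcode, filler, return address.
payload = SHELLCODE + b"A" * (PADDING_LEN - len(SHELLCODE)) + RET_ADDR

# Solve for the raw input that, after the program's transformation,
# lands byte-for-byte as the payload in the tainted buffer.
exploit_input = solve_preimage(payload)
assert target_transform(exploit_input) == payload
```

The point of the sketch is the division of labour: taint tracking tells you *which* memory the input reaches, constraint gathering tells you *how* the bytes are modified on the way there, and the solver inverts that relationship to produce a concrete exploit input.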
Update: The talk was rejected. Success!… or not.
9 thoughts on “Blackhat USA paper”
Sorry to hear that. Did you get any feedback on why it was rejected?
You might try Network and Distributed System Security (NDSS) 2009
Nothing on the content of the talk itself. I contacted somebody who worked on the CFP who said they got ~400 submissions and so it was likely that it just didn’t get that much attention.
I’m going to write the paper anyway and then see what to do from there. NDSS ’09 might be worth considering alright.
I would very much like to read it at least, it looks very interesting. If it comes with code – bonus!
Yeah, I’m hoping to release the code in the next couple of months. This version is pretty raw but it works, although there is no real UI to speak of at the moment and I want to do more extensive testing.
Welcome to this year’s rejects club! A submission a friend and I put together was also rejected, as was another friend’s submission. Still, some other friends got accepted, so I’m happy for the people I know who got in this year, and am interested to learn more from the people I don’t.
The submission my friend and I put together revolved around using a type qualifier to find exploitable bugs in hypervisors, essentially an adaptation of Johnson and Wagner’s work on using Cqual to find exploitable userland/kernel bugs in Linux. Since our approach needs source code to work, and since type qualifiers are arguably somewhat academic to the general Black Hat audience, and since we’re not really talking about a directly offensive technique, we figured there was a strong likelihood that BH would pass on it, and they did. Like you, we didn’t get any feedback either.
Alas, there is always next year. And other conferences!
That sounds interesting. I’m currently looking for bugs in IOCTL/FSCTL interfaces on Linux/OS X and debating whether it might be fun/productive to use something like Cqual. Unfortunately fuzzing is just so effective it’s hard to justify the time investment right now 😉
Did you guys build on top of Cqual? Also, do you have any stats you can release on how many bugs were found, the false positive ratio, how many annotations were added, how long it took to add them etc?
On the topic of conferences, a friend of mine recently linked me to this, which you may/may not find useful
Some of the rankings are potentially questionable but it’s useful to have the links anyway.
Hmmh, that sounds pretty interesting. However it’s really hard to get accepted as a speaker if you aren’t well known or lucky.
I’d really like to read that paper anyhow 🙂
True, although I figured regardless of how well known/lucky I was I’d get accepted before a presentation on web proxies 😛
perhaps your topic is just too advanced for the drooling masses, HAR2009 talks were fairly low-tech / lame.