When it comes to testing the security posture of a target application, nothing is more indicative than supplying the application with random data and seeing it crash. The idea behind fuzzing is to automate the generation and delivery of such data so that flaws in a target application can be identified.
·
Introduction
By automating fault injection, a researcher can identify flaws with little manual effort and focus his or her attention on assessing the risk associated with any vulnerabilities found. Automated fault injection is better known as fuzzing, and it has been introduced into many Software Development Life Cycles (SDLC) to identify both easy-to-find flaws and security issues that would otherwise require a more targeted approach. This article will introduce the idea behind fuzzing and explain where this approach is useful, as well as what its shortcomings are.
·
Dumb Fuzzing
Software that performs fuzzing usually falls into one of two categories. Dumb fuzzing consists of simple modifications to legitimate data that is then fed to the target application. In this case the fuzzer is very easy to write, and the idea is to identify low-hanging fruit. Such flaws are usually found on the surface of the application code and do not require other dependencies or prerequisites before the vulnerability can be triggered. An example of a dumb fuzzer that tests a file format parser would be one that takes a valid file and replaces every two bytes with 0xFFFF, one test case at a time. An example of such a fuzzer is FileFuzz by iDefense. Although not an elegant approach, dumb fuzzing can produce results, especially when a target application has not been previously tested.
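To make the idea concrete, here is a minimal sketch of what such a dumb file fuzzer could look like. The sample file name and the parser command are placeholders, not references to any particular tool; the point is simply to show the "replace two bytes at a time and watch for a crash" loop.

import subprocess

SAMPLE = "sample.bin"   # a known-good input file (placeholder name)
TARGET = ["./parser"]   # command that parses the file (placeholder)

data = bytearray(open(SAMPLE, "rb").read())

for offset in range(0, len(data) - 1, 2):
    mutated = bytearray(data)                 # start from the pristine copy
    mutated[offset:offset + 2] = b"\xff\xff"  # overwrite two bytes with 0xFFFF
    with open("testcase.bin", "wb") as f:
        f.write(mutated)

    # Run the parser on the mutated file; on POSIX a negative return code
    # means the process was killed by a signal, which hints at a crash
    # worth triaging.
    proc = subprocess.run(TARGET + ["testcase.bin"], capture_output=True)
    if proc.returncode < 0:
        print(f"possible crash at offset {offset} (signal {-proc.returncode})")

Each test case touches only one two-byte window, so a crashing offset immediately points at the part of the input the parser mishandled.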
·
Intelligent Fuzzing
A security researcher will probably start with a dumb fuzzer, because it is so easy to set up and it gives a general idea of the target application. However, many commercial applications are more robust and will not choke on data generated by a dumb fuzzer. In that case a security researcher might make use of a fuzzer that understands the protocol or format of the data. Some protocols require that the application (fuzzer) maintain state, as in the case of HTTP or SIP. Other protocols require authentication or a valid CRC before vulnerable code can ever be reached; if a target application checks a CRC, mutated data produced by a dumb fuzzer would never reach the vulnerable code. Apart from providing much more code coverage, intelligent fuzzers tend to cut down the fuzzing time significantly, since they avoid sending data that the target application will not understand. Intelligent fuzzers are therefore much more targeted, and sometimes they need to be developed by the security researcher himself. Fuzzing frameworks such as Sulley and Peach can make this task less of a challenge, and one may be able to set up a fully working fuzzer in a couple of minutes.
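As an illustration of why protocol awareness matters, the sketch below assumes a purely hypothetical packet format (a two-byte big-endian length, a payload, and a CRC32 trailer) and recomputes the checksum for every mutation, so the malformed payload is not rejected by the integrity check before it reaches the parsing code.

import struct
import zlib

def build_packet(payload: bytes) -> bytes:
    # Hypothetical format: 2-byte big-endian length, payload, CRC32 trailer.
    body = struct.pack(">H", len(payload)) + payload
    return body + struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)

def mutated_packets(base: bytes):
    # The CRC is recomputed for every mutation so the target's integrity
    # check passes and the mutation actually exercises the parser.
    for length in (0, 1, 255, 4096):
        yield build_packet(b"A" * length)   # boundary-length payloads
    for i in range(len(base)):
        broken = bytearray(base)
        broken[i] ^= 0xFF                   # flip each payload byte once
        yield build_packet(bytes(broken))

for packet in mutated_packets(b"hello world"):
    pass  # here each packet would be sent to the target, e.g. over a socket

A dumb fuzzer mutating the finished packet would corrupt the trailer and be discarded immediately; the intelligent variant keeps the framing valid and only malforms the parts the researcher wants to test.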
·
Fuzzing and your Security Testing Approach
When compared to other software security testing methods, fuzzing provides a good starting point. By making use of fuzzing, a researcher can identify flaws in software that he or she does not have full access to, taking a black-box approach. Since test cases are automated, different lengths and variations of the same data can be produced in a very short time, something that would be impossible to do manually; the short sketch at the end of this section gives an idea of how quickly such variations add up. However, unless the fuzzer fully understands the target application, and especially in the case of complex code, it may only scratch the surface when it comes to identifying vulnerabilities. Fuzzing will also not identify logic issues. For example, a backdoor in an authentication procedure will not be found using the fuzzing approach. Such vulnerabilities can only be found through a careful understanding of the target application, using code reviews and reverse engineering techniques.
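The following illustrative snippet shows how a handful of mutation rules applied to one request template already yields many test cases; the template and the payload list are arbitrary examples chosen for the sketch, not taken from any particular tool.

def variations(template: bytes, marker: bytes = b"FUZZ"):
    # Boundary-length strings plus a few classic trigger values are
    # substituted into the template, one test case per payload.
    payloads = [b"A" * n for n in (1, 16, 255, 256, 1024, 65535)]
    payloads += [b"%s%n%x" * 8, b"-1", b"0", b"4294967296"]
    for p in payloads:
        yield template.replace(marker, p)

cases = list(variations(b"GET /page?id=FUZZ HTTP/1.0\r\n\r\n"))
print(len(cases), "test cases generated from one template")

Generating and replaying these by hand would take hours; a fuzzer produces and delivers them in seconds, which is exactly the time-saving advantage described above.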
·
Conclusion
The place of fuzzing in the SDLC is probably alongside regression testing, where before each major build the target application is tested against a set of mutated data. It is important to realize that fuzzing does not replace a manual approach to security testing, but rather complements it by providing time-saving and unique advantages to vulnerability research.