Fuzzers are awesome! You write ’em, set ’em up and run ’em… and out come delicious fun bugs! (The bugs are a guarantee; your skill determines whether they’re in your fuzzer or in the target application.)
What I feel is less talked about is that fuzzing doesn’t have to be about finding crashes in binaries to yield cool results (you don’t even have to like binary exploitation! I don’t!), and targets that won’t crash or aren’t binaries at all are far from “unfuzzable”; they just need a different approach.
To convince you of that, I want to tell you about a few lesser-known, super cool custom fuzzing “patterns” that won’t necessarily result in crashes but can lead you to cool and exciting findings nevertheless!
Differential fuzzing
You might have heard about this one in some academic paper (that’s where I heard about it first), but despite the fancy name, it’s actually a super simple concept.
Differential fuzzing: you take two or more programs that do the same thing, give each of them the same test case, and see if they produce different results.
Now, why is this a fantastic idea?
Well, let’s say you have two Python libraries that both claim to strip arbitrary HTML from a string. You hook up a large number of HTML test cases to Radamsa so it gives you randomly mutated test cases based on those input snippets, and then your trivial custom fuzzing harness (probably 20-50 lines of code) takes each test case, runs it through both libraries, and compares what the two of them spit out.
Since both of the libraries do the same thing, if they treat a test case differently, it’s actually an excellent sign that one of them is doing something wrong and might be worth investigating.
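To make that concrete, here’s a minimal sketch of what such a harness might look like. The two strippers below are toy stand-ins for whatever real libraries you’d actually be testing, and the sketch assumes the radamsa binary is on your PATH plus a seeds/ directory full of HTML snippets:

import glob
import re
import subprocess
from html.parser import HTMLParser

def strip_html_a(text):
    # library #1 stand-in: naive regex-based tag stripper
    return re.sub(r"<[^>]*>", "", text)

class _TextOnly(HTMLParser):
    # library #2 stand-in: keep only the text nodes the stdlib parser sees
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def strip_html_b(text):
    parser = _TextOnly()
    parser.feed(text)
    return "".join(parser.chunks)

def mutate(seed):
    # ask radamsa for one mutated test case derived from the seed
    return subprocess.run(["radamsa"], input=seed, capture_output=True).stdout

for path in glob.glob("seeds/*.html"):
    seed = open(path, "rb").read()
    for _ in range(1000):
        case = mutate(seed).decode("utf-8", errors="replace")
        if strip_html_a(case) != strip_html_b(case):
            print(f"{path}: the two strippers disagree on {case!r}")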
Parsers, filters, and sanitizers are great candidates for this approach since there’s generally more than one library for the same purpose.
Expectation-based fuzzing
This one’s easier to explain with a concrete example from the get-go: regex patterns can be extremely buggy for a number of different reasons, and once you hit a certain level of complexity, they get almost impossible to validate manually. Take the following regex for validating email addresses as an example:
(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\xff]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-][a-z0-9])?.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])
Or, I should say, it was a regex for validating email addresses until I made three single-character modifications to it, so it’s still a valid regex but contains some bugs. How do we find the bugs? Even among hardcore vulnerability researchers, I don’t know many people who enjoy reading dense regex patterns.
Suppose that instead of manually hunting for the bugs in that regex pattern, we fuzz it.
We could do this through differential fuzzing if we know of another email-validation library. Another alternative, though, is writing a harness that takes emails we know are good and applies string manipulations that we are sure won’t break them (e.g., inserting valid characters) as well as manipulations that we are sure will break them (e.g., inserting invalid characters).
Because we know the expected outcome, it’s easy to check whether the validator’s output lines up with it. If it doesn’t, congrats, you’ve got a bug!
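Here’s a rough sketch of what that harness could look like in Python. The pattern below is a simplified stand-in (you’d paste in the real regex under test), and the character sets are illustrative rather than exhaustive:

import random
import re

# swap in the actual regex under test here
EMAIL_RE = re.compile(r"[a-z0-9!#$%&'*+/=?^_`{|}~.-]+@[a-z0-9.-]+")

KNOWN_GOOD = ["alice@example.com", "bob.smith@sub.example.org"]
VALID_LOCAL_CHARS = "abcdefghijklmnopqrstuvwxyz0123456789!#$%&'*+-/=?^_`{|}~"
INVALID_CHARS = ' ()<>,;:"[]\\'   # never allowed unquoted in the local part

def mutate(email, chars):
    # insert one character somewhere inside the local part
    local, domain = email.split("@")
    pos = random.randrange(1, len(local))
    return local[:pos] + random.choice(chars) + local[pos:] + "@" + domain

for _ in range(100000):
    seed = random.choice(KNOWN_GOOD)
    should_match = mutate(seed, VALID_LOCAL_CHARS)
    should_fail = mutate(seed, INVALID_CHARS)
    if not EMAIL_RE.fullmatch(should_match):
        print("Expected a match but got a rejection:", should_match)
    if EMAIL_RE.fullmatch(should_fail):
        print("Expected a rejection but got a match:", should_fail)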
Hell, depending on the expected outcome, you might not even need to write a custom mutator.
Suppose you’re checking a sanitizer or filter that is meant to strip extensions from file names before feeding the result into another function down the road. Why not simply throw random test cases based on many different example file names (generated with Radamsa, for example) at it and see if any of the strings that make it through the gauntlet to the other function end with an unexpected extension?
Unexpected = a win for you.
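Here’s a quick sketch of that check. The deliberately naive strip_extension below stands in for whatever real filter you’d be testing (it only strips the last extension, which is exactly the kind of bug this pattern catches); names.txt would hold your example file names, and radamsa again needs to be on your PATH:

import subprocess

UNEXPECTED = (".php", ".phtml", ".jsp", ".aspx")

def strip_extension(name):
    # toy filter under test: only strips the final extension
    return name.rsplit(".", 1)[0]

for seed in open("names.txt", "rb").read().splitlines():
    for _ in range(1000):
        case = subprocess.run(["radamsa"], input=seed,
                              capture_output=True).stdout
        survivor = strip_extension(case.decode("utf-8", errors="replace"))
        if survivor.lower().endswith(UNEXPECTED):
            print("Unexpected extension made it through:", repr(survivor))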
Fuzzing to aid manual work
Did you know “/” is also a valid tag-attribute separator in HTML?
As in, <svg/onload=alert(1)> will pop an alert box.
It’s not part of the HTML specification, as you can see from this excerpt of the spec:
[Writing about tag parsing] If there are to be any attributes in the next step, there must first be one or more ASCII whitespace.
People who dive deep into XSS eventually find this out one way or another, but I personally discovered it using fuzzing. I made a simple bash script that dumped every ASCII character between svg and onload, each with a unique alert message, loaded the result up in a browser, and... lo and behold! An alert box popped for the forward slash!
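If you want to reproduce that experiment, here’s a rough Python equivalent of that script: dump one <svg?onload> tag per ASCII character into an HTML file, open it in a browser, and whichever alerts fire tell you (via the character code) which separators worked.

# write one candidate tag per ASCII character; the alert message is the
# character code, so a popped alert tells you which separator worked
with open("separators.html", "w") as f:
    for i in range(1, 128):
        f.write(f"<svg{chr(i)}onload=alert({i})>\n")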
I also use this pattern to explore poorly documented aspects of programming languages, usually by combining a series of for loops that run through different characters with liberal use of eval. It makes uncovering lesser-known parts of a language a cakewalk.
You’d be surprised how far something as stupid as the following block of code can get you:
# preamble and postamble hold the two halves of the snippet being probed
for i in range(0, 256):
    for j in range(0, 256):
        for k in range(0, 256):
            try:
                retval = eval(preamble + chr(i) + chr(j) + chr(k) + postamble)
            except Exception:
                continue  # most combinations won't even parse
            if retval == some_condition:
                print("We've got a live one:", i, j, k)
This is also how I bumped into JavaScript’s “optional chaining”, which I’ve used (and seen others use) successfully to bypass many XSS filters. I was looking for what characters I could insert into arbitrary JavaScript and still have it execute without changing the outcome.
I simply set up a NodeJS script that would iterate through all possible combinations of ASCII characters less than 5 characters in length, inserted in the middle of JS snippets I manually chose.
To my surprise, I could insert a question mark in the middle of snippets like console.log (turning it into console?.log) and have it still work. Of course, this is an intended part of the language, and anyone willing to dive deep into the docs can find it that way, but I bet you I spent less time finding this feature (20 minutes) than the vast majority of people who eventually learn about it. I also found a few other fun ones, but I’ll leave those as an exercise to the reader. ;)
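If you want to poke at this yourself, here’s a rough Python equivalent of that experiment (the original was a NodeJS script; this sketch just shells out to node and only tries single-character insertions, but longer sequences are just more nested loops). The snippet and split point are arbitrary examples:

import subprocess

SNIPPET = 'console.log("hi")'
SPLIT_AT = len("console")              # insert between "console" and ".log"

# what the untouched snippet prints, for comparison
EXPECTED = subprocess.run(["node", "-e", SNIPPET], capture_output=True).stdout

for i in range(32, 127):
    candidate = SNIPPET[:SPLIT_AT] + chr(i) + SNIPPET[SPLIT_AT:]
    result = subprocess.run(["node", "-e", candidate], capture_output=True)
    if result.returncode == 0 and result.stdout == EXPECTED:
        print("Still works with", repr(chr(i)), "inserted:", candidate)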
Because of my exhaustive fuzzing, I’m also certain I found every way to insert up to 5 characters in the middle of a JS snippet without affecting the outcome.
It’s literally impossible for there to be any others, because I tested them all. If you were just reading the docs, how certain could you ever be that there isn’t some undocumented functionality you could have used?
Fuzzing can give you a tremendous amount of certainty regarding your approach because it allows you to explore every nook and cranny of a particular problem; and even if you’re fuzzing non-exhaustively or randomly (probably because your problem space has too many parameters), it still gives you a large degree of certainty.
Grammar-based fuzzers
This one’s a bit more general because a generational (grammar-based) fuzzer just generates test cases on its own and from scratch.
Effectively, the burden is on you to either write the fuzzer from scratch so it can generate test cases (not as hard as it sounds) or to configure a fuzzing framework (like Sulley, Domato, or boofuzz) well enough to cover your use cases.
But! They are incredibly flexible in what you can use them for. I’ve built a generational fuzzer that looks for XSS filter bypasses in web application firewalls, because with a sufficiently complex grammar you can have it whip up arbitrary valid XSS payloads.
This is far too big a topic to cover on its own here, but for inspiration I’d look at Domato and The Fuzzing Book, since those were the two resources I found the most helpful in building my own.
They are not as complicated to write on your own as you’d think, since most of them are effectively just layers of string and/or byte-string manipulation. My personal fuzzer is very similar to Domato in that it’s effectively a recursive string replacer that looks for tokens to replace. So in two rounds of mutations, it’d do something like:
Start: <start>
Round 1: <svg <xss_atributes>>
Round 2: <svg onload=alert(1)>
And then terminate when no more valid swappable tokens are found.
Where the grammar file (really just a text file) for the above would look something like:
start=<svg <xss_atributes>>
xss_atributes=onload=alert(1)
It’s really not hard! Think of them as a big ol’ engine that just looks for strings and replaces them with other strings.
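To make that concrete, here’s a minimal sketch of such a recursive string replacer, assuming a grammar file of token=expansion lines like the one above (a real grammar would usually list several alternative expansions per token and pick one at random, which is why each token maps to a list here):

import random
import re

def load_grammar(path):
    # one "token=expansion" rule per line; repeated tokens become alternatives
    grammar = {}
    for line in open(path):
        token, _, expansion = line.strip().partition("=")
        grammar.setdefault(token, []).append(expansion)
    return grammar

def generate(grammar, text="<start>"):
    # keep swapping <tokens> until no more swappable tokens are left
    token_re = re.compile("<(" + "|".join(map(re.escape, grammar)) + ")>")
    while True:
        match = token_re.search(text)
        if not match:
            return text
        expansion = random.choice(grammar[match.group(1)])
        text = text[:match.start()] + expansion + text[match.end():]

grammar = load_grammar("xss.grammar")   # the two-line grammar file shown above
print(generate(grammar))                # e.g. <svg onload=alert(1)>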
Hopefully the above gives you some inspiration and ideas for how to fuzz things that may appear less “conventional”.
I truly encourage anyone in infosec who enjoys finding bugs to start having fun with fuzzers, because they can be a tremendous help in finding bugs (even if you lean away from binary stuff) and are honestly surprisingly easy to write and use.
Have a nice one! Toodles!