Zero day flaws are application vulnerabilities that nobody knows about until it’s too late. They’re flaws like Heartbleed, Shellshock, or most recently POODLE that allow attackers to run malicious code on, or steal sensitive data from, machines that aren’t theirs. They’re also things like Sandworm and Operation Snowman: previously unknown entry points into a PC through end user software that allow malware writers to infect their victims in new and often unprotected ways.
Zero days are dangerous because, by the time they are announced, defenders have had literally “zero days” to prepare a patch. Once a zero day is made public, you can assume it is already being exploited by cybercriminals in the wild. For this reason, the biggest concern in the world of zero day research is never a question of when, as bugs will always be discovered. Much more pertinent is the uneasy question of how.
How Zero Days Are Disclosed
Zero day research is a very big deal, and it involves a lot of money.
On one end of the spectrum, you have internal researchers, employed by software companies, who actively look for security flaws in the company’s product, so that they can stay ahead of attackers. If zero days are ever found, the software receives “just another round of updates” and the problem is more or less silently fixed, without a scary security announcement to users.
This is, for example, what happens with your Windows-based PC on “Patch Tuesday,” the second Tuesday of every month. Patches like these are by no means perfect, as there is always a small time window between release and automated update that attackers can exploit, but ‘good guy’ zero days more or less make the best of what’s already a bad situation.
On the other end of the spectrum, things get much worse. Here, you have financially motivated hackers who uncover new vulnerabilities all on their own.
They have no ties to the company or the users their discovery will affect, and they simply want to make as much money as they can, regardless of others (or the law). In this ‘bad guy’ scenario, a profitable course of action is keeping one’s mouth shut and silently adopting the zero day in a new malware distribution campaign. In this way, a bot master can infect thousands of new victims in a matter of days. His in-the-wild zero day will of course eventually be discovered by one systems administrator or another, and eventually announced, and eventually patched – but all of that takes time.
Move between these two endpoints, and things start to get interesting. Sometimes the good guys aren’t official employees; sometimes they’re independent researchers applying for bug bounties, which at big companies like Facebook and Microsoft can be as large as $150,000.
Sometimes these researchers get their bounties, along with 15 minutes of fame, and other times they do not. When this latter scenario occurs, things begin to turn a bit greyer, as jilted researchers sometimes opt to disclose to the public without the affected company’s consent.
In situations like this, the company is usually spurred to action – but whether users are safer than they would have been if no one ever knew is a hot topic of debate. You can’t know what you don’t know, and with zero days, this means that there is always the chance that someone malicious has discovered it too. For the surveillance wary, this ‘malicious someone’ even extends to the government; in fact, in recent months, some have even suggested that the NSA knew about Heartbleed.
Zero days, get your zero days!
So, who else finds zero days? Well, a better question might be: what happens when zero days become a commodity? What happens when a few entrepreneurial actors come along and recognize that the spectrum outlined above represents much more than just a collection of ways in which software flaws are discovered and disclosed? When they realize, with glee, that this spectrum is a real-life environment, overflowing with unmet economic demand?
Enter the world of for-profit zero day research. Here, vulnerabilities are bought and sold to the highest paying bidder.
Here, vulnerabilities aren’t just casually researched by security enthusiasts hoping to make the world of software a better place, and maybe make a few bucks while they’re at it. Here, zero day flaws are aggressively sought after – and when they’re found the danger of public disclosure is used as a very effective sales mechanism.
It works like this:
Someone comes to your place of business and tells you they have discovered a secret way to exploit your product that will allow whoever uses it to leech money and personal information off of you and your customers.
They tell you that you can have access to this secret information, but only at a price. You freak out, but then you think: should I take this person seriously? Then you consider slamming the door on them. Then you realize: if what they’re saying is true, what’s stopping them from selling this supposedly secret knowledge to someone else?
From a legal standpoint, nothing is stopping them. For-profit zero day research, and even brokering, is completely legal. This is because knowledge of a zero day is not the same thing as exploitation of a zero day. Knowing that a flaw exists is not illegal, and for companies whose products contain such flaws, this knowledge can help prevent security disasters. The problem, though, is that this knowledge isn’t always sold to the companies it affects. It’s sold to whoever is willing to pay, at the seller’s discretion.
Sometimes, it’s sold to competitors. Other times, it’s sold to governments. Pricing can range from five to seven figures, and many of the larger customers actually pay for catalog-style subscriptions that give them access to 100 or so industry vulnerabilities per year.
Smaller software companies, on the other hand, usually cannot afford to play this zero day game. This often means that independent researchers don’t bother to find flaws in smaller companies’ products, even if the products are good and lots of people use them. It can also mean that if zero days affecting smaller companies are found, for-profit researchers stand to earn much more by selling the knowledge to a larger, deeper-pocketed competitor and never telling the affected company or its users.
The firms that find and sell these vulnerabilities can be found through a simple Google search. There are many, and anyone who runs this search will also find that scattered throughout the results there are also more than a few articles on ethics.
Zero day knowledge may be fundamentally different from zero day exploitation, but the question of whether people should sell the former to prevent the latter remains unresolved. In a free market vulnerability economy, the only thing stopping a research firm or broker from selling a zero day to a cybercriminal or repressive government is that firm or broker’s moral compass. Many feel that this barrier is much too subjective and much too easily swayed by the amount of money involved. Many also worry about the fact that most zero day salesmen have sworn to keep their client lists absolutely secret.
For users affected by security bugs in the products they buy to manage their work and their lives, the question that needs to be answered is whether for-profit zero day research has a net positive or net negative effect.
Fundamentally: Is software safer in a world where zero day research is privatized? Or is vulnerability salesmanship simply Malware Lite?
As always, we’d love to hear your thoughts.
Have a great (zero-free) day!