Is it ethical to sell zero day exploits?

Zero day flaws are the application vulnerabilities that nobody knows about until it’s too late. They’re flaws like Heartbleed, Shellshock, and most recently POODLE, which allow attackers to steal data from or execute malicious code on machines that aren’t theirs. They’re also the entry points behind campaigns like Sandworm and Operation Snowman: previously unknown holes in end user software that allow malware writers to infect their victims in new and often unprotected ways.

Zero days are dangerous because, by the time one is announced, users have had literally “zero days” to apply a patch. Once a zero day is made public, you can safely assume it’s already being exploited by cybercriminals in the wild. For this reason, the biggest concern in the world of zero day research is never a question of when – bugs will always be discovered. Much more pertinent is the uneasy question of how.

How Zero Days are Disclosed

Zero day research is a very big deal, and it involves a lot of money.

On one end of the spectrum, you have internal researchers, employed by software companies, who actively look for security flaws in their employer’s products so that they can stay ahead of attackers. If a zero day is found, the software receives “just another round of updates” and the problem is more or less silently fixed, without a scary security announcement to users.

This is, for example, what happens with your Windows-based PC on “Patch Tuesday,” the second Tuesday of every month. Patches like these are by no means perfect, as there is always a small time window between a patch’s release and its automated installation that attackers can exploit, but ‘good guy’ zero days more or less make the best of what’s already a bad situation.

On the other end of the spectrum, things get much worse. Here, you have financially motivated hackers who uncover new vulnerabilities all on their own.

They have no ties to the company or the users their discovery will affect, and they simply want to make as much money as they can, regardless of others (or the law). In this ‘bad guy’ scenario, a profitable course of action is keeping one’s mouth shut and silently adopting the zero day in a new malware distribution campaign. In this way, a bot master can infect thousands of new victims in a matter of days. His in-the-wild zero day will of course eventually be discovered by one systems administrator or another, and eventually announced, and eventually patched – but all of that takes time.

Between these two endpoints, things start to get interesting. Sometimes the good guys aren’t official employees – sometimes they’re independent researchers applying for bug bounties, which at big companies like Facebook and Microsoft can be as large as $150,000.

Sometimes these researchers get their bounties, along with 15 minutes of fame, and other times they do not. When this latter scenario occurs, things begin to turn a bit greyer, as jilted researchers sometimes opt to disclose to the public without the affected company’s consent.

In situations like this, the company is usually spurred to action – but whether users are safer than they would have been if no one ever knew is a hot topic of debate. You can’t know what you don’t know, and with zero days, this means there is always the chance that someone malicious has discovered the flaw too. For the surveillance-wary, this ‘malicious someone’ even extends to the government; in fact, in recent months, some have even suggested that the NSA knew about Heartbleed.

Zero days, get your zero days!

So, who else finds zero days? Well, a better question might be: what happens when zero days become a commodity? What happens when a few entrepreneurial actors come along and recognize that the spectrum outlined above represents much more than just a collection of ways in which software flaws are discovered and disclosed? When they realize, with glee, that this spectrum is a real-life environment, overflowing with unmet economic demand?

Enter the world of for-profit zero day research. Here, vulnerabilities are bought and sold to the highest paying bidder.

Here, vulnerabilities aren’t just casually researched by security enthusiasts hoping to make the world of software a better place, and maybe make a few bucks while they’re at it. Here, zero day flaws are aggressively sought after – and when they’re found the danger of public disclosure is used as a very effective sales mechanism.

It works like this:

Someone comes to your place of business and tells you they have discovered a secret way to exploit your product that will allow whoever uses it to leech money and personal information off of you and your customers.

They tell you that you can have access to this secret information, but only at a price. You freak out. Should you take this person seriously? You consider slamming the door on them. Then you realize: if what they’re saying is true, what’s stopping them from selling this supposedly secret knowledge to someone else?

From a legal standpoint, nothing is stopping them. For-profit zero day research, and even brokering, is completely legal. This is because knowledge of a zero day is not the same thing as exploitation of a zero day. Merely knowing that a flaw exists is not illegal, and for companies that have such flaws, this knowledge can help prevent security disasters. The problem, though, is that this knowledge isn’t always sold to the companies it affects. It’s sold to whoever is willing to pay, at the seller’s discretion.

Sometimes it’s sold to competitors. Other times it’s sold to governments. Pricing can range from five to seven figures, and many of the larger customers actually pay for catalog-style subscriptions that give them access to 100 or so industry vulnerabilities per year.

Smaller software companies, on the other hand, usually cannot afford to play this zero day game. This often means that independent researchers don’t bother to find flaws in smaller companies’ products, even if the products are good and lots of people use them. It can also mean that when zero days affecting smaller companies are found, for-profit researchers stand to earn much more by selling the knowledge to a larger, deeper-pocketed competitor and never telling the affected company or its users.

The firms that find and sell these vulnerabilities can be found through a simple Google search. There are many, and anyone who runs this search will also find, scattered throughout the results, more than a few articles on ethics.

Zero day knowledge may be fundamentally different from zero day exploitation – but the question of whether people should sell the former to prevent the latter remains unresolved. In a free market vulnerability economy, the only thing stopping a research firm or broker from selling a zero day to a cybercriminal or repressive government is that firm or broker’s moral compass. Many feel that this barrier is much too subjective and much too easily swayed by the amount of money involved. Many also worry about the fact that most zero day salesmen have sworn to keep their client lists absolutely secret.

For users affected by security bugs in the products they buy to manage their work and their lives, the question that needs to be answered is whether for-profit zero day research has a net positive or a net negative effect.

Fundamentally: Is software safer in a world where zero day research is privatized? Or is vulnerability salesmanship simply Malware Lite?

As always, we’d love to hear your thoughts.

Have a great (zero-free) day!

  • RSRazer

    It really depends. What is the motivation? What is the exploit? Just because one can exploit a product doesn’t mean it is a critical “day zero”.
    The bigger question is “Is it morally right to sell this information to anyone other than the software developers, or those in charge of it?”
    To this I say no. I personally think that there should be some check in place to prevent the selling and distribution of exploits to anyone other than the developer, in line with copyright law, as it is their product in question, after all.

    • Steve

      Implementing some sort of check is an interesting idea. It is probably impossible to stop curious minds from digging around, but attaching fines/punishments to certain types of distribution would definitely act as a deterrent.

  • Dan

    I agree with the sentiments that unauthorized research may involve copyright issues regarding OS layers/operations, that not all “exploitable” opportunities can be twisted to malicious purpose, and that it is wrong to sell those exploits which are malicious to anyone at large. Even when I encounter a virus or exploit in the wild, rather than copy it to study it myself, I upload it to places like VirusTotal and allow software such as HitManPro or Emsisoft to remove it or send reports on it (where remote control isn’t the exact issue; for that kind of, to me, rare find, I usually inform governmental agencies).

  • Philip

    Privacy International in October 2014 made a criminal complaint to the National Cyber Crime Unit of the National Crime Agency, urging the immediate investigation of the unlawful surveillance of three Bahraini activists living in the U.K. by Bahraini authorities using the intrusive malware FinFisher supplied by British company Gamma.

    Moosa Abd-Ali Ali, Jaafar Al Hasabi and Saeed Al-Shehabi, three pro-democracy Bahraini activists who were granted asylum in the U.K., suffered variously from years of harassment and imprisonment. Investigation and analysis by human rights group Bahrain Watch showed that while
    Moosa, Jaafar, and Saeed were residing in the U.K., Bahraini authorities targeted the activists and had their computers infected with the surveillance Trojan FinFisher.

    The complaint argues that the actions of the Bahraini authorities qualify as an unlawful interception of communications under section 1 of the U.K.’s Regulation of Investigatory Powers Act 2000. It further argues that, by selling FinFisher to and assisting the Bahraini authorities, Gamma is liable as an accessory under the Accessories and Abettors Act 1861 and/or encouraged and assisted the offence under the Serious Crime Act 2007.

    (Two PCs and an Apple computer were infected. The activists turned up at a meeting in their own country and were arrested. They have been given multiple life sentences and will never see the light of day again. All for the lack of an interactive firewall, an up-to-date virus scanner, and a basic understanding of PC security.)

    • Steve

      Thanks, Philip. You can find a bit more about FinFisher here: emsi.at/ffish

  • alphaa10000

    Zero-day exploits exist because software is too complex to be known and understood completely and managed perfectly by its publisher.
    If most software can be found to have some security fault unknown to its publisher, that reduces the element of control, and implicitly the responsibility, traditionally associated with a product’s creator or inventor.
    Although the days of software publishers sending out letters of apology on discovering a coding error are long past, this urgently raises the question: if software is no longer under the exclusive control of the publisher, what is the product for which we pay?
    A “best effort”?
    If a bank buys security software on such a basis, who is to blame when something goes wrong, and people lose money?

    • Steve

      That last question is an important one because if something goes wrong and people lose money and there’s someone to blame… well, then that someone is going to be expected to pay people back.

  • Surely, by selling the information on to someone else with the knowledge that the buyer will then use it illegally, they are helping to break the law. Someone supplying legal weapons while knowing they were going to be used illegally would probably be committing a crime, and you could class a flaw as a kind of weapon in the wrong hands.

  • sswallen

    Can we please stop this madness! I don’t care how it’s done but please stop it!

    • RSRazer

      Unfortunately, no. Good encryption has proved a possible solution, but take a page from Sony: human error on their end left every unit on firmware 3.55 and below, or capable of being manually downgraded to 3.55, permanently exploitable. The damage includes piracy, mods while playing online, and homebrew (though the latter I don’t find to be malicious. I actually think you, as a consumer, have a right to run your own code on your own device without having to get permission just because of a label like Sony or Nintendo. The reason you’re locked out is fear of piracy, of money making without them getting a dime, and the mint they make off of development kits. For example, the development kit prices for the Nintendo 3DS are:
      73056 PARTNER-CTR DEBUGGER $2,620

      73058 PARTNER-CTR DEBUGGER/CAPTURE (Dual Functionality) $3,950
      73065 Nintendo 3DS (Development only) “Panda” USA $324
      73066 Nintendo 3DS (Development only) “Panda” EU $324
      73067 Nintendo 3DS (Development only) “Panda” AUS $324
      73062 Flash Card, 16 Gbits (2 GBytes) CTR $85
      73063 Backup Memory, 1Mbit (128 KBytes) Flash CTR $8.35
      73064 Backup Memory, 4Mbits (512 KBytes) Flash CTR $10.65.
      These may seem odd to a non-techie, but basically my best setup, if I purchased it, would be $3,950 + $324 + $170 + $10.65 = $4,454.65. This does not include the fact that you must already know how to program fluently, or the cost and process of even getting accepted to develop. $4,500 just to make a simple Pong game? This is mainly why game consoles are hacked. Unfortunately, piracy is generally a side effect.
      Now imagine if programmers had to get permission from Microsoft or Apple for every little program they create. SDKs would run in the hundreds of thousands! It is unfeasible. This is why there is sometimes good in exploitation: so that companies cannot force control over the device you own.) All it takes is a leak of the private keys and all security collapses. As for exploits, many are TOCTOU, or time-of-check to time-of-use, bugs. These are almost impossible to fix, as they are race conditions. A hacker has to have a deep understanding of the machine to exploit one successfully, but they are not completely fixable and will always be a possible entry point. As with any product, digital or physical, it is impossible to make it un-hackable, aside from keeping it to yourself.
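The TOCTOU race condition mentioned in the comment above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the check-then-use gap; the file path and data are invented for the example and not drawn from any real exploit:

```python
import os
import tempfile

# Hypothetical sketch of a TOCTOU (time-of-check to time-of-use) race.
# The path and contents are invented purely for illustration.
path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("harmless data")

# Vulnerable pattern: the "check" and the "use" are two separate steps.
# An attacker who can swap `path` in between (e.g. replace it with a
# symlink to a file the victim should not write) wins the race.
if os.access(path, os.W_OK):       # 1. check: "may I write this file?"
    # ... attacker's window: the file behind `path` can change here ...
    with open(path, "w") as f:     # 2. use: the name is resolved AGAIN
        f.write("overwritten")

# Narrower pattern: resolve the name once and then operate only on the
# returned file descriptor, refusing to follow symlinks (O_NOFOLLOW is
# POSIX-only and not available on Windows).
fd = os.open(path, os.O_WRONLY | os.O_NOFOLLOW)
os.write(fd, b"written via fd")
os.close(fd)
```

As the commenter notes, this does not make the race impossible in general; binding the check and the use to one file descriptor only narrows the attacker's window rather than eliminating the underlying class of bug.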

  • The law then should change. It seems a lot of laws related to online activity are outdated – the law about copying music has only just been changed to make it clear that it’s legal to back up legally obtained music and place it onto a device, e.g. an iPod.

    We need an overhaul so the tech world is up to date, law-wise.

    • RSRazer

      Not arguing that, I completely agree. Jailbreaking, for example, is only an issue because the companies are so scared of piracy that they lock out everyone. Give a guy control over how his device functions, e.g. the ability to change icons, and give the homebrew fanatics freedom to develop, limited to their registered devices, only allowing distribution once approved on the target platform’s store. This gives homebrewers the freedom to tinker along with an opportunity to make a coin off of worthy apps. As it stands, jailbreaking is technically both legal* and illegal* at the same time under current rulings and law. This discrepancy can cause legal issues and allow innocent modders to get convicted and guilty hackers to go free, depending on the trial. This is one of many cases where the law needs to be scrapped and rewritten to cover things properly and avoid issues such as the example I made.

      *Legal because a few jailbreaking court cases were ruled in favor that one has the right to modify their device as they own it.

      *Illegal because it is illegal to circumvent copy protection/DRM, period.

  • Sarah Reid

    I’m selling Yahoo, Google and Hotmail stored xss that steal emails cookies and works on ALL browsers. And you don’t need to bypass IE or Chrome xss filter as it do that itself because it’s stored xss. Prices around for such exploit is $1,100 – $1,500, while I offer it here for $700. Will sell only to trusted people cuz I don’t want it to be patched soon!
    Email me [email protected]