Why you need a bug bounty program

If you’re ready to deal with the volume of reports, a bug bounty program can help you find the holes in your system — before attackers do.

Every business needs to have a process in place for handling security vulnerability reports, but some organizations take a much more proactive approach to dealing with security researchers.

An increasing number of hardware and software vendors have formal bug bounty programs. Google, for example, runs its own vulnerability rewards program, and Microsoft has multiple bug bounties covering Office 365, Azure, .Net and Edge as well as general programs covering exploits and defenses.

And the U.S. Department of Defense (DoD) set up its first bug bounty after several years of watching the software industry, says Katie Moussouris, now CEO of Luta Security. She previously created similar programs for Microsoft and Symantec, worked with the FDA to create market guidance around vulnerability disclosure for medical devices and helped the DoD prepare for its bug bounty while working at HackerOne. “The DoD was curious about whether those programs were effective, whether the folks participating in them were acting in good faith,” she tells CIO. “They wanted to take what was working in the private sector and fast track that into the DoD.”

“Bug bounties are really just a subset of vulnerability disclosure with a particular incentive. They can be a useful tool. Just like any other incentive program, you’re trying to incent certain types of behavior, certain types of bugs,” Moussouris says.

Is a bug bounty for you?

If you’re a business that just uses IT, would a bug bounty be useful to you?

Maybe, “if you develop your own software or you rely on software and web applications to collect and handle sensitive data,” says Dwayne Melancon, vice president for products at security software company Tripwire. These days, that includes an increasingly wide range of businesses.

Companies like HackerOne, BugWolf, Synack and Bugcrowd offer platforms for both bug bounties and crowdsourced penetration testing.

“Offering a monetary reward (a bounty) is definitely not required nor expected,” says HackerOne co-founder Michiel Prins (the DoD runs its bug bounty through the HackerOne service). “But, it allows you to show gratitude beyond the words ‘thank you’, creates loyalty and actively incentivizes the researcher to report vulnerabilities to you again next time they stumble upon something.”

Casey Ellis, CEO of Bugcrowd, agrees that this starts with your own security culture. “Before anything else, it’s important to recognize that vulnerabilities are inevitable. By starting from the assumption that you are vulnerable, you can work towards remediating these vulnerabilities before the bad guys find them. Crowdsourced testing is an effective and efficient way to do this. There are a few simple steps you can take to minimize risk both from vulnerabilities and from unauthorized public disclosures.”

“The goal is to set expectations to promote positive communication and coordination both with internal stakeholders and external researchers, and create a security feedback loop that makes you smarter and more resilient over time. The first step to achieving this is to determine a clearly defined scope. From there, clearly communicating this scope and keeping an open dialogue with researchers is key.”

Ellis identifies a number of key steps, which are an extension of the way you should be handling less coordinated reports of problems; again, this goes far beyond the IT team. “Develop your vulnerability disclosure policy. Develop a process for handling bug reports. Develop templates with corporate communications and legal for communicating with security researchers, press and development teams. Integrate systems with internal development ticketing software. Decide on the range of rewards for vulnerabilities for incentivized programs.” And because a key part of handling vulnerabilities responsibly is the right level of transparency, “decide on coordinated disclosure policies.”

You also need to be ready to handle reports quickly, says Prins. That includes having an on-going conversation with researchers. “Companies that run top vulnerability disclosure programs strive to acknowledge receipt of a report within 24 hours. They are also quick to validate (or invalidate) submissions to their program, usually getting to a valid/invalid decision within 48 hours of receipt.”
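The response targets Prins describes can be expressed as a simple scheduling check. The sketch below is illustrative only; it is not any platform’s API, and the function names are invented here. Only the 24-hour acknowledgement and 48-hour triage targets come from the quote above.

```python
from datetime import datetime, timedelta

# Targets from the article: acknowledge receipt within 24 hours,
# reach a valid/invalid decision within 48 hours of receipt.
ACK_SLA = timedelta(hours=24)
TRIAGE_SLA = timedelta(hours=48)

def sla_deadlines(received_at: datetime) -> dict:
    """Return the acknowledgement and triage deadlines for one report."""
    return {
        "acknowledge_by": received_at + ACK_SLA,
        "decide_by": received_at + TRIAGE_SLA,
    }

def is_overdue(received_at: datetime, now: datetime, acknowledged: bool) -> bool:
    """True if the report has missed either target in its current state."""
    deadlines = sla_deadlines(received_at)
    if not acknowledged and now > deadlines["acknowledge_by"]:
        return True
    return now > deadlines["decide_by"]

received = datetime(2024, 1, 1, 9, 0)
print(sla_deadlines(received)["acknowledge_by"])  # 2024-01-02 09:00:00
```

A real triage queue would pull `received_at` from the bounty platform’s report metadata and alert the on-call engineer as deadlines approach.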

But it also means dealing with the issues once you’ve validated that they’re a real risk. “In terms of escalation, you are going to need a process for the ‘drop everything and work on this now’ type vulnerability.”

“Service providers help you do the front end—the interaction with the hackers, the triage—but they won’t help you prioritize fixing those bugs according to your own business needs,” Moussouris warns. “They won’t help you figure out how many resources you need to devote to engineering so you can fix bugs while you’re still working on general operations or development.”

A significant part of the preparation for the DoD bug bounty was “making sure the back-end engineering teams were ready to receive those bugs,” Moussouris says, because the first reports arrived within minutes of the program going live.

Prins also recommends taking advantage of the people you already pay to provide hardware, software and services, because your problems might turn out to be something they have to fix. “Build strong relationships with the various suppliers of your IT solutions. The day an external party tells you about a problem with your IT, it is easy to leverage that existing relationship and get the vulnerability fixed.”

“When you are buying new IT and evaluating different vendors, make sure to inquire whether they have a way of receiving vulnerability reports from third parties. Learn about how that process works. Personally, I would prefer a vendor that has a vulnerability disclosure program or even a bug bounty program over a vendor that doesn’t have an established process,” Prins says.

Gray areas and gray hats

You don’t have to go as far as running a bounty program to find the crowdsourced bug bounty and penetration testing services useful. Simply setting up an account for your organization gives researchers a way to contact you when they find issues, notes Troy Hunt, the security consultant who runs the haveibeenpwned.com website to help track data breaches that compromise user account information.

“There’s a maturity matrix. Over time, companies can evolve from having no information at all, to offering contact info, to having a HackerOne or Bugcrowd account. Even without necessarily incentivizing researchers, you can say ‘here are the kind of vulnerabilities we will accept and process’.”
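One concrete first rung on that maturity matrix is publishing a security.txt file, standardized as RFC 9116, at the well-known path `/.well-known/security.txt` on your site. The values below are placeholders, not the original article’s recommendations:

```
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Acknowledgments: https://example.com/hall-of-fame
```

The `Policy` field is where you can point researchers at exactly the kind of “here are the vulnerabilities we will accept and process” statement Hunt describes.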

That will help you avoid gray areas—like when a security researcher finds publicly accessible private information and downloads it. Are they being thorough or going too far? Clear guidance on that was one of the key points in the DoD bug bounty, Hunt says. “So many people ask me about ‘where should I stop?’ when they find a vulnerability.”

Using a third-party platform forces you to define clearly what’s acceptable in advance, rather than slowing down your security response by working out the legal and communications implications as you go.

“Understand that more than just the security and engineering or development teams may have to get involved. It’s not uncommon for legal, compliance, PR, and other such teams to play a vital role—even in the most robust vulnerability disclosure and bug bounty programs,” says Nick Harrahill, senior product manager for security company Synack, who also worked on these programs at eBay and PayPal. “It’s pivotal to set clear rules and boundaries for what’s allowed and what isn’t. Additionally, the organization must ensure that the process has legal protections and be prepared to act upon them if breached.”

“Hopefully, they’re ethical individuals simply trying to help, but they can also be individuals with malicious intent attempting to profit from bugs they have found, through extortion (or ‘bug poaching’) or through other malicious means.”

You need a mature security culture to be ready for bug bounties, Harrahill believes, and the key aspect is that “everyone has bought in; they know that this is [a] priority and the researchers [are] vital to [the] security of the company. As much value as it provides, it can also turn out ugly if you don’t give it the proper attention.”

There’s a clear line between ethical security researchers trying to help you (and often wanting nothing in return beyond recognition and seeing the flaw fixed) and criminals trying to hold you ransom over a bug they’ve found. But, Hunt warns, “ransomware and the awareness of it are making some companies more suspicious of the motives of genuine researchers, because we’re getting used to hearing about people wanting money.”

These services don’t just make it easier to manage crowdsourced security testing, but, as Ellis notes, you’ll be getting reports from “a curated crowd of researchers.”

Hunt puts it a little more bluntly: “These programs can sort the wheat from the chaff for you and they know how to handle them.” Especially younger researchers or those from other countries who lack experience in dealing with businesses, he suggests, “have different social norms, they use very different language and have very different expectations. The way they conduct themselves professionally could easily put them in a position of seeming to be malicious when they don’t mean to be. It’s a difficult negotiation that needs to be managed.”

When you get that right, it has big benefits, says Moussouris. “The people who are turning over things for bug bounty programs are ideally people you want to have come back, who you want to have get to know your product or site well, and build good relationships with so they can help you improve.”

Are you ready for a bug bounty?

Because she’s so well known for setting up the Microsoft and Symantec bug bounties, Moussouris says people assume she’ll always recommend them. “I’m Captain Bug Bounty; but I’m also Captain Don’t Hurt Yourself. A bug bounty is not necessarily appropriate for everybody.”

If the thought of inviting even benevolent hackers to check out your security has you breaking out in a cold sweat because you already know they’d find so many problems that you’d be overwhelmed by the reports, you’re not ready for a bug bounty.

You need to start fixing the problems you know about, and get your own patching and testing protocols up to standard first. That might require outside help, in the form of a security consultant, or it might be time to hire a CSO. You need to build a security culture internally, including getting security involved in development and ops, and training non-technical staff to spot and report phishing attempts.

“If you have that first step of a vulnerability disclosure program, you may be ready to refine that and harness the power of hackers who are already willing to report to you, and point them in the direction you choose and are interested in and are willing to offer a reward for. In that situation, it’s a good way to focus,” says Moussouris.

“But if you’ve never done vulnerability disclosure before, you have no real way to communicate with partners, with affected customers or with the media, let alone with hackers. If you’ve never done this before, and you lack the underlying support structures to be able to do regular vulnerability disclosure, a bug bounty would be a really harsh first step. It will not work well for you, if you lack the appropriate ability to triage and fix those vulnerabilities.”

Moussouris suggests two measures to tell if you’re ready for a bug bounty: the volume of bugs you’re used to handling and the velocity at which you can fix them.
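Those two measures can be framed as a back-of-the-envelope check. This is a hypothetical sketch; the function name, field names, and the 30-day fix threshold are illustrative assumptions of this article’s editor, not figures Moussouris prescribes.

```python
from statistics import mean

# Moussouris's two readiness measures: volume (bugs received per month)
# and velocity (how quickly you can fix them). The 30-day threshold
# below is an assumed example, not a recommendation from the article.
def bounty_readiness(monthly_report_counts, days_to_fix, max_fix_days=30):
    """Rough sketch: is report volume non-zero and fix velocity under control?"""
    volume = mean(monthly_report_counts)
    velocity = mean(days_to_fix)
    return {
        "avg_reports_per_month": volume,
        "avg_days_to_fix": velocity,
        "ready": volume > 0 and velocity <= max_fix_days,
    }

print(bounty_readiness([4, 6, 5], [10, 20, 15]))
```

An organization receiving “very few” reports would fail the volume check here, which matches her advice to walk such teams back to a basic disclosure program first.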

“When an organization says they’re ready for a bug bounty, the first question I ask is ‘how many bugs do you receive on a monthly basis’ and if the answer is very few or ‘we don’t have a bug program,’ then I walk them back and tell them they need to start this in the appropriate way.”

Moussouris cautions against viewing a bug bounty as a non-disclosure agreement you’re paying for to buy more time to fix a known bug. “That can work in some circumstances, but if the expectation is that you can pay a very small amount to have unlimited time to fix the bug, that will not meet the expectation and norms of the community. That’s not why they’re turning vulnerabilities over for those minimal bug bounty fees.”

She also warns against thinking of a bug bounty as ‘security QA’. “Some people are thinking, ‘this is cheaper than a penetration test; I’ll just move my budget.’ That is absolutely the wrong approach; that approach will not get you the results you’re hoping for.”

Instead, she suggests viewing vulnerability disclosure and bug bounties as part of your secure development and deployment lifecycle. “You’re trying to build and deploy the most secure systems possible and you have a plan, post release, for servicing bugs. That’s where your vulnerability disclosure program goes. If you’re at the mature point where you’re cycling those bugs back and learning from them, so you’re creating more secure code or changing deployment practices; if you’re doing that, then you might consider a bug bounty program.”
