Just over two years ago, a triumvirate of security researchers – Charlie Miller, Alex Sotirov, and Dino Dai Zovi – announced what they hoped would become an internet meme: “No more free bugs.”
Their argument was that non-aligned security researchers who find security-related bugs ought to be paid for disclosing them to the relevant vendor. No money, no report.
You can also argue that vendors, especially of web-based services, who offer to pay a reasonable fee for bugs – and why limit bug-hunting just to security flaws? – are more likely to attract the goodwill and bug-hunting skills of independent researchers and observant home users, and will therefore end up with better-quality products and services than vendors who don't.
(Computer science luminary, high priest of the analysis of algorithms, pipe-organ buff, funky Biblical scholar and all-round Good Guy, Donald Knuth – you’ve either heard of him or are about to go and read up about him – famously pays a bounty for any and all errors, no matter how small, found in his publications.
Spelling mistakes, factual errors, historical inaccuracies, incorrect index entries: all qualify for a reward of at least $2.56. That’s 100 hexadecimal cents.)
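If you want to check Knuth's arithmetic yourself, here's a trivial sketch (the variable names are mine, not Knuth's): 0x100 in hexadecimal is 256 in decimal, so 0x100 cents comes out at $2.56.

```python
# Knuth's reward: 0x100 (hexadecimal) cents, converted to dollars
cents = 0x100                    # hex 100 = 16 * 16 = 256 decimal cents
reward = f"${cents / 100:.2f}"   # format as a dollar amount
print(reward)                    # $2.56
```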
Facebook is the most recent company to come to the bug-bounty party, recently announcing that "to show our appreciation for our security researchers, we offer a monetary bounty for certain qualifying security bugs."
There’s been general approval of this step, though a few observers have claimed that Facebook’s bounty is a bit on the cheap side. Google, say the Facebook detractors, offers US$3133.70 for bugs, and Mozilla US$3000, compared to Facebook’s typical starting bounty of US$500.
In fact, the detractors are wrong. Google’s offer to start paying for web application bugs explicitly opens the bidding, just like Facebook, at US$500.
Google’s Chromium bug bounty also started at US$500, a figure Google says it copied from Mozilla. The higher figures are for more serious bugs – something Facebook also says it will pay extra for.
So Facebook has definitely taken a step in the right direction here, and its “budget price” for bugs matches what other industry giants are offering. Nice one, Facebook.
Are there any downsides?
The bad news is that Facebook is only interested in security reports to do with explicit web coding flaws, such as XSS (cross-site scripting) bugs or code injection faults. Bugs or shortcomings in the company’s general attitude to security don’t count.
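For readers wondering what counts as an "explicit web coding flaw", here's a minimal, hypothetical sketch of the classic reflected-XSS mistake – echoing user input straight into a page's HTML without escaping it – alongside the one-line fix. (The function names and payload are illustrative, not taken from any real site.)

```python
import html

def greet_unsafe(name):
    # BUG: user input is embedded into HTML verbatim, so a name like
    # '<script>alert(1)</script>' becomes live script in the victim's browser
    return f"<p>Hello, {name}!</p>"

def greet_safe(name):
    # FIX: escape HTML metacharacters so the input renders as inert text
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(greet_unsafe(payload))   # the script tag survives intact
print(greet_safe(payload))     # &lt;script&gt;... is displayed, not executed
```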
Sadly, that means you can’t grab yourself a quick $1500 by simply sending in Naked Security’s Three Simple Steps To Better Facebook Security from our open letter earlier in the year. If you missed them back then, they were:
* Privacy by default.
* Vetted application developers.
* HTTPS for everything.
In fact, Facebook won’t pay for bugs in third-party applications at all, even though those applications carry an implicit endorsement by knitting themselves into the fabric of Facebook itself, and even though Facebook still doesn’t have a decent application vetting process.
That’s a pity.
So too is the verbiage in Facebook’s Responsible Disclosure Policy. You might expect that this would merely limit bug payouts to people who give Facebook time to fix the bugs before they announce them to the world.
It does, but also adds the following:
If you give us a reasonable time to respond to your report before making any information public and make a good faith effort to avoid privacy violations, destruction of data and interruption or degradation of our service during your research, we will not bring any lawsuit against you or ask law enforcement to investigate you.
To me, this wording comes across as pretty scary stuff. Facebook, if you want to draw attention to the threat of lawsuits and of calling the cops, why not stick to doing so against the huge number of scammers who already plague your social network?
Please don’t write what sounds eerily close to a threat to the very security researchers you want to get working on your behalf!