The more concerning security finding here is that Google sat on this for 9 months. Assuming the claims hold, this is a serious problem for any security-conscious GCP customers. What other vulnerabilities are they sitting on? Do they have processes in place to promptly handle new ones? Doesn’t look like it…
This is especially questionable given the much shorter deadline that Project Zero gives other companies to fix bugs before publishing their vulnerabilities (regardless of whether there's been a fix). It only seems fair that Google should hold itself to the same standard.
Companies that use that response are even worse, because they know very well there is no winning move for the researcher. The company has all the responsibility no matter what.
Both are Google - from an outside view we shouldn't distinguish them. Google should hold itself to a consistent bar.
It highlights how divisions at Google operate in silos: Project Zero generates a lot of positive security marketing for Google, but that doesn't mean the quality bar is consistently high across the company.
Also, please don't forget this is still not fixed.
Funny thing is, I agree with you that Google should hold itself to that bar, but I don't agree that Project Zero is the reason. I think we very much should distinguish Google from P0, and that P0's policy should be irrelevant here; their entire purpose is to be an independent team of security researchers finding vulnerabilities in software, indiscriminately. It seems a number of others here feel similarly (judging by the responses), and ironically their support for the position is probably being lost by dragging P0 into the conversation.
The reason I think Google should hold itself to that bar is something else: Google itself claims to use that bar. From the horse's mouth [1]:
> This is why Google adheres to a 90-day disclosure deadline. We notify vendors of vulnerabilities immediately, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix.
If they're going to do this to others as general company policy, they need to do this to themselves.
Are you suggesting that Google make all unfixed vulnerabilities public after 90 days? Would that apply even if the finder does not want them to become public? Or just as an opt-out type of thing?
I'm only suggesting Google needs to fix everything in 90 days (and reveal it afterward, as they consider that standard practice) so they don't have unfixed vulnerabilities past that. I don't really have opinions on what policies they should have for cases where that isn't followed, though I think even having a policy for that case encourages it not to be followed to begin with.
Vulnerability deadlines are disclosure deadlines, not remediation deadlines. There are plenty of vulnerabilities that can't be fixed in that time, and I think it's fair for the public to know about them rather than keeping them secret forever.
"Fair to the public" was neither intended to be nor is the concern. Their stance has always been "better for security" and disclosing an unpatched vulnerability is generally worse for security unless you believe it'll encourage people to fix things by that deadline.
In this case, knowing about the vulnerability allows you to take corrective action. Even if Google cannot fix the root cause, that doesn't necessarily mean there aren't mitigations an end user can apply manually (yes, it sucks, but it's still better than getting hacked).
When users can mitigate it, I agree with you (I forgot about that case in the second half of my comment), but there have also been cases where users weren't able to do anything and they disclosed anyway, so that doesn't explain the policy.
Insecurity is invisible. Users have no way to know the weaknesses in the software they use until it's too late. Disclosure is meant to make it possible for users to see what weaknesses they might have so they can make informed decisions.
Users still benefit from knowing about issues that can't be fixed (think Rowhammer, Spectre, and similar), so that as these attacks become more practical (e.g. https://leaky.page or Half-Double) they can adjust their choices accordingly (switching browsers, devices, etc.) if the risk imposed is too high.
Of course (using an analogy for a second), some can say that it would be better for people to never find out that they are at increased risk of some incurable disease, because they can't do anything about it.
But for software, you can't make individual decisions like that. Even if one person doesn't want to know about vulnerabilities in the software they use, others could still benefit from knowing about them, and the benefit of the many trumps the preferences of the few.
That is, unless the argument is that it's actively damaging for all of the public (or the majority) to know about vulnerabilities in the software they use. If the point is to advocate for complete, unlimited secrecy, and for researchers to sit on unfixed bugs forever, then that's quite an extreme view of software security and vulnerability disclosure (though one that some companies unfortunately still follow).
Disclosure policies like these aim to strike a balance between secrecy and public awareness. They put the onus of disclosure on the finder because it's their finding (and they are the deciders on how it's shared), and finders are more independent than the vendor, but I could imagine a world in which disclosure happens by default, by the company, even for unfixed bugs.
What is the thing being implied? Like as far as I can tell, Google's position seems to be that "it is best if vuln researchers have the freedom to disclose unfixed issues, especially after reporting them".
People criticize P0 for publishing issues despite companies asking for extensions. But we're criticizing Google here for...what? They didn't ask for an extension, they didn't try to prevent this person from disclosing. Where is the hypocritical thing?
The complaint is that Google's stance with Project Zero is "90 days is plenty sufficient; you're a bad vendor if you can't adhere to it", and then Google itself doesn't adhere to it, which implicates themselves here.
I see what they're saying if you lump them together; I just think it makes sense to treat P0 a little independently from Google. But otherwise it's got a point.
That's a common sentiment I just don't buy. People here love to hand-wave about some vague "benefit to the public", and maybe there is some benefit when the vulnerability can be mitigated on the user side, but that literally cannot be the case for the fraction of vulnerabilities about which entities other than the vendor can do nothing. The only "benefit" is that it satisfies people's curiosity, which is a terrible way to do security. Yet P0 applies that policy indiscriminately.
> Can you point out the second part, specifically where "you're a bad vendor if..." is either stated or implied by P0?
As to your question of where this is implied by P0: to me, their actions and the lack of a compelling rationale for their behavior, as I explained above, are already plenty to imply it. But if you won't believe something unless it's in an actual quote from them, I guess here's something you can refer to [1]:
- "We were concerned that patches were taking a long time to be developed and released to users"
- "We used this model of disclosure for over a decade, and the results weren't particularly compelling. Many fixes took over six months to be released, while some of our vulnerability reports went unfixed entirely!"
- "We were optimistic that vendors could do better, but we weren't seeing the improvements to internal triage, patch development, testing, and release processes that we knew would provide the most benefit to users."
- "If most bugs are fixed in a reasonable timeframe (i.e. less than 90 days), [...]"
All the "reasonable time frame (i.e. < 90 days)", "your users aren't getting what they need", "your results aren't compelling", "you can do better", etc. are basically semi-diplomatic ways of saying you're a bad vendor when you're not meeting their "reasonable" 90-day timeline.
They literally directly describe it as a benefit to users, the sentiment you don't buy, and they never actually call vendors bad, unless you interpret "less benefit to users" as a moral impugnment of the vendors.
> They literally directly describe it as a benefit to users
"It" in that sentence does not refer to their own unpatched disclosures.
> They never actually call vendors bad, unless you interpret "less benefit to users" as a moral impugnment of the vendors.

What you cite proves my point!
They didn't fix it within that timeline. I don't know why everyone is saying "well they didn't stop disclosure in 90 days", but they didn't fix it in the timeline that they have allocated as being reasonable for all vulns they report.
At the limit, what you're saying would mean that vendors should feel obligated to fix issues they don't consider to be vulnerabilities, as long as they're reported as such. That'd clearly be absurd. Is there maybe some additional qualifying factor that's required to trigger this obligation that you've left implicit?
If you're leaving the determination to the vendor, they could just avoid the deadline by claiming it is not a vulnerability. That seems like a bad incentive.
There are things that literally cannot be fixed, or where the risk of the fix is higher than the risk of leaving the vulnerability open. (Even if it is publicly disclosed!)
It seems that we're all better off when these two concerns are not artificially coupled. A company can both admit that something is a vulnerability, and not fix it, if that's the right tradeoff. They're of course paying the PR cost of being seen as having unfixed security bugs, and an even bigger PR cost if the issue ends up being exploited and causes damage. But that's just part of the tradeoff computation.
I don't know what point you're trying to make here. Google acknowledges that this is a vulnerability ("nice catch"), Google pushes every other company to fix vulns in 90 days (or have it publicly disclosed, which is based on the assumption that vulns can be fixed in that time), and Google did not fix it in 90 days.
If you're asking me to create a perfect framework for disclosure, I'm not interested in doing that, and it's completely unnecessary to make a judgment of this single scenario.
> A company can both admit that something is a vulnerability, and not fix it, if that's the right tradeoff.
Google's 90-day policy is designed explicitly to give companies ample time to patch. And yes, this is them paying the PR cost - I am judging them negatively in this discussion because I agree with their 90-day policy.
I am saying that there are things that are technically vulnerabilities that are not worth fixing. Either they are too risky or expensive to fix, or too impractical to exploit, or too limited in damage to actually worry about. Given the line you drew was that there must be a fix in 90 days, if the company agrees it is a vulnerability, the logical conclusion is that the companies would end up claiming "not a vulnerability" when they mean WONTFIX.
If you think this particular issue should have been fixed within a given timeline, it should be on the merits of the issue itself. Not just by following an "everything must be fixed in 90 days" dogma. All the repeated invocations of PZ have achieved is drowning out any discussion of the report itself: how serious/exploitable it actually is, how it would be mitigated/fixed, what might have blocked that being done, etc. Seems like those would have been far more interesting discussions than a silly game of gotcha.
(If you believe there is no such thing as a vulnerability that cannot be fixed, or that's not worth fixing, then I don't know that we'll find common ground.)
> Given the line you drew was that there must be a fix in 90 days, if the company agrees it is a vulnerability, the logical conclusion is that the companies would end up claiming "not a vulnerability" when they mean WONTFIX.
OK, but that doesn't apply here, which is why I don't get why you're bringing up general policy issues in this specific instance. Google did acknowledge the vulnerability, as noted in the disclosure notes in the repo.
So like, let me just clearly list out some facts:
* Project Zero feels that 90 days is a good timeline for the vast majority of vulns to be patched (this is consistent with their data, and appears accurate)
* This issue was acknowledged by Google, though perhaps not explicitly as a vulnerability; all I can see is that they ack'd it with "Good catch", which I take as an ack of the vulnerability
* This issue is now at 3x the 90-day window that P0 considers sufficient, in the vast majority of cases, to fix vulnerabilities
I don't see why other information is supposed to be relevant. Yes, vendors in some hypothetical situation may feel the incentive to say "WONTFIX" - that has nothing to do with this scenario and has no bearing on the facts.
> If you think this particular issue should have been fixed within a given timeline, it should be on the merits of the issue itself.
That's not P0's opinion in the vast majority of cases - only in extreme cases, to my knowledge, do they break from their 90-day disclosure policy.
> Not just by following an "everything must be fixed in 90 days" dogma.
Dogma here is quite helpful. I see no reason to break from it in this instance.
> Seems like those would have been far more interesting discussions than a silly game of gotcha.
I'm not saying "gotcha", I'm saying that:
a) 9 months to fix this feels very high; Google should explain why it took so long in order to restore confidence
b) The fact that they have an internal culture of 90 days being a good time frame for patching merely makes it ironic - it is primarily the fact that I think this should have been patched much more quickly that would bother me as a customer.
> (If you believe there is no such thing as a vulnerability that cannot be fixed, or that's not worth fixing, then I don't know that we'll find common ground.)
Nope, 100% there are vulns that can't be fixed, vulns that aren't worth fixing, etc. But Google didn't say this was a "WONTFIX", and they did ack that this is a vuln. If it wasn't possible to fix, they could say so, but that isn't what they said at all; they just said they weren't prioritizing it.
If it's the case that this simply isn't patchable, they should say so. If they think this doesn't matter, why not say so? It certainly seems patchable.
It's not what happened, but the logical outcome of what you propose. Right now the rules are simple: "disclosure in 90 days, up to you whether to fix it". What you're proposing is that it is no longer up to the company to make that tradeoff. They must always fix it.
> That's not P0's opinion in the vast majority of cases - only in extreme cases, to my knowledge, do they break from their 90-day disclosure policy.
Again, that is a disclosure timeline. Not a demand for a fix in that timeline. In general it's in the vendor's best interest to release a fix in that timeline, especially given its immutability. You're trying to convert it to a demand for a fix no matter what. That is not productive.
> a) 9 months to fix this feels very high; Google should explain why it took so long in order to restore confidence
So why not argue for that explicitly? It seems like a much stronger approach than the "lol PZ hypocrisy" option.
You're trying to talk about consequences of my statement, which I'm trying very hard not to talk about, because I don't care. I'm only talking about this very specific instance.
> Again, that is a disclosure timeline. Not a demand for a fix in that timeline.
Yes, and it is based on the expectation that a fix within that timeline is practical.
> You're trying to convert it to a demand for a fix no matter what. That is not productive.
No I'm not; you're trying to say that I am, repeatedly, and I keep telling you I don't care about discussing disclosure policy broadly. I'm only talking about this one instance.
> It seems like a much stronger approach than the "lol PZ hypocrisy" option.
Take that up with the person who posted about P0 initially. I'm only saying that it's ironic, that I support the 90-day window as a very reasonable time to fix things, and that them going 3x over is a bad look.
> Again, that is a disclosure timeline. Not a demand for a fix in that timeline. In general it's in the vendor's best interest to release a fix in that timeline, especially given its immutability. You're trying to convert it to a demand for a fix no matter what.
I don't see what form it would take if it were a demand, in your view. We have a disagreement between private entities over a vulnerability; how would one "force" the other to fix it except by disclosing it? Hold someone hostage?
> Google pushes every other company to fix vulns in 90 days (or have it publicly disclosed)
I believe you're mistaken about the conditional publishing. The 90-day clock starts when Google reports the bug - they will make it public whether or not the vulnerability is remediated (with very few exceptions). By all appearances, Google is very willing to be on the receiving end of that, on the basis that end users can protect themselves once they have the knowledge - in this case, GCE users are now aware that their servers are exploitable and can make changes, like moving to AWS. I think the 90-day clock is a reasonable stance to take, for the public (but not necessarily for the vendor).
http://g.co/appsecurity has more details, but the TL;DR is that Google is supportive of people disclosing unfixed bugs after 90 days, which is what happened here.