Sure, but at the same time, when Google+ had a huge security breach of 52M accounts, Google didn't publicly disclose it until well after the fact, because they didn't think it was serious enough. I wish Google would follow their own principles.
"Please note: this bug is subject to a 90 day disclosure deadline. After 90 days elapse
or a patch has been made broadly available (whichever is earlier), the bug
report will become visible to the public."
By that standard, literally no company in the industry is following these principles, because internal findings are not routinely disclosed. Internal vulnerability researchers have access to information outsiders don't, so, as you can imagine, the bugs you're not hearing about are pretty lurid. Every major tech company in North America spends millions annually on third-party software security tests; did you think those just weren't turning anything up? What did you think was happening to the reports?
For what it's worth, Mozilla routinely discloses internal findings, subject to the same policy as external findings: the bug report is opened up once the fix has shipped to a sufficient fraction of users.
So it's not "literally no company". ;)
Disclosure: I work for Mozilla and I have reported a number of security bugs on our code, the vast majority of which are now public.
Mozilla certainly discloses more than other vendors do, but I'm talking to Mozilla security team members about this now (maybe one of them can jump in here and correct me), and I don't think they can claim that all their internal findings are reliably, and meaningfully in advisory form, disclosed.
Regardless: that's a good point. I should have said, public disclosure of internal findings is not an industry norm. Mozilla is a good counterexample to the argument that everyone close-holds internal findings.
That's a good point about advisories. All the findings are public eventually in the form of non-hidden bug reports, but not all of them may have advisories issued. Doubly so if the finding happens before the affected code has ever shipped in a release (buggy code gets checked in, then internal testing finds a security bug in it before it ships, and that bug is fixed).
I don't think that is the point, though, so much as that Google has one standard for its internal findings and another for Project Zero, which also deals with other companies, with the justification that this is better. Mozilla doesn't audit other companies, so what they do with their internal findings isn't relevant to that argument. One can of course argue whether the policy is good or justified, but I don't think that changes the fact that there is an argument there. If someone wanted to sue Google (ha!) over a Project Zero disclosure, that is likely something they would try to argue: that Google knows disclosure has consequences.
I could be mistaken, but I think internally reported issues that don't make it to release aren't assigned CVE numbers, which might be what he means by "disclosed".
Google's goal with Project Zero is supposedly to raise the stakes in security. I'm happy they're doing it, but if they're going to enforce a non-negotiable 90-day public disclosure policy, it leaves a bad taste in my mouth when Google itself doesn't care to follow it for its own services.
Project Zero has long maintained that any serious company should be able to meet a 90-day disclosure timeframe, and yet here comes Google+...
Project Zero was not the group that discovered the G+ vulnerability, though. Project Zero's terms do not bind other teams within the company who have not agreed to them.
Disclosure is one thing, remediation is another; the former is only instrumental to the latter. In the G+ leak, remediation was swift, so disclosure was not required.
(Btw, the affected accounts were 500K, reportedly, not millions.)