At least they let him publish, albeit under a pseudonym. It makes me wonder how many potentially useful discoveries were made in industrial settings, and wound up being buried due to management not wanting to risk leaking competitive information. The good news, I suppose, is that if you believe it's rarely the case that only one person could ever discover something, then you can conclude that all (most?) such discoveries were eventually, or will eventually be, rediscovered independently.
On a related note... I wonder how much valuable research disappears (more or less) when companies fold, get acquired, etc. Take MCC[1] for example. I've been doing a lot of reading lately that involves old papers from the 1990s on "agents" and "multi-agent systems". And time and time again, in the references, you'll see something like "MCC Technical Report TR86-32791" or some such. Occasionally said report can be found online, but quite a few of them seem to be either hard, or impossible, to find. Maybe there's an archive of physical papers stored away somewhere, but FSM knows where the heck such a thing would be, or how hard it would be to get access.
A similar situation came up a while back when we started discussing "sharding" here on HN[2]. There was a lot of effort spent trying to identify when the term first arose, and a lot of the evidence pointed to a particular paper that was internal to CCA, who were acquired by Xerox. And now that original paper seems to be unobtainium. It probably still exists somewhere in the bowels of Xerox, but good luck ever getting your hands on it.
> It makes me wonder how many potentially useful discoveries were made in industrial settings, and wound up being buried due to management not wanting to risk
Probably a lot. I’ve come to find out that some dinosaur companies won’t even let their programmers open issues on open source repos (never mind sending patches or releasing their own software).
The logic goes like this: if someone had found the log4j zero-day before it was reported, they could have combed through all the issues, seen which companies the commenters worked for, and then tried to target them. In that light, any comment would indicate possible involvement.
The least bit of security, through the tiniest extra bit of obscurity. Thankfully, many of these companies are starting to come around and realize that a lack of involvement with open source is riskier than accidental third-hand information leaking (like which dependencies a certain company does, or doesn’t, use).
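To give a rough sense of how little effort that kind of "scraping" actually takes, here's a minimal Python sketch against the public GitHub REST API. The repo name is just a placeholder, unauthenticated calls are heavily rate-limited, and deleted users aren't handled; it simply collects the self-reported company field of recent issue commenters.

    import requests  # third-party HTTP library

    API = "https://api.github.com"
    REPO = "apache/logging-log4j2"  # placeholder repo, purely illustrative

    def commenter_companies(repo, max_issues=10):
        """Collect the self-reported 'company' field of recent issue commenters."""
        # Note: unauthenticated requests are limited to 60/hour; pass a token
        # in an Authorization header for anything beyond a toy run.
        issues = requests.get(f"{API}/repos/{repo}/issues",
                              params={"state": "all", "per_page": max_issues},
                              timeout=10).json()
        companies = set()
        for issue in issues:
            # Each issue object links to its comments; each comment links to a user profile.
            for comment in requests.get(issue["comments_url"], timeout=10).json():
                profile = requests.get(f"{API}/users/{comment['user']['login']}",
                                       timeout=10).json()
                if profile.get("company"):
                    companies.add(profile["company"])
        return companies

    print(commenter_companies(REPO))

That's the whole "attack": the company field is public profile data, volunteered by the users themselves. Which is also why the policy mostly buys obscurity rather than security.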
The easiest counter to this is that, to my knowledge at least, it’s easier to build a vulnerability scanner than to scrape repos for more targeted attacks.
The "No lieutenant, your men are already dead" defense. I like it.
I think that if your threat model includes nation states (and the companies I was referencing above were largely S&P 500 financial institutions), then you have to assume the attacker also doesn’t want to trip any alarms with a ham-fisted port scan blasting their precious zero-day exploit all over the internet. Your point is still extremely valid, though.
Which is why the counter I offered is that the best defense is to get as many engineers’ eyes on the problem, and in the codebase, as possible, to prevent or find it before it becomes an issue. Things like the xz backdoor are scary, but they’re even scarier if not caught before they’re in the wild.
The dirty secret is that nation states can get your software dependency list pretty easily in a number of ways (e.g. sending agents to meetups to nerd out & make friends would be an expensive way, but there are other social engineering attacks I’ve observed).
The other secret is that monitoring software can’t detect anomalies ahead of time & the vulnerability scan will not show up meaningfully differently from all the other random traffic already happening. Your nation state can hide its vulnerability scan amongst all the other vulnerability scanners already running (both the legit ones offered as a service when you request a scan against your own server & the illegitimate actors trying to find a way in). So at best a ham-fisted search is unlikely to really tip their hand in a meaningful way, unless it requires having penetrated a few layers of your security to begin with.
As for xz, the scary part is that as an industry we recognize the security challenge of not compensating maintainers, and yet we have lackluster responses to fixing it (e.g. Google trying to pay OSS maintainers to harden their security while completely ignoring that a huge part of the problem is that maintainers can’t devote full time to their projects, which opens an avenue for malicious actors to overwhelm maintainers & take control socially, as happened with xz).
[1]: https://en.wikipedia.org/wiki/Microelectronics_and_Computer_...
[2]: https://news.ycombinator.com/item?id=36848605