
> It would only work if there's a sizeable, technically-inclined userbase of the project so that someone is likely to have audited the code.

Not really. There's a long history of seemingly credible closed-source products turning out to have concealed malicious functionality: smart TVs spying on user activity, the Volkswagen 'dieselgate' emissions scandal, the Sony BMG rootkit. This kind of thing is extremely rare in Free and Open Source software, because the authors would run the risk of someone stumbling across the malicious functionality sitting in plain sight in the source code. Open source also generally makes it easy to strip out malicious functionality once it's found, or even to maintain an ongoing fork for that purpose. (The VSCodium project does this, roughly speaking. [0])

Firefox's telemetry is one of the more high-profile examples of unwanted behaviour in Free and Open Source software, and that arguably doesn't even count as malware.

> If you're malicious, you can still release malicious software with an open-source cover (ideally without the source including the malicious part - but even then, you can coast just fine until someone comes along and actually checks said source).

I already acknowledged this is possible; there's no need to spell it out. Again, I don't have hard numbers, but in practice this seems quite rare compared to malicious closed-source software of the 'ordinary' kind.

A good example of this was SourceForge injecting adware into the installers of open-source projects it hosted. [1]

> Remember that the xz-utils backdoor was only discovered because they fucked up and caused a slowdown and not due to an unprompted audit.

Right, that was a supply chain attack. They seem to be increasingly common, unfortunately.

[0] https://vscodium.com/

[1] https://en.wikipedia.org/wiki/SourceForge#Installer_with_adw...
