In the case of seL4, don't confuse formal verification with security. The code matches the spec, and security properties can be extracted from it very precisely, but the spec itself might contain oversights or bugs that would let an attacker do something unexpected.
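To make that concrete, here is a hypothetical toy in Python (nothing to do with seL4's actual spec, all names invented for illustration): the "spec" below only constrains reads, so an implementation that never checks writes still provably "matches the spec" while remaining exploitable.

```python
def can_access(subject_caps, operation, required_cap):
    """Implementation under verification."""
    if operation == "read":
        return required_cap in subject_caps
    return True  # oversight: writes are allowed unconditionally

def spec_allows(subject_caps, operation, required_cap, decision):
    """The spec: 'a read is granted only if the subject holds the capability.'
    It says nothing about writes, so the buggy implementation satisfies it."""
    if operation == "read" and decision:
        return required_cap in subject_caps
    return True

# Exhaustively "verify" the implementation against the spec.
for caps in ([], ["file_cap"]):
    for op in ("read", "write"):
        decision = can_access(caps, op, "file_cap")
        assert spec_allows(caps, op, "file_cap", decision)

print("code matches the spec, yet any subject can write without the capability")
```

The proof of conformance is perfectly sound; the hole lives entirely in what the spec forgot to say.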
If you define security as a "lack of exploitable bugs", then security can never be proven, because it's impossible to prove a negative. Many formally verified systems have also had critical bugs discovered later, such as the KRACK attacks against WPA2, whose 4-way handshake had been formally verified. The formal verification wasn't wrong, just incomplete, because fully modeling a complex system is an intractable problem.
The fact that seL4 doesn't even offer a bug bounty should be a huge red flag that this is still very much an academic exercise, and it should not be used in places where security actually matters.
Besides spec bugs, the seL4 threat model is focused on keeping components isolated. It doesn't deal with most of what we understand as attacks on a workstation at all.
In fact, in an seL4 system, most of the vulnerabilities we find on Linux wouldn't even be in the kernel, and the verification can't cover code that isn't there.
That said, the seL4 model probably does lead to much better security than the Linux one. It's just not as good as the OP's one-liner implies.
> The formal verification wasn't wrong, just incomplete, because modeling complex systems is inherently an intractable problem.
I’m not involved in this kind of research or low-level auditing, but I have some mathematical training and a fascination with the idea of formal verification.
I ran across this thing called the “K Framework” that seems to have invested a lot in making formal semantics approachable (to the extent that’s possible). It’s striving to bridge the gap between academia and practicality, and its creator seems to really “get” both the academic and the practical challenges of something like that.
The clarity of Grigore’s explanations and the quality of what I’ve found at https://kframework.org/ make me think K has a lot of potential, but again, this is not my direct area of expertise, and I haven’t been able to justify a deep dive to judge beyond first impressions.
You’re correct to point out that complex systems are inevitably difficult to verify, but I think tools like K could help provably shrink the attack surface a lot.
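For anyone unfamiliar with the idea, here is a toy Python sketch of what an “executable formal semantics” buys you (this is not K’s notation, and the tiny language is invented purely for illustration): one set of rules defines the language, and both the interpreter and a trace checker fall out of those same rules.

```python
# Semantics of a tiny counter language, as rules on (instruction, state).
RULES = {
    "inc": lambda n: n + 1,
    "dec": lambda n: n - 1,
    "reset": lambda n: 0,
}

def step(instr, state):
    """Single-step semantics: apply the rule for one instruction."""
    return RULES[instr](state)

def run(program, state=0):
    """Interpreter derived directly from the semantics."""
    for instr in program:
        state = step(instr, state)
    return state

def check_trace(program, trace):
    """Checker derived from the *same* rules: does a claimed sequence of
    (state before, state after) pairs actually follow the semantics?"""
    for instr, (before, after) in zip(program, trace):
        if step(instr, before) != after:
            return False
    return True

prog = ["inc", "inc", "reset", "dec"]
print(run(prog))                                              # -1
print(check_trace(prog, [(0, 1), (1, 2), (2, 0), (0, -1)]))   # True
print(check_trace(prog, [(0, 1), (1, 2), (2, 5), (5, 4)]))    # False
```

The appeal is that you only write the semantics once, instead of maintaining a spec, an interpreter, and a verifier that can silently drift apart.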
It certainly makes it a lot easier to audit that the code conforms to its expected design, and that mismatch accounts for most security bugs. This is a fantastic design choice for a security-focused kernel.
I will grant that proving something was implemented as designed does not rule out design flaws, so fair enough.
> If you define security as a "lack of exploitable bugs", then security can never be proven, because it's impossible to prove a negative. Many formally verified systems have also had critical bugs discovered later, such as the KRACK attacks against WPA2, whose 4-way handshake had been formally verified. The formal verification wasn't wrong, just incomplete, because fully modeling a complex system is an intractable problem.
> The fact that seL4 doesn't even offer a bug bounty should be a huge red flag that this is still very much an academic exercise, and it should not be used in places where security actually matters.
https://github.com/seL4/seL4/blob/master/SECURITY.md