If this discussion has caught your interest and you have experience in Linux internals, you might be interested in joining the team at Boeing that is working towards broader use of Linux in aerospace. Here are two links to open positions:
We have a related issue in US civilian Federal agencies, where the IT security posture has been moving for some time to a formal compliance scheme. The idea is that to manage things at scale, it's desirable to have certified solutions, and mandate a very broad set of controls.
This makes general-purpose Linux systems a hard sell -- Ubuntu and RHEL have e.g. FIPS-validated encryption stacks, but they're generally older releases (currently Ubuntu 20.04 is certified and 22.04 certification is pending), and of course limiting your choice of distro is unwelcome for computational researchers. For data at rest, there are certified self-encrypting hard drives, but they are very hard to source, in part because the FIPS 140-2 suite is also very old, and the newer FIPS 140-3 suite is not yet certified.
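As an aside, on Linux kernels built with FIPS support, whether the system booted in FIPS mode is exposed via the kernel's `crypto.fips_enabled` sysctl. A minimal sketch of checking it (assuming the kernel actually exposes the flag; the function name here is just for illustration):

```python
from pathlib import Path

def fips_mode_enabled():
    """Report whether the kernel booted in FIPS mode.

    Reads /proc/sys/crypto/fips_enabled, which is "1" when FIPS mode
    is on. Kernels built without FIPS support don't expose the file,
    in which case we conservatively report False.
    """
    flag = Path("/proc/sys/crypto/fips_enabled")
    if not flag.exists():
        return False
    return flag.read_text().strip() == "1"
```

Note this only tells you the kernel flag is set; actual FIPS compliance also depends on using the validated userspace crypto modules.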
There are probably ways around this; the diversity and flexibility of Linux cuts both ways, so you could perhaps run a FOSS VM infrastructure on top of a certified hypervisor and get the best of both worlds that way, but it's a lot of work.
And unlike in the aviation-safety world, it's not clear that the certified solution is technically better. It has pluses and minuses, but the biggest plus is administrative, not technical -- it's easy to check.
All I have to say from personal experience (some of it gained working for a big bank) is that if you want and seek compliance, you will get neither security nor safety - but you will get compliance. :)
Whoever tells you otherwise has got a bridge to sell, as well as some compliance- and "security"-facilitating "solutions" on top.
In order to fly, current regulations in most countries require that the aircraft be flight certified by the named regulatory authority. In the US, the regulatory agency for civilian aircraft is the Federal Aviation Administration (FAA), so compliance with their stated standards is not really optional. It is also true that compliance does not necessarily mean security or safety has been completely achieved; checking all the boxes does not guarantee 100% safety. So we also depend on professionals who go beyond simply doing the minimum and who truly care about the safety of the flying public.
This has been my exact experience (some of it gained working adjacent to a big bank).
At a certain level, folks dropped any real pretense that the compliance regulations in industry were for anything other than shifting liability around and ensuring you can check the right checkbox when doing sales or getting audited.
Actual security varied widely, and had zero relation to the compliance checklists.
We're trying to fix this problem at Chainguard. We have our own Linux distro that packages modern versions of software (like minutes or hours after it's released), as well as older versions.
We're also working on FIPS 140-2 and 140-3, and we support pretty much every compliance framework we can find.
"Linux does not have a safety culture... Linux does not have a quality culture."
While this is critical for an airplane (as well as an automobile), I would think the seeds of this would be desirable in corporate application servers as well. Someone might not lose their life if an app server goes down, but the "move fast and break things" culture only gets you so far before a culture with adult supervision and an eye toward stability is required.
Well nobody is ever against "more safety", it's just when you ask how much it is allowed to cost that the sputtering starts. Most companies don't even want to pay for basic maintenance or to pay license costs for a programming language, let alone that they would want to pay for libraries that could also be had for free in a FOSS version.
Every FOSS library rightly comes with a license that states in all caps that "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND". That is pretty directly opposed to the kind of "We guarantee this software will perform within spec 100% of the time" assurances that airplane manufacturers want.
You can definitely get CPUs which run in pairs and verify amongst themselves that they both come to the same answer when running the same code on the same inputs: https://en.wikipedia.org/wiki/Lockstep_(computing).
Being physical devices, even if the design adheres to the spec any individual CPU might still be defective of course. You can only test "is it still good" by periodically running one through a test suite and re-certifying that it still returns the correct answers, but there is never any guarantee that it won't break a second later due to freak circumstances. You can get down to acceptable margins though.
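As a rough illustration of the checking described above (a hypothetical Python sketch; real lockstep runs in hardware, comparing bus signals every clock cycle), dual-redundant execution detects a fault but cannot say which copy is right, so the safe action is to stop:

```python
def run_in_lockstep(step_fn, state_a, state_b, inputs):
    """Run two copies of the same computation and halt on divergence."""
    for x in inputs:
        state_a = step_fn(state_a, x)
        state_b = step_fn(state_b, x)
        if state_a != state_b:
            # A disagreement means at least one copy faulted; with only
            # two copies we can detect the fault but not arbitrate it,
            # so we fail safe rather than continue with a bad answer.
            raise RuntimeError("lockstep mismatch: entering fail-safe state")
    return state_a

# Both copies agree on every step, so the result is accepted.
total = run_in_lockstep(lambda acc, x: acc + x, 0, 0, [1, 2, 3])  # -> 6
```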
Using redundant systems helps detect (dual redundant) and even correct (triple modular redundant) physical faults in the system, where one system will provide a different answer than the other(s). System redundancy does not detect/correct design flaws, which are a common mode failure. Catching design flaws is currently done by testing (comparing against known correct answers) and peer review (by domain experts). Someday, mathematical proofs might be used (known as "formal methods"), but currently these are only feasible for very small software projects, such as the seL4 project that formally proved correctness for around 10K lines of code.
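The detect-vs-correct distinction can be sketched as a majority voter (illustrative Python only; a real TMR voter is hardware or heavily verified code):

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Majority vote over three redundant results.

    Two matching answers outvote a single faulty one, so a single
    physical fault is masked. Note a design flaw is a common-mode
    failure: all three channels compute the same wrong answer, and
    the vote cannot catch it.
    """
    winner, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        # More than one channel disagreed: no majority, fail safe.
        raise RuntimeError("no majority: multiple channels faulted")
    return winner

assert tmr_vote(42, 42, 41) == 42  # single faulty channel is outvoted
```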
You can check TI's Hercules safety MCUs, which have two cores operating in lockstep. That guards against hardware faults, I think.
In railways I have seen two systems running the same processing, checking each other's results, and only then sending them out. If the results disagree, the system fails gracefully and safely.
It’s wild to me that software engineers don’t need to be certified in a similar vein to civil engineers.
As more and more of our comms move into walled gardens over the internet, the opportunity for a bug to indirectly result in injury or death grows. All it takes is a bug in a billing system that prevents a delinquent customer from contacting emergency services.
This was actually something that my course (one of the first computer engineering degrees in Portugal) struggled with back in 1994.
Engineering guilds (and governments) are simply not able to cope with the concept of software being critical, but the key thing at the time was that software engineering was not even acknowledged as “engineering” by the local Engineering guild (which was stuck in a loop around the physicality of civil, electronics and mechanical engineering).
I think the true reason for this is that software is orders of magnitude more complex than civil engineering - and everything you build is pretty much a whole new thing. It's not like there's hundreds of years of bridge design, with gradual improvements made to it over the decades.
Computers would probably have ended up a lot better if we'd taken that sort of an approach, but it's much too late now.
That said, when it comes to safety-of-life systems, there are very stringent processes in place. Linux does not even remotely fit into its workflows. Linux will never be a safety system, and we should be glad that it isn't.
> software is orders of magnitude more complex than civil engineering
Come off it. Even if software was that superhumanly complex (most software actually isn't all that complicated, most of it isn't even new, and it has a unique benefit that it can be easily isolated and modularised in a way that physical things usually cannot, and you can easily "checkpoint" things that work and iterate), civil engineering is just as complex and fraught. Buildings, bridges, railways, etc have thousands upon thousands of interlocking details to get right. Structural physics, ground conditions, materials, electrics, HVAC, drainage, specification and sourcing of parts, safety, regulations, building scheduling. It goes on and on. And fixing stuff is more than pushing a new commit.
Nothing else involves as much invention. We know intimately everything that goes into buildings & have built very similar buildings that serve very similar uses millions of times. Each piece of software has unique intent & is hugely hand-crafted.
You cannot overbuild for software. You can over-scale. But we don't have plumbing, electricity, foot traffic, floor space, or structural capacities as numerical behavioral properties that we can overshoot.
Both are complex but code is arbitrarily complex. The ways things can go wrong is unbounded.
To me, it is exceedingly clear that almost every other industry on this planet has far, far better developed industrial practices and standards. Their practitioners can go to school & learn how to do the thing.
There's simply no commonly accepted curriculum for how to actually build good software. There are too many possibilities, too many ways to succeed, and so few common threads that actually winnow down success or failure. Software has a couple of cobbled-together Christopher Alexander style Design Patterns that can kind of inform some repeatedly usable ideas. And we have millions upon millions of libraries, each of which probably could become a library that is in the top 10% of usage, if conditions were right. But there's just so little rhyme or reason to it all. Software is all happenstance.
You can try to snark your way out of this & shade the difference out, but it should just be so obvious & clear. Software is not as mature. Its practitioners have infinitely more possibilities & far fewer constraints. Most of it ends up working fairly well, to such a degree that it's near impossible to judge how far from optimal it is or how much better it could be working. Almost nothing else is so adrift, so unable to measure & understand how successful it is. This should just be clear & obvious. Your protests don't move me. It should be obvious.
The idea that there is some finite, known set of things that can go wrong when you build in the physical world seems naive at best.
If anything, software is more bounded than the physical world. A software program is composed of discrete states. The number of such states is finite and bounded by the memory of the computer (you can only form so many different program states in a finite memory space). In contrast, just cutting a board to length has an uncountably infinite number of ways for it to be wrong.
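To make the counting argument above concrete (a toy sketch, nothing more): a machine with n bytes of memory has at most 2^(8n) distinct states, which is astronomically large but still finite.

```python
def max_states(memory_bytes):
    """Upper bound on distinct states of a machine with this much memory.

    Each byte holds 8 bits, each bit has 2 settings, so the state
    space is 2 ** (8 * memory_bytes) - huge, but finite and, in
    principle, enumerable.
    """
    return 2 ** (8 * memory_bytes)

assert max_states(1) == 256  # a single byte has exactly 256 states
```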
> and everything you build is pretty much a whole new thing.
There's a ton of cookie-cutter systems being built, especially as things like cloud bring standardization (mostly to achieve vendor lock-in, but that's a different point).
Building the 100th web app for some random business isn't a whole new thing.
> I think the true reason for this is that software is orders of magnitude more complex than civil engineering - and everything you build is pretty much a whole new thing.
This is a lie software devs tell themselves to justify producing a shoddy product while simultaneously patting themselves on the back for the brilliance they must possess to navigate such wondrous complexity.
IMO, it’s because software engineering is still in infancy. There are very rapid changes in what and how you build things every 5-10 years.
But I agree with the point. I think we are at the stage where we view coders (whose main job should be to "just" write great code) and engineers (whose main job should be to build safe and cost-effective solutions to problems) as interchangeable.
Funny enough I work in a small aerospace company and I have been tasked with engineering things before. Technically my initial position was as an engineer and not an SWE even though my degree was in CS
> It’s wild to me that software engineers don’t need to be certified in a similar vein to civil engineers.
At the moment any coder can claim to be a software engineer.
Certification is needed so we can draw a distinction between those who can code and those who can engineer software solutions, as well as holding the latter to a higher standard.
In France it is: "engineer"-the-title specifically means having a degree from a state-certified engineering school, and when you get the degree the state inscribes you as an engineer in a national registry.
Whether it's software engineering or another domain is immaterial: the title "engineer" is heavily protected, and making use of it without the degree is liable to prosecution, the same way claiming to be a doctor and delivering medical advice when one is not would be. As with the Hippocratic oath, engineers are held to a high moral standard of ethics and integrity. The words "élite de la nation" often come up - often in derogatory ways from outsiders - but they are borne out by these requirements: engineers are to be placed in positions of great power and corresponding responsibility, meaning they have a duty to call the shots in high-stakes situations in ways that go beyond technical knowledge and mastery and also factor in intellectual and moral values.
One can get a masters degree in computer science and/or software development though, which is the same "knowledge level" of studies (an engineering degree doubles as a masters degree), but that does not make one an engineer, which is a corps (in the same sense that e.g US Marines are a corps). This dates back to Napoleon and the creation of "classes préparatoires" and public (state-owned) engineer schools, heavily inspired by the military. Case in point Polytechnique is a military school (although the goal is not to produce armymen), and entering any of the three french military corps (the navy, the air force, and the "ground" army) as an officer goes through the same classe préparatoire process.
This design has been relaxed over time, and for quite some time now there have been private engineering schools that don't require going through those first two prépa years, but they still require the state certification to legally produce engineers.
Certification isn't "needed" in any meaningful sense. But the IEEE does offer a pretty good software certification program if that's what you want. You're welcome to make certification a hiring requirement at your company, and it will ensure that candidates know at least the basics.
Certification is needed to hold software "engineers" accountable for shoddy software. Eventually I expect anglo countries to catch up to what other developed countries do and regulate the term.
There is absolutely a need, and it is inevitable licensing/certification will happen at some point.
If software engineers screw up code for say a pacemaker, or an airplane, or a self-driving car, they should be held to the same standards as a civil engineer who was negligent in a bridge design which led to deaths.
Nah, you're just making things up. Irrational cranks have been pushing for certification for decades but in the real world no one cares.
For pacemakers the FDA already certifies medical devices based on the entire design, manufacturing, and testing process of which software is only one part. Requiring certification for the programmers involved would be just stupid and accomplish nothing.
I'm not making anything up. Certification/licensing is the way it's done in most developed countries and for good reason.
Nothing irrational for pushing for it either. Plenty of advantages. It's mostly irrational cranks arguing against it because they think they will lose some degree of freedom.
As I said though, it will happen eventually, and inevitably.
It's actually a thing that's only close to universal for civil engineering. Electronics and mechanical engineers for aircraft, for example, are not usually licensed in the US nor the UK, and in places like Canada where the use of the term is more heavily regulated, it had more of an impact on job titles than anything else. Like a lot of aspects of other branches of engineering, I think it's often viewed with rose-tinted glasses by software engineers. Basically everything that software engineers complain about happens with other disciplines as well!
It’s kind of like anyone with a bit of common sense and a few common tools can build a doghouse, but you need to know a lot more to build a skyscraper.
Linux competes not only on the desktop with Windows and Mac OS, but also on the server and cloud environments, and also in embedded environments. It is highly scalable to many use cases. No, we don't expect to use Linux right off the shelf. As the full presentation noted we would create a carefully curated profile to go on the airplane. And that work to curate and manage it, and producing certification evidence of correctness -- is definitely not free.
> the "move fast and break things" culture only gets you so far before a culture with adult supervision and an eye toward stability is required.
You can run multi billion dollar companies that way (Reddit, Twitter, Meta, Tesla), hell you can get richest-man-in-the-world by shitting on everything.
Our point was a bit more nuanced. We pointed out the problem (and also the value) of collaborative innovation. Then we also noted that, with some care, we think a carefully customized distribution could be shown to be sufficiently safe.
> the "move fast and break things" culture only gets you so far before a culture with adult supervision and an eye toward stability is required.
In my opinion, we are still in a "pioneer" phase of software engineering, where expansion (aka "software is eating the world") is prioritized over consolidation. But I think this is already starting to change; one example I've observed (in corporate application servers) being an increased focus on software supply chain management.
Yes, that's it. We want to take advantage of the innovative technology in the "expansion" but need to carefully manage what we actually put on the airplane to demonstrate (with evidence) that it is well suited for the intended purpose. Which I think is what you mean by "consolidation".
I have seen SysGo's PikeOS being used in railway signalling systems. Also ThreadX in military aerospace and medical devices, and Micrium in medical devices.
Let them go implement Fuchsia. It will check all their boxes right before it slams into a mountain from a km-to-mi conversion.
I thought we handled this years ago. Coming from aviation experts, it is rather strange that they don't acknowledge the industry has migrated away from having a singular operating system that can't die to having a series of redundant fail-safes to fall back on when it does. It's strange to see the places where the microkernel debate still rages on... and how little investment is being made by those complaining multibillion-dollar international corporations in projects like Fuchsia, RTOSes such as Zephyr, GNU Hurd, MIT Mach (or even Darwin), or even Minix!
I think these arguments are disingenuous: while they are valid, the various organizations making them seem to aggressively not want to find solutions. I smell a strong desire to hold the vanguard of what they have built until they retire and can be unconcerned with compliance... understandable to a degree, but harmful in the long run to be going so fast in the wrong direction.
Maybe Linux isn't a good fit, and that's fine, but they clearly don't care about that. They just don't want to implement anything, and Linux is a convenient scapegoat for not having to contribute back to an open source project, even one under a BSD license.
Honestly one of the craziest things is Linus saying security vulnerabilities are just normal bugs, don't deserve special treatment or to be fixed with priority, nor should they even be announced.
That's a terrible security approach. The article makes good points.
> Honestly one of the craziest things is Linus saying security vulnerabilities are just normal bugs, don't deserve special treatment or to be fixed with priority, nor should they even be announced.
For instance, this quote seems particularly relevant to this article:
"
[...]
In fact, all the boring normal bugs are _way_ more important, just because
there's a lot more of them. I don't think some spectacular security hole
should be glorified or cared about as being any more "special" than a
random spectacular crash due to bad locking.
[...]
To me, security is important. But it's no less important than everything
*else* that is also important!
Linus
"
Software that doesn't function at all is 100% secure - and has other consequences too.
So which is more important: that the software functions, or that it is free of all security issues?
If there were a condition that made your UI a different shade of grey in your web browser, you wouldn't buy a domain, create a catchy name, generate a logo, and scream from the rooftops.
He is just saying: there are important security bugs, there are important functional bugs- they are equally important.
Insignificant security bugs are not more important than deadlocks, which, if you were depending on the kernel for a life-critical system, could kill you.
What he's saying could also be worded as all bugs are also security bugs, and there is really nothing about them that makes them special. Yes fix them, like any other bug.
You could also say it's simply a separate domain to care about security above all else. That is true in all things. If safety was your top priority when designing a saw, you would not design a saw at all and the world would not have the use of saws. So, for the saw designer, the top priorities mostly have to do with how well it cuts. It's someone's job to think about safety as their main whole job, but it's someone else.
I don't see anything so crazy about either of these two concepts.
Anyone who uses the Linux kernel is free to fork the code and fix security vulnerabilities as quickly as they like, or pay a vendor like IBM to do so. That's the beauty of open source. It's completely unreasonable to expect Linus and the other kernel maintainers to focus on security above everything else.
That's always such a silly argument, IMO. Yes, anyone can fork anything OSS, it doesn't mean anyone is going to, or it's going to make sense to do it.
And sure, no one is forcing the maintainers to adopt a different view, but we can judge them for having a bad or incorrect view, and point out issues that arise from that.
And maintainers are not being asked to focus on security above all else, just to treat security vulnerabilities like vulnerabilities instead of just bugs.
There's nothing silly about the argument, you're just being unreasonable. Many organizations already fork the code. There is nothing bad or incorrect about the maintainers view, nor are they responsible for issues arising from that view. Use different software if security is critical to you.
No, I'm not being unreasonable. And yes, it is a silly argument. It's not a practical solution. It's akin to telling someone without a license who lives 10 miles from their place of work to walk if they are not happy with the bus service.
The maintainers' view is absolutely incorrect and flawed; ultimately it doesn't matter, because distributions pick up their slack.
A lot of it is open (and friendly) criticism, but “With apologies to LOTR - one does not simply walk into aerospace using Linux” got me to laugh out loud.