There are a few reasons DoD PKI is a shitshow which make it somewhat more understandable (although only somewhat).
First, the issues you describe affect only unclassified public-facing web services, not internal DoD services used for actual military operations. DoD has its own CA, the public keys for which are not installed on any OS by default, but anyone can find and install the certs from DISA easily enough. Meaning, the affected sites and services are almost entirely ones not used by members of the military for operational purposes. That approach works for internal DoD sites and services where you can expect people to jump through a couple extra hoops for security, but is not acceptable for the general public, who aren't going to figure out how to install custom certs on their machine to deal with untrusted cert errors in their browser. That means most DoD web infra is built around their custom PKI, which makes it inappropriate for hosting public sites. Thus anyone operating a public DoD site is in a weird position where they deviate from DoD general standards but also aren't able to follow commercial standard best practices without getting approval for an exception like the one you linked to. Bureaucratically, that can be a real nightmare to navigate, even for experienced DoD website operators, because you are way off the happy path for DoD web security standards.
Second, many DoD sites need to support mTLS for CAC (DoD-issued smartcards) authentication. That requires the site to use the aforementioned non-standard DoD CA certs to validate the client cert from the CAC, which in turn requires that the server's TLS cert be issued by a CA in the same trust chain, which means the entire site will not work for anyone who hasn't jumped through the hoops to install the DoD CA certs. Meaning, any public-facing site has to be entirely segregated from the standard DoD PKI system. For now, that means using commercial certs, which in turn requires a vendor that meets DoD supply chain security requirements.
Third, most of these sites and services run on highly customized, isolated DoD networks that are physically isolated from the internet. There's NIPR (unclassified FOUO), SIPR (classified secret), and JWICS (classified top secret). NIPR can connect to the regular internet, but does so through a limited number of isolated nodes, and SIPR/JWICS are entirely isolated from the public internet. DoD cloud services are often not able to use standard commercial products as a result of the compatibility problems this isolation causes. That puts a heavy burden on the engineers working these problems, because they can't just use whatever standard commercial solutions exist.
Fourth, the DoD has only shifted away from traditional old school on-prem Windows Server hosting for websites to cloud hosting over the past few years. That has required tons of upskilling and retraining for DoD SREs, which has not been happening consistently across the entire enterprise. It also has made it much harder to keep up with the standards in the private sector as support for on-prem has faded, while the assumptions about cloud environments built into many private sector solutions don't hold true for DoD.
Fifth, even with the move to cloud services, the working conditions can be so extraordinarily burdensome and the DoD-specific restrictions so unusual, obscure, poorly documented, and difficult to debug that they dramatically slow down all software development. For example, engineers may have to log into a jump box via a VDI to then use Jenkins to run a Groovy script to use Terraform to deploy containers to a highly customized version of AWS.
Ultimately, the sites this affects are ones which are lower priority for DoD because they are not operationally relevant, and setting up PKI that can easily service both their internal mTLS requirements and compatibility with commercial standards for public-facing sites and services is not totally straightforward. That said, it is an inexcusable shitshow. Having run CAC-authenticated websites, I can tell you it's insane how much dev time is wasted trying to deal with obscure CAC-related problems, which are extremely difficult to deal with for a variety of technical and bureaucratic reasons.
Good write up. Not to mention the other big HR constraints on DoD engineers: they almost always have to be a “US person.”
Anyone who gets a CAC working on a personal computer deals with this all too much. The root certs DoD uses are not part of the public trusted sources that commonly come installed in browsers.
lol I very nearly included a rant about that but decided it was too far off topic. Not being able to smoke weed may be more of an obstacle these days though.
> engineers may have to log into a jump box via a VDI to then use Jenkins to run a Groovy script to use Terraform to deploy containers to a highly customized version of AWS.
This hits too close to home. I'm sending you my therapist's bill for this month.
> That requires the site to use the aforementioned non-standard DoD CA certs to validate the client cert from the CAC, which in turn requires that the server's TLS cert be issued by a CA in the same trust chain, which means the entire site will not work for anyone who hasn't jumped through the hoops to install the DoD CA certs. Meaning, any public-facing site has to be entirely segregated from the standard DoD PKI system. For now, that means using commercial certs, which in turn requires a vendor that meets DoD supply chain security requirements.
Is this actually all the way technically correct? As far as I know, there is no requirement that the trust chains for server certificates and client certificates are in any way related. It seems to me that it would be perfectly possible for the DoD to use its own entirely private client certificate infrastructure but to still have the server certificate use something resembling an ordinary root certificate.
This is not to say that this would actually be all that worthwhile.
> Is this actually all the way technically correct? As far as I know, there is no requirement that the trust chains for server certificates and client certificates are in any way related. It seems to me that it would be perfectly possible for the DoD to use its own entirely private client certificate infrastructure but to still have the server certificate use something resembling an ordinary root certificate.
I think you're right that it's possible in principle for a Web server to enforce use of DoD CAC (enforcing the client cert being in the DoD PKI) without itself using a DoD PKI cert on the server side.
That said, there's little benefit to it: users who haven't jumped through hoops to install DoD root CA certs won't typically be able to get their browsers to present them to the remote server in the first place, and if we're willing to jump through those hoops, then there's no good reason for the DoD server not to have a DoD PKI cert.
I've never used one of the DoD smartcards, but I can certainly imagine the DoD wanting a user of one of these smartcards to be able to use it with a COTS client device to authenticate themselves.
Sure, people do that all the time. After they run "InstallRoot" to install DoD root certs on their COTS device, that is. I'm honestly not sure any major browser will allow you to use a client smartcard without the smartcard's certificate chaining to a root in the trust store used by the browser, so this part seems unavoidable.
FWIW I just tested it and yes you can run a web server using a commercial server cert that enforces client PKI tied to the client having a DoD PKI cert. It works just fine.
> I'm honestly not sure any major browser will allow you to use a client smartcard without having the smartcard's certificate chain to the trust store used by the browser so this part seems unavoidable.
It’s been a while, but I’ve used file-backed client certs issued by a private CA in an ordinary browser without installing anything into the trust store, and it worked fine. I don’t see why a client cert using PKCS11 or any other store would work any differently. Why would the browser want to verify a certificate chain at all?
I'm really just talking about the browser trusting the user cert itself. I've done the softcert thing myself before; I forget if it used a commercial root CA or not, but it did work.
I guess you could flag the leaf (user) cert as ultimately trusted and that should be fine, but if the browser doesn't see that trust notation, and does see an intermediate CA, it's going to try to pull that back to a trusted root.
One way or the other the user will have to fiddle with browser settings to make a CAC work, either to tell the browser to trust their cert explicitly, or to have the browser trust DoD certs.
No, unfortunately it is not correct. You can supply a different CA to verify client certs against than the one that issued the cert presented in the server hello. There's no need for them to be related at all.
Critically, you probably want to use a custom CA for client certs. The usual implementation logic in servers is "is this cert from the client signed by a CA I trust?". If that CA is, say, Let's Encrypt, then that's a lot of certificates that will pass that check.
Except for people like me who struggle to wake up before dawn. And whether people prefer light after work doesn't change the available scientific evidence which suggests there are significant negative health effects of waking up too early relative to sunrise, but no significant health benefits from having sunlight hours after work. People's preferences in this case are generally only mildly held and typically are not well informed by the science. I suspect if more people were aware of the deleterious health effects, their stated preferences would change.
I've worked on mission planning software for parachute systems and the precision we can achieve is already extremely high. Given how poorly this seems to scale, the only use case that makes any sense to me would be something like sensor drops, which are among the only payloads small enough for these chutes. Or potentially for drogues on multi-stage systems, but I'm not sure they'd even be useful there, because usually a fast descent is part of the appeal of a drogued payload, and not just to reduce the time exposed to wind drift (e.g., to reduce the time it is vulnerable to enemy fire).
Our inflammation responses evolved in part to help us fight off pathogens, but people in modern society are exposed to far, far fewer pathogens than even our immediate ancestors were as recently as 70 years ago when diseases like polio and mumps were still common. As a result many people have an overactive inflammation response relative to the pathogen load to which they are regularly exposed.
In extreme cases, that can manifest as autoimmune disease, when overly strong inflammation or other immune responses end up attacking not just foreign pathogens but the person's body itself. As another poster said, inflammation is a blunt instrument. It's a knob that can only be turned up or down, across the entire body. If you turn it down too far, you risk infectious illness. And if you turn it up too far, you risk damage to your organs.
Interestingly, there was a substantial increase in the incidence of autoimmune diseases in Europe in the generations following the Black Death, probably because people with excessively strong immune responses were more likely to survive exposure to plague bacteria. Celiac disease or MS will kill someone much, much more slowly than bubonic plague will, so a disproportionate number of people with those or similar autoimmune disorders were able to survive to pass on their genes.
The internet has been like this forever. In the 90s I was banned from hotmail for having an inappropriate email address because my last name is Cummings. No recourse for some idiotic regex filter.
I guess the only solution is to self-host. I've even been migrating my dedicated server to a homelab I'm slowly building. But that's a very time-consuming option, has a high chance of breakage, and isn't even available to 99.5% of people. And most people don't want to spend hours and hours of their private time babysitting their own email server, which is understandable. Finally, it's not free.
I wonder what would have to happen for people to become more digitally sovereign, but I doubt it'll ever happen. If anything, we're going in the other direction.
> The problem arises since computers can easily identify strings of text within a document, but interpreting words of this kind requires considerable ability to interpret a wide range of contexts, possibly across many cultures, which is an extremely difficult task.
Humans have issues with such tasks, too, apparently. Hence their push to change IT terminology, e.g. master -> main git branch.
Indeed, I got my Hotmail suspended because of something not terribly different. Thank God in those days not every account insisted on 2FA through email.
You can’t charge black people systemically different prices than white people, for example. Neither better nor worse prices. Proving causality would likely be the hard part, but a systematic survey of disparate pricing could show disparate impact. AI models frequently are invisibly biased on race because of some feature that operates as a proxy measure that the developers don’t recognize. E.g., something as simple as ZIP code combined with a name is a highly reliable predictor of race, and there are many similar, far more subtle ways bias can creep into a model.