
That's in conflict with the philosophy behind the internet. If you just dropped anything containing a part you don't understand, you'd lose a lot of flexibility. Keep in mind that some parts of the internet are running on 20-year-old hardware, while other parts might work much better if some protocol were modified a little. Just as with web browsers, if every implementation is a little flexible in what it accepts, you both improve the smoothness of the experience and create room for growth and innovation.


Postel's Law is important, but it creates brittle systems. You can push them further from the ideal operating state before they fail, but when they do fail, they tend to fail suddenly and catastrophically. I like to call it the "Hardness Principle", as opposed to the "Robustness Principle", by analogy to metallurgy.


Surely the opposite? If everything were very pedantic and strict, the 'net would be so brittle as to be non-functional.

You're imagining a world where things get specified and implemented completely correctly. Which does not exist and probably can't!


That's what Postel thought. He was wrong. Allowing everything creates a brittle system because the system has to accept all the undocumented behaviour that other broken systems emit. If broken files were rejected quickly, nobody would generate them.

There's a difference between unknown extensions following a known format, and data that's simply broken (e.g. an offset pointer past the end of the data).
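As a rough sketch of that distinction (the TLV format and type registry here are invented for illustration): a parser can skip unknown-but-well-formed record types for forward compatibility, while still rejecting structurally broken data such as a length field that points past the end of the buffer.

    # Hypothetical type-length-value parser: tolerate unknown extensions,
    # reject structural corruption.
    KNOWN_TYPES = {0x01: "name", 0x02: "timestamp"}  # invented registry

    def parse_tlv(data: bytes) -> list[tuple[int, bytes]]:
        records = []
        pos = 0
        while pos < len(data):
            if pos + 2 > len(data):
                raise ValueError("truncated record header")   # broken: reject
            rtype, length = data[pos], data[pos + 1]
            pos += 2
            if pos + length > len(data):
                raise ValueError("length past end of data")   # broken: reject
            value = data[pos:pos + length]
            pos += length
            if rtype in KNOWN_TYPES:          # unknown type: skip, don't fail
                records.append((rtype, value))
        return records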


You're not accounting for incorrectly rejected files and protocols, or for incomplete protocol specifications.

And generally I think critics of Postel lack the context in which his decisions were made. You and probably others would actually have made similar decisions to Postel's on many particular issues.


I disagree that I'd make similar decisions. Postel's Law is a big part of the reason Bleichenbacher attacks (adaptive chosen-ciphertext attacks) [1] stayed so common for so long. As an engineer responsible for security, I absolutely reject malformed inputs.

[1] https://en.wikipedia.org/wiki/Adaptive_chosen-ciphertext_att...
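To make that concrete, here is a toy sketch of the oracle (the function names and error strings are invented; the padding layout follows PKCS#1 v1.5: 0x00 0x02, at least eight nonzero padding bytes, a 0x00 separator, then the message). Distinguishable rejection paths tell an attacker which check failed; uniform rejection does not. A real implementation would also need constant-time behaviour.

    def unpad_leaky(pt: bytes) -> bytes:
        # BAD: each failure mode raises a different error, so an attacker
        # submitting chosen ciphertexts learns which check failed.
        if pt[:2] != b"\x00\x02":
            raise ValueError("bad header")
        sep = pt.find(b"\x00", 2)
        if sep == -1:
            raise ValueError("no separator")
        if sep < 10:
            raise ValueError("padding too short")
        return pt[sep + 1:]

    def unpad_strict(pt: bytes) -> bytes:
        # Better: one uniform error for every malformed input.
        if pt[:2] == b"\x00\x02":
            sep = pt.find(b"\x00", 2)
            if sep >= 10:
                return pt[sep + 1:]
        raise ValueError("decryption error")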


But that's what I'm saying; Postel may well have ALSO rejected malformed inputs in this particular case.


There is a place for both. The accept-everything model made some extensions better, but it also let various malware through when junk was accepted.


Postel's law doesn't mean "accept everything", but that you should accept de-facto rules people have created. If everyone says, "this is how we do it", you should ignore the RFC and just copy what others do.
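A small sketch of what that looks like in practice: RFC 9112 specifies CRLF line endings for HTTP/1.1 header fields, but so many real clients send bare LF that the spec itself permits recipients to accept it. A parser following the de-facto rule might look like this (the function name is invented for the example):

    def split_header_lines(raw: bytes) -> list[bytes]:
        # Accept both the spec-mandated CRLF and the de-facto bare LF
        # by normalising before splitting.
        return raw.replace(b"\r\n", b"\n").split(b"\n")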


There are several problems with that.

One, if everyone is doing something different from the spec, it is hard to figure out what they are really doing and what they mean. If everyone follows the spec instead, you have long-term confidence that things will continue to work even when someone else writes their own implementation, which might otherwise have deviated from the spec as well.

Two, it is easier to modify the spec as more features are dreamed up if you have confidence that the spec is authoritative, meaning nobody has already used that field for something different (which you may not have heard about yet).

Three, if you agree to a spec, you can audit it (think security); if nobody even knows what the spec is, that is much harder.

Following the spec is harder in the early days: you have to put more effort into the spec because you can't just discover a problem and patch around it in code. However, the internet is far past those days. We need a spec that is the rule and that everyone follows exactly.


This is so wrong; read up on RFC 9413, "Maintaining Robust Protocols": https://datatracker.ietf.org/doc/html/rfc9413

The internet is ossified because middleboxes stick their noses where they shouldn't. If they just routed IP packets, we could have had nice things like SCTP...



