Hacker News

> It used to be the case that any program targeting Unix had to either spend a lot of time and energy tracking the precise differences between dozens of different commercial Unices, or use autoconf. Autoconf was the project that combined all of that lore into a single place, so that most people didn’t have to know every single detail.

As a data point, the place I worked for in the mid-90s had a single codebase with something over 600 permutations of supported OS and compiler once you counted the different versions. One thing we take for granted now is how easy it is to get and install updates. Back then you might find that, say, a common API was simply broken on one particular operating system version, but your customers would have to wait for a patch to be released (sometimes purchased), put onto floppy disks or CD-ROM, and then manually installed, with enough risk involved that people often put it off as long as they could. Some vendors also shipped individual patches which could be combined by the sysadmin, so you had to test for the specific feature rather than just asking “is it greater than 1.2.3?”, and it wasn’t uncommon to find that they’d compiled a common library with some weird patches, so you had to test whether the specific features you needed actually functioned.
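That “test the feature, not the version” idea is exactly what autoconf-style checks automate. As a minimal sketch (the `grep -E` check here is an illustrative stand-in, not one of the actual checks from that era), you probe the installed tool by running it rather than parsing its version string:

```shell
#!/bin/sh
# Feature-test rather than version-test: instead of asking
# "is grep newer than X.Y?", try the flag we actually need
# and branch on whether it works.
if printf 'abc\n' | grep -E 'a(b)c' >/dev/null 2>&1; then
  echo "grep supports -E"
else
  echo "grep lacks -E"
fi
```

Autoconf does the same thing at a larger scale, typically by compiling and running tiny test programs, which is why it could cope with a vendor library that reported one version but behaved like another.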

Part of why Linux annihilated them was cost, but much of it was having package managers designed by grownups. I remember, as late as the mid-2000s, bricking brand-new Sun servers by running the new Solaris updater, which left them in an unbootable state.
