I've spent a lot of time cleaning up the messes created by Experts like the article's author and I'm simply not convinced. It's easy to say "product market fit is King!!!" when you aren't the one working long nights and weekends reverse engineering a codebase after a bug in pmf-hunting payment code made all purchases free for a day.
There is a difference between "table-stakes" complexity and premature complexity. I'd argue that a simple but sane CI/deployment pipeline takes relatively little work to set up and falls under table stakes, in that even a pre-pmf team will see a positive ROI from it.
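For concreteness, here is roughly what I mean by "simple but sane": a single gate-on-green script, nothing more. This is only an illustrative sketch; the command names (pytest, docker, ./deploy.sh) are placeholders for whatever the project actually uses.

    #!/usr/bin/env python3
    """Minimal gate-on-green pipeline: test, build, deploy, stop on first failure."""
    import subprocess
    import sys

    # Placeholder commands; swap in whatever the project actually runs.
    STEPS = [
        ["pytest", "-q"],                              # run the test suite
        ["docker", "build", "-t", "app:latest", "."],  # build a deployable artifact
        ["./deploy.sh", "app:latest"],                 # ship it to the one environment
    ]

    def main() -> int:
        for cmd in STEPS:
            print("-->", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("step failed, nothing ships:", cmd[0])
                return 1
        print("all steps passed; deployed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

That's an afternoon of work, and it's the kind of thing that would have caught the "all purchases free for a day" bug before it shipped.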
On the flip side, I have also been the one working long nights and weekends reverse engineering code written by engineers who prematurely built complexity into the system because they wanted a GraphQL API in addition to a REST API, all in the pre-pmf days and with no value added to the features that ultimately DID find pmf.
I do generally believe that cleaning up after the pmf-hunting phase is itself a privilege that many startups never get to experience, and it should be treated as such. I understood the author as arguing that we shouldn't chase shiny things and should ruthlessly avoid complexity in favor of finding pmf. This philosophy is clearly illustrated in the devtools startup he is running, and I thought there were some cool ideas there.
I simply reject the premise that all problems a startup needs to solve are original problems. Your customers have lots of ordinary problems too, as do you. Sure, you can't justify spending months on building custom GraphQL infrastructure or the perfect CI/CD deployment system, but your customers do care about things like "when I download and install this software it's not a corrupt build", "the software's updater works", and "when I pay these people money for their software I get what I paid for and don't get double-billed". These are all unoriginal problems that are nontrivial: ideally your startup solves them with off-the-shelf solutions to save time, but you still spend engineer hours integrating those solutions.
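To make the double-billing example concrete: that one is usually solved with an idempotency key rather than custom payment infrastructure. A hypothetical sketch (the provider call is a stub; real providers such as Stripe accept an idempotency key on their charge APIs):

    import uuid

    # Stand-in for a database table mapping idempotency keys to charge ids.
    _processed: dict[str, str] = {}

    def provider_charge(customer_id: str, amount_cents: int) -> str:
        """Stub for the payment provider's charge call."""
        return "ch_" + uuid.uuid4().hex[:12]

    def charge_once(customer_id: str, amount_cents: int, idempotency_key: str) -> str:
        """Charge at most once per key; retries return the original charge."""
        if idempotency_key in _processed:
            return _processed[idempotency_key]
        charge_id = provider_charge(customer_id, amount_cents)
        _processed[idempotency_key] = charge_id
        return charge_id

    # A retried request (double click, timeout, re-run job) does not bill twice.
    assert charge_once("cus_42", 4999, "order-1234") == charge_once("cus_42", 4999, "order-1234")

None of that is original, but someone still has to spend the hours wiring it up and making sure it actually holds under retries.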
It's interesting to see how most of the comments here are either "no, this is how lots of startups end up with a mess they have to clean up" or "yes, lots of startups optimize this stuff too early".
And I think that's because both things are true! This is one of the many hard parts about starting a brand new company: figuring out the right balance to strike on this. It's no surprise that companies mostly get it wrong, and in both directions.
There's no single right answer here. It depends on exactly what the company does, the exact path to product market fit, what growth looks like afterwards, and how lucky the guesses about all that stuff were.
This is basically what I was trying to say as well. The trade-off I'm talking about is not "off the shelf and boring" vs. "bespoke and exciting"; it's more nuanced than that. The decision companies have to make is more like "what is worth our time and investment and what isn't?" That "time and investment" trade-off may involve custom code vs. off-the-shelf solutions, or a managed Postgres that doesn't offer the configuration you need vs. self-hosting and maintaining that configuration yourself, or any number of other build-vs-buy and expedience-vs-longevity decisions.
My point is just that it's very case specific, and you can easily guess wrong in either direction.
Are there companies that die because they produced something people loved and wanted to buy but just couldn't deliver because off-the-shelf components were too subpar? I haven't seen that case. I have seen companies die trying to perfect software no one is buying though, quite often.
Of course companies have died because of quality issues. But framing it as "they produce something people love but the quality is subpar" is a false premise.
There are no subpar products that people love. First of all, "quality" is relative. Look at ChatGPT: it's wrong half the time, and if someone were to release a chatbot of ChatGPT's quality a decade from now, we would say it's terrible. But today, it's the best we have.
The classic story is how Airbnb and Stripe launched without coding anything; everything was done manually.
Now launch an Airbnb competitor today using the same strategy. Obviously comparing yourself to Airbnb is dumb, because back then all software was terrible.
The actually successful modern companies of the past few years are OpenAI, TikTok, and Figma. They all launched with complete products and are massively successful. That's what it takes today.
> I have seen companies die trying to perfect software no one is buying though, quite often.
Quality is absolute.
There were multiple LLMs released before ChatGPT, and none made any splash, including GPT-3. Meta released one about two weeks before ChatGPT and had to shut it down for bad output quality. Only GPT-3.5 hit the 'wow factor' needed for it to go fully viral.
Until a product hits a minimum quality threshold, it's useless. Which is basically what you stated later. Most areas are now mature enough that a copy-pasted solution hits that minimum threshold (e.g., setting up a basic e-commerce site with ordering and payments). But for uncharted territory, that threshold is very real: hit it or die.
I think almost certainly, yes? But I think it would require a whole research project to gather persuasive data on this, in both directions.
But my intuition is that failing to find product market fit (for whatever reason) kills companies earlier, whereas hitting product market fit with a subpar engineering foundation is more likely to slow companies down in later stages, where they die more slowly or just underachieve.
I think Twitter is probably the best known example of that second pattern. It may be apocryphal that this was their problem, but either way, I think this is a real phenomenon in general.
Stuff like CI lets you ship safely.