Hacker News | past | comments | ask | show | jobs | submit | saint-evan's comments

I really <i>REALLY</i> enjoyed this article and the direction it took me in. I went in with zero preconceptions, just read it straight through, and only after opening the comments did I realize it was largely AI-assisted. Even then, I was very pleasantly surprised. The piece takes you by the hand and leads you through a very deliberate, directed journey. Sure, there are moments where things wobble a bit; some explanations around specific failures get a little tangled, even contradictory. But none of that registered as “this must be AI.” I’m only noticing those things now, in hindsight, like: oh, that’s what that was.

The images hit that sweet spot too: few and far between, supporting the plot without getting in the way, visually clarifying without over-explaining. It all worked together even with minor contradictions around labelling; the inconsistencies weren't sticky enough to disrupt the plot at all.

Over the years I’ve seen the idea that “humanity only unites when faced with an alien intelligence” play out in movies, books, articles, and short stories. What gets me is how people can enjoy something like this, then immediately recoil once they figure out it was AI-assisted enough to be largely AI-generated. Does that actually diminish the substance of what they just experienced? I don’t think it does, but I'm not gonna argue such a subjective stance.

Someone in the comments suggested tagging AI-assisted work with something like an “LLM:” prefix, similar to “Show HN:”. That feels weird to me. LLMs might not be sentient, but they’re clearly capable enough that the output should stand on its own, alongside the intent and effort of whoever’s guiding it. Pre-labeling it just bakes in bias before anyone even engages with the work. It’s not that far off from asking human authors to declare their race or nationality up front. 'Cause really, if nothing about my direct experience changed, why should my judgment?

In a tech-forward space like HN, I’d expect a stronger bias toward judging things on merit alone. Just read the thing. Let it speak first. I sincerely hope this isn't gonna be an 'LLM vs Humanity' thing 'cause personally, I find the idea of a different kind of intelligence extremely interesting.


I had the exact same experience. It's probably the first time I've read something that (besides the images, which I think are pretty obvious) I didn't think was AI. And while I did feel a little tricked learning it was AI, ultimately it was actually just quite good?

I understand why people feel like they need more transparency around these things. Reading for me is intentional, and I feel cheated when I put in the effort to read something for which the author put in little. I would like to think the author put in a lot of effort for this story despite AI assistance, and so it was worth me putting effort in. But whether that's true or not I still felt like I got something out of it (hard not to as a software engineer wondering about their place in the world), and that's something.


I think I come at this from a very different angle. I grew up around books, so I default pretty hard to being reader-first. I don’t really factor in the author’s effort when I decide if something was worth reading. It’s almost entirely about whether the work holds my attention and/or gives me something.

So the idea of feeling tricked based on how much effort went into it feels foreign to me. If I got something out of it, that's enough. Even if it took the author and a model no time at all.

The ‘feeling tricked’ part, to me, suggests a kind of adversarial framing with AI outputs that I think is curious. I’m just engaging with the text in front of me, whether it’s a story, a README, or a wall of technical writing. If it communicates clearly and has substance, I don’t think much about where it came from. I think much of this just comes down to what people think they’re engaging with when they read, the work itself or the mind behind it.

And tbh, filtering what’s worth the attention has always been on the reader. There’s plenty of human-written slop too. I tend to judge everything the same way on my way to deciding whether to keep reading or drop it.


I think your view is sentimental. For businesses, the code usually IS the value, and devs ARE human resources that produce code. It sounds cynical, but it’s basically how most orgs operate. From the company’s POV, employees function as cogs in a larger system whose purpose is to generate value, since businesses are structured to optimize outcomes, i.e. profit. If tech appears that can produce the same output more cheaply or efficiently, companies will, as we've seen so far, explore replacing people with it. Just take a look at corporate posture around LLMs. I do get the point you’re making about knowledge, domain understanding, and solving real problems, because those things clearly matter in practice. But from the company’s POV, they matter only because they help produce better code/systems, which are still the concrete artifact that embodies the business logic and operations: a symbolic model of the business itself, encoded in software. So the framing of devs as human resources that produce code, and code as the primary value, correctly describes how many businesses see the relationship. And I don't really see the equivalence between SWE-ing in a business context and sports.


> From the company’s POV, employees function as cogs in a larger system whose purpose is to generate value, since businesses are structured to optimize outcomes, i.e. profit. If tech appears that can produce the same output more cheaply or efficiently, companies will, as we've seen so far, explore replacing people with it.

Businesses wish this were the case, and many will even say it or start to believe it. But it doesn't bear out in practice.

Think about it this way: engineers are expensive, so a company is going to want as few of them as possible doing as much work as possible. Long before LLMs came along, there were many rounds of "replace expensive engineers" fads.

Visual programming was going to destroy the industry; any idiot could drag and drop a few boxes and put together software. Turns out that didn't work, and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and deal with benefits and HR functions when you can hire consultants just long enough to get the job done and end their contracts? Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad, in a country with lower wages and laxer employment law? (It's not a quality thing either; many of these engineers are unquestionably excellent.)

Or, think about what happens when software companies get acquired. It's almost unheard of for the acquiring company to lay off all of the engineering staff from the acquired company right away; if anything it's the opposite, with vesting incentives to convince engineers to stay.

If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense. You'd see companies offshore and use consultants with the company that does "good enough" as cheaply as possible. You'd see engineers from acquisitions be laid off immediately, replaced with cheaper staff as fast as possible.

There are businesses that operate like this; it happens all the time. But the most successful and profitable tech companies in the world don't do this. Why?


>If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense.

No, no... Of course the code isn't all that matters. My framing was about how organizations model, economically, the work SWEs do.

>Visual programming was going to destroy the industry; any idiot could drag and drop a few boxes and put together software. Turns out that didn't work, and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and deal with benefits and HR functions when you can hire consultants just long enough to get the job done and end their contracts? Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad, in a country with lower wages and laxer employment law? (It's not a quality thing either; many of these engineers are unquestionably excellent.)

It seems like we're agreeing along the same tangent. With this argument, you're admitting that businesses do see SWEs as cogs in a wheel and seasonally try to replace them... The seasonality of 'make the engineer replaceable' fads really does point to businesses trying to simplify what devs actually do, since most of what they measure is working code output, because it’s a tangible artifact (this is what the OP meant by 'being a working code producer at work'). Knowledge, judgment, architectural intuition, and domain understanding are harder to quantify, so they disappear from the model even though they ARE the real constraint. So for the record, I do agree with you that code isn't everything, but I maintain that SWEs are modelled on the working code they produce, even in more successful companies that invest in domain knowledge and long-term system understanding.

Metrics, performance reviews, sprint velocity, delivery timelines: all orbit around observable artifacts because those are what management systems can actually track objectively and equitably. It's a handy abstraction, just like looking only at the inputs/outputs of a logic gate instead of at the implementation and wiring. Of course, a NOT gate would get upset over being called a 'bit flipper'; that's not all that physically exists, but from our POV it doesn't exactly matter. The same applies to human labor, even if it's a leaky abstraction.


> you're admitting that businesses do see SWEs as cogs in a wheel and seasonally try to replace them...

Not quite. I agree that companies will try to do this, but every company that has tried to treat engineering staff as replaceable units of person-hours has failed.

> Metrics, performance reviews, sprint velocity, delivery timelines: all orbit around observable artifacts because those are what management systems can actually track objectively and equitably. It's a handy abstraction, just like looking only at the inputs/outputs of a logic gate instead of at the implementation and wiring.

Yes, and these metrics are, usually, worthless.

It's not that companies and managers will not try to replace engineers with AI. I'm sure they will. I'm sure many will be laid off because "AI does it cheaper now".

My point is that companies that have gone down this route in the past have failed, and AI is no different. Companies that lean strongly into AI as a workforce replacement will fail too.


lol, but you have to first 'view' something as replaceable before you try to replace it, no? So companies DO see SWEs as cogs, and try but fail to actually make them replaceable, yes?

It's not even as simple as "views as replaceable". It's pure economics. It's someone looking at a spreadsheet going "We spent a lot of money on SWE salaries, our financial results look better if we fire some of them. Is there a cheaper option?"

From that perspective, yes, some management view SWEs as replaceable. My argument is that all attempts to actually implement that have failed to date, and the most financially successful tech companies are staffed by upper management who know that removing much of the SWE staff would doom the company in the medium term.

It's a move of either desperation ("we'll go bankrupt if we don't do this"), short-sightedness ("if I cut 40% of headcount, our P&L will look better, which will mean better quarterly results, which is likely to increase the share price, which gives me a bigger performance bonus. Who cares what happens after that."), or a lack of experience managing software companies and watching this play out before.

AI, even if it lives up to the hype, is no different.


>Disclaimer: Creation of this file was AI-assisted. If you thought I was going to write out a .md file for AI myself you must be mad. AI for AI. Human for Human.

'you must be mad'. Aggressively hilarious. Love it!


This is absurd! The author raised money without a concrete execution plan, treating capital as permission to “think,” and then panicked when thinking alone didn’t magically turn into value. The post implies there was no wedge, no market insight, not even a search strategy... just the weird hypothesis “I fit the founder profile, therefore I should found something.” It frames the absence of structure as a psychological tragedy, but in reality this was a self-inflicted governance failure. The author clearly had no personal theory of value creation before taking investor money, and the money simply magnified that undefinedness. Paying yourself to think is ridiculous unless you’re running a consultancy; otherwise it’s just accountability theater... cinema, which is what this post is!


Hey, author here. I was working part-time for months, we had a live product with customers, a thesis, and an execution plan. This part was maybe lost in the post.

We raised, continued down the same path, and only then realized it didn't make sense. I was being honest about my motivations in hindsight here, realizing after the fact that the product we had at the time wasn't the idea of my life and that I really wanted to go out on my own (and took the chance that appeared).


I don't want to dump on the founder here. I appreciate their guts for saying the uncomfy part out loud and being so honest, open, and aware of their own thoughts. Major kudos in my book. But don't quit without quit criteria: a problem, customer conversations, a recurring-problem insight. Fuck the VCs that say "just do it"; it's advice for rich kids, not people who worry about finances running out. Given that this author worked at PostHog, it probably means they lived in London and got paid a decent salary but not a great one. E.g. PostHog lists their comp at 120k pounds with equity (which doesn't have liquidity yet). That's not a top-of-the-line offer in London, where Meta / Goog and others would beat it. Once you have, say, 100k pounds saved up and the mortgage covered for the next 2 years, maybe then you can think of bootstrapping without an insight. Until then, Leetcode, promotions, and better offers are your ticket out of a middle-class life. VCs won't show up with more money if and when you're hungry.


What's wrong with staying in the middle class? Furthermore, in the UK you cannot "promote" yourself into the upper class; you have to be born into it.


I meant it more as wanting to better your financial condition. By upper class I meant financially, not aristocracy.


Starting with the desire to be a founder and then searching for a problem is something I’ve only recently discovered is a common occurrence. And I find it a sort of odd reason to become a founder.

Has this always been common, or is it a characteristic of a VC hype cycle?

Why devote yourself so long to creating solutions for a problem if solving it isn’t more gratifying than the money?

But after almost a decade of working in GTM for B2B startups, I’m starting to realize I find it totally unfulfilling, and I have begun moving back into the arts world I came from. So maybe I’m just cut from a different mold. Not better, just different.


Looks like a great application of the Titans and MIRAS papers Google Research published a year ago. You seem to be facing the problem of what to learn during 'sleep'; the papers take a great stab at determining exactly that mathematically. This gives me an idea too.


>About 20 years ago, I tried to found a startup. The ideas were good, and the team was good, but the execution was awful, and while we almost raised some money, we didn’t quite get there. Our failure was my fault. And I was pretty upset. And yet? In retrospect I’m happy that it didn’t happen, because I’ve seen what it means to get an investment. The world needs investors and people with big enough dreams to need venture capital – and I’m glad that I didn’t end up being one of them.

I wish the author would explain what he meant by this. I'm hugely interested in this story and the 'why' of it.


While working on a PhD in technology and education, I thought that it might be worth creating a SaaS for people to teach whatever they want. This was back in early 2008, when such sites didn't exist. I assembled a team, and we made some progress, and even got a commitment from one funder. But I didn't really understand how to manage the team, and everyone was working very part-time on the project, and we didn't really have anything serious we could show, even after a few months. And the funder was only willing to invest if we found a second investor, which we didn't. So we ended up abandoning the project.

I think that we had some great ideas, including guiding instructors in the creation of online classes using the best proven pedagogical tools and theories. You could connect lessons to standards (if you were in a school, or wanted to be associated with one), or could do it free-form, or could use templates of various sorts.

I ended up finishing the PhD, so I can't complain too much! And as I wrote, I was probably not a good person to run a startup; I'm much happier with my life as a bootstrapped freelancer. But it was hard to realize that I had spent a year or so working on this with very little to show for it -- especially knowing that it might have thrived under a more experienced leader.


THIS IS THE MOST AMAZING THING I'VE EVER READ IN MY ENTIRE LIFE LMFAOOOO


Maybe if you'd mentioned a more complex, lower-level or niche language than TypeScript, like C, MIPS, or some exotic systems language pushing around registers, I'd believe you, with caveats. But with abstract, high-level languages like Python, TypeScript, and the like? It's highly unlikely that you put together syntax in any uniquely surprising combination. Maybe you mean you designed a clever fix to a problem within a larger codebase; that would be a context/attention issue for the LLM. But there's no way in hell you wrote a contained piece of code solving a specific problem, not tied to a larger software env, that couldn't also have been written by frontier LLMs, provided you could articulate the problem, a course of action, and the expected output/behavior. LLMs are very good at writing code in isolation; humans still have deeper intuition, and we're still extremely good at doing the plug-in, the wiring, and the big-picture planning. You overestimate what you've done with TypeScript, or misunderstand what 'LLMs are good at writing code' [in isolation] means.


This is a weird take. Software engineering problem-solving and design is not about syntax at all. Syntax can help or hinder some ways of expressing things, but the result of the design process is not clever syntax.

For example, the new shortest-path algorithm that eclipses Dijkstra's is a conceptual advance; it can be written in any Turing-complete language, and its discovery had nothing to do with inventing new syntax in any specific language.

Your comment betrays the literal/concrete understanding of coding that is a hallmark of novices. It's like saying that as long as LLMs can write any kind of musical notation, there is no way a human can be a better composer.

I have not said an LLM cannot write the same syntax or code patterns I write; I'm saying it is, for instance, poor at figuring out stuff like: How do I write types to enforce which entities, which fields, and which roles are allowed for this action at compile time? Should I use a generator, an iterator, or a recursive function for such-and-such functionality? Should this function be generic or not? How do I design my fluent query interface for the best performance? What should the folder organization be for this module so that it's intuitive to navigate and maintain? What is the best name for that function that will make it most intuitive to use? Etc.

Anyone saying such concerns have anything to do with whether I'm using Typescript vs C or Haskell does not understand software engineering.
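
To make the first of those questions concrete, here's a minimal sketch of what I mean by pushing role/action permissions into the type system so bad combinations fail at compile time. All the entities, roles, and names here are made up for illustration, not from any real codebase:

```typescript
// Roles and actions in a hypothetical app.
type Role = "admin" | "editor" | "viewer";

// The permission table lives in the type system: each role maps to
// the union of actions it may perform.
interface Allowed {
  admin: "delete" | "edit" | "read";
  editor: "edit" | "read";
  viewer: "read";
}

// Only compiles when `action` is permitted for `role`: the second
// parameter is constrained by an indexed access on the first.
function perform<R extends Role, A extends Allowed[R]>(role: R, action: A): string {
  return `${role} performed ${action}`;
}

console.log(perform("editor", "edit"));   // fine
console.log(perform("admin", "delete"));  // fine
// perform("viewer", "delete");           // type error, caught at compile time
```

The interesting part isn't the syntax; it's the design decision that the permission table belongs in the type system at all, which is exactly the kind of call I find LLMs bad at making unprompted.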


