
No single person, sure, but at every level there are people who do understand what's going on at that level. Abstractions help, but behind each abstraction there are people who know how it works.

What I'm getting at is: what if we get to the point where, at some levels of the stack, there are literally no people who understand that level? What if your company's prod infrastructure is miles and miles of AI-written and AI-maintained shell scripts, or config files for tools that haven't been worked on in decades because AI-generated boilerplate is cheaper than adding new features? To the extent that even if you hire the right people, they can't reverse-engineer the stuff, and even if they could, the cost of making changes to it without AI is prohibitive, because it's all so low-level? It's a weird thing to contemplate, but it seems like one of the more plausible scenarios in which society becomes as dependent on AI as we are today on electricity.



Ultimately, the goal of all these systems is not really within the systems themselves, but in the applications.

And one of the reasons we have such an excess of configuration - which includes general-purpose programmability itself - is that we're trying to late-bind the entire application.
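To make "late-binding through configuration" concrete, here's a minimal Python sketch (the file name, config keys, and handlers are all invented for illustration): the program's behavior isn't fixed at build time, it's bound only when the config is read, which is why the config effectively becomes part of the application.

    # late_binding.py - hypothetical sketch: the output format is not
    # chosen until runtime, when the config file is read.
    import json

    HANDLERS = {
        "csv": lambda rows: ",".join(rows),
        "tsv": lambda rows: "\t".join(rows),
    }

    def render(config_path, rows):
        with open(config_path) as f:
            cfg = json.load(f)             # the decision arrives here
        handler = HANDLERS[cfg["format"]]  # bound at runtime, not build time
        return handler(rows)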

So the question of debugging is more one of "yes, we're going to replace a panoply of hand-written abstractions with an LLM-generated solution" than it is about the LLM operating as an unknowable black box. What you're getting is a way to retarget the domain and precision of the solution:

1. Act as an oracle that answers questions directly. This works fine if your goal is to restate the form of a simple dataset, but the black-box effect becomes too large very quickly.

2. Assisted code generation. This can write algorithmic boilerplate for you, but it still relies on the framing of an existing protocol - "in language X, do Y". We've gotten past our amazement at the oracle but are still mostly at this step, where it still looks like a human organization is needed to write a software stack.

3. Assisted data model and protocol design. LLMs are protocol geniuses - if you describe a new language and its rules, and give them some examples and logical feedback, they will happily go up and down the abstraction stack as necessary to restate things in that language. This lets you bound the kinds of solutions the LLM can generate later by the logic of the defined interface, and it aids the human by keeping every solution legible (see the sketch after this list).
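As a toy illustration of "bounding solutions by the defined interface" (every name here is invented): once the protocol is pinned down as a typed interface, any generated implementation has to fit it, and conformance can be checked mechanically instead of by reading the generated internals.

    # protocol_sketch.py - hypothetical example of a narrow, legible
    # interface that any generated implementation must satisfy.
    from typing import Protocol

    class KVStore(Protocol):
        def put(self, key: str, value: bytes) -> None: ...
        def get(self, key: str) -> bytes | None: ...

    def conforms(impl: KVStore) -> bool:
        # A mechanical check at the protocol boundary: round-trip a value.
        impl.put("k", b"v")
        return impl.get("k") == b"v"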

Right now we tend to avoid original protocol design because it's human-expensive, so we end up with inertia toward standardized designs - but the LLM does not care. You could tell it to regenerate all the necessary dependencies for your application, and if the system fails "somewhere in the middle" you don't have to debug what it did in more than a cursory sense: you just regenerate the whole system with added precision and legibility until you've drilled down to the specifics. It can even generate documentation!
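In outline, that "regenerate instead of debug" loop might look something like this - a speculative sketch, where generate() stands in for an LLM call and test() for whatever external checks you trust:

    # regenerate_loop.py - hypothetical sketch of debugging by
    # regeneration: tighten the spec, don't patch the black box.
    def regenerate_until_passing(spec, constraints, generate, test):
        while True:
            system = generate(spec, constraints)  # e.g. an LLM call
            failures = test(system)               # legible, external checks
            if not failures:
                return system
            constraints.extend(failures)          # added precision each round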

That is, in the future, protocol-level friction can be trivially overcome, which means the "early binding effect" of hardware and software platforms will evaporate. You just have the data. The data could be stored in an illegible form - but that's a general human-language issue, and not really about the incidental complexities of human-written software.


Thank you for the very thoughtful response. I'll be pondering your points for the rest of the day!




