sph's comments

Day 1: we’ll adopt a simple markup language because our users are not programmers

Day 2: our users have complicated needs so we’ll basically reinvent Lisp expressions, but worse.

Day N: whatever this markup language is

——

I’ve seen this happen so many times it’s not even funny anymore. Well, at least it’s not YAML.
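
To make the Day 2 slide concrete, here is a toy sketch (entirely my own illustration, with made-up keys and operators): the "simple markup" grows variables and conditionals until the config needs its own evaluator, i.e. an ad hoc, slower, worse Lisp.

  # Day 1: flat key/value markup, friendly to non-programmers.
  day1 = {"retries": 3, "region": "eu-west-1"}

  # Day 2: users have complicated needs, so values become nested expressions.
  day2 = {"retries": ["if", ["eq", ["var", "region"], "eu-west-1"], 5, 3]}

  def evaluate(expr, env):
      """Evaluate the ad hoc expression language: congratulations, it's Lisp, but worse."""
      if not isinstance(expr, list):
          return expr
      op, *args = expr
      if op == "var":
          return env[args[0]]
      if op == "eq":
          return evaluate(args[0], env) == evaluate(args[1], env)
      if op == "if":
          cond, then_branch, else_branch = args
          return evaluate(then_branch if evaluate(cond, env) else else_branch, env)
      raise ValueError(f"unknown op: {op}")

  print(evaluate(day2["retries"], {"region": "eu-west-1"}))  # 5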


> so we’ll basically reinvent Lisp expressions, but worse

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


The more features they add, the less likely a competitor can arise without investing a billion man-hours.

He’s the right person at the right time: a prolific HN celebrity who has been spamming this site with LLM updates day in, day out, playing the optimist, the skeptic and every shade in between, 10 times a week, during the peak of the hype.

His efforts might single-handedly be worth a couple of percentage points off the valuations of AI companies. That’s, what, a dozen billion dollars these days? For his sake, at least, I hope he gets the fat check before it all goes up in flames.


Born too late to be a Stasi bureaucrat, born right on time to be a Reddit mod.

Also, for the frontend devs out there:

https://placecats.com/

https://placekittens.com/


By crash do you mean a return to the prices of early January 2026?

Never had any issues using the Godots (sic) version manager from Flathub, or custom-built versions from git. Something’s wrong on your end.

Yeah, passing argument "--display-driver wayland" fixed the issue for me.

Man, let me tell you about virtual machines, it’s gonna blow your mind.

Call me old-fashioned, but I like my tangible approach.

You also get to run both systems on bare metal. Nothing wrong with this.

How old are you? At 39 (20 years of professional experience) I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens, which have been replaced by nonsense like Kubernetes and aligning content with CSS Grid.

And I must admit my appetite for learning new technologies has lessened dramatically in the past decade; to be fair, it gets to the point where most new ideas are just rehashes of older ones. When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.


> I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens

I'm a bit younger (33) but you'd be surprised how fast it comes back. I hadn't touched x86 assembly for probably 10 years at one point. Then someone asked a question in a modding community for an ancient game and after spending a few hours it mostly came back to me.

I'm sure that if you had to reverse engineer some Win32 applications, it'd come back quickly.


SoftICE gang represent :-)

That's a skill unto itself, and I mean the general stuff doesn't fade, or at least it comes back quickly. But there's a lot at the tail end that's just difficult to recall because it's obscure.

How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.


I want to second this. I'm 38 and I used to do some debugging and reverse engineering during my university days (2006-2011). Since then I've mainly avoided looking at assembly, as I mostly work in C++ systems or HLSL.

These last few months, however, I've had to spend a lot of time debugging via disassembly for my work. It felt really slow at first, but then it came back to me and now it's really natural again.


You can’t keep infinite knowledge in your brain. You forget skills you don’t use. Barring some pathology, if you’re doing something every day you won’t forget it.

If you’ve forgotten your Win32 reverse engineering skills I’m guessing you haven’t done much of that in a long time.

That said, it’s hard to truly forget something once you’ve learned it. If you had to start doing it again today, you’d learn it much faster this time than the first.


> You can’t keep infinite knowledge in your brain.

For what it’s worth—it’s not entirely clear that this is true: https://en.wikipedia.org/wiki/Hyperthymesia

The human brain seemingly has the capability to remember (virtually?) infinite amounts of information. It’s just that most of us… don’t.


You can't store an infinite amount of entropy in a finite amount of space outside of a singularity; or at least, attempting to do so will create a singularity.

Compression and algorithms don't save you here either. The algorithm for pi is very short, but pulling up any particular random digit of pi still requires the expenditure of some particular amount of entropy.
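
As a minimal illustration of that point (my own sketch, not a claim about any particular compression scheme): the "program" for pi fits in a few lines, but squeezing digits out of it still costs real work.

  # The Leibniz series pi/4 = 1 - 1/3 + 1/5 - ... is a tiny description of pi,
  # but it converges so slowly that each extra decimal digit costs roughly
  # ten times more terms to compute.
  def leibniz_pi(terms: int) -> float:
      total = 0.0
      for k in range(terms):
          total += (-1) ** k / (2 * k + 1)
      return 4 * total

  print(leibniz_pi(1_000_000))  # ~3.14159..., and more precision needs many more terms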


It's entirely possible for this to be literally false, but practically true.

The important question is: can you learn enough in a standard human lifetime to "fill up your knowledge bank"?


> It’s just that most of us… don’t.

Ok, so my statement is essentially correct.

Most of us cannot keep infinite information in our brains.


It's not that you forget, it's more that it gets archived.

If you moved back to a country where you hadn't lived, or spoken the language, for 10 years, you would find that you don't have to relearn it; it would come back quickly.

Also, capacity is supposedly almost infinite: you encode things more efficiently as you learn, which makes volume limits largely redundant.


I do take your point. But the point I’m trying to emphasize is that the brain isn’t like a hard drive that fills up. It’s a muscle that can potentially hold more.

I’m not sure if this is in the Wikipedia article, but when I last read about this, years ago, there seemed to be a link between hyperthymesia and OCD. Brain scans suggested the key was in how these individuals organize the information in their brains, so that it’s easy for them to retrieve.

Before the printing press was common, scholars routinely memorized entire books. I absolutely cannot do this. When technology made memorization less necessary, our memories shrank. They actually shrank; it was not merely a change in which facts we focus on.

And to be clear, I would never advocate going back to the middle ages! But we did lose something.


There must be some physical limit to our cognitive capacity.

We can “store” infinitely many numbers by using our numeral system as a generator of sorts for whatever the next number must be, without actually having to remember infinitely many numbers; but I do not believe it would be physically possible to literally remember every item in some infinite set.

Sure, maybe we’ve gotten lazy about memorizing things and our true capacity is higher (maybe very much so), but there is still some limit.

Additionally, the practical limit will be very different for different people. Our brains are not all the same.


I agree, it must not be literally infinite, I shouldn’t have said that. But it may be effectively infinite. My strong suspicion is that most of us are nowhere close to whatever the limit is.

Think about how we talk about exercise. Yes, there probably is a theoretical limit to how fast any human could run, and maybe Olympic athletes are close to that, but most of us aren’t. Also, if you want your arms to get stronger, it isn’t bad to also exercise your legs; your leg muscles don’t somehow pull strength away from your arm muscles.


> your leg muscles don’t somehow pull strength away from your arm muscles.

No, but the limiting factor is the amount of stored energy available in your body. You could exhaust your energy stores using only your legs and be left barely able to use your arms (or anything else).

If we’ve offloaded our memory capacity to external means of rapid recall (e.g. the internet), then what have we gained in return? Breadth of knowledge? Increased reasoning abilities? More energy for other kinds of mental work? Because there’s no cheating thermodynamics; even thinking uses energy. Or are we simply radiating away that unused energy as heat and wasting that potential?


It is also a matter of choice. I don’t remember any news trivia, I don’t engage with "people news" and, to be honest, I forget a lot of what people tell me about random subjects.

It has two huge benefits: nearly infinite memory for truly interesting stuff, and still looking friendly to people who tell me the same stuff all the time.

Side effect: my wife is not always happy that I forget "non-interesting" stuff that is still important ;-)


1) That's not infinite, just vast

2) Hyperthymesia is about remembering specific events in your past, not about retaining conceptual knowledge.


https://www.youtube.com/watch?v=8kUQWuK1L4w

APL's inventor says that he was developing not a programming language but a notation to express as many problems as possible. He found that as he expressed more and more problems in the notation, the notation first grew, then its size started to shrink.

To develop conceptual knowledge (the point at which one's "notation" starts to shrink), one has to have a good memory (for re-expressing more and more problems).


The point is that this particular type of exceptional memory has nothing to do with conceptual knowledge, it's all about experiences. This particular condition also makes you focus on your own past to an excessive amount, which would distract you from learning new technologies.

You can't model systems in your mind using past experiences, at least not reliably and repeatedly.


You can model systems in your mind using past experience with different systems, reliably and repeatably.

  > When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.
Teach yourself relational algebra. It will invariably lead you to optimization problems, and those will just as invariably lead you to equality saturation, which is most effectively implemented with... a generalized join from relational algebra!

Also, relational algebra implements content-addressable storage (CAS), which is essential for the data-flow computing paradigm. Thus, you will get a window into CPU design.
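
If it helps make "generalized join" concrete, here is a minimal sketch (my own toy code, not the commenter's): a natural join over relations represented as lists of dicts. To the best of my understanding, equality-saturation engines such as egglog treat e-matching as a conjunctive query, which is answered with multi-way joins of roughly this shape.

  from typing import Dict, List

  Row = Dict[str, object]

  def natural_join(r: List[Row], s: List[Row]) -> List[Row]:
      """Join two relations on every attribute name they share."""
      if not r or not s:
          return []
      shared = set(r[0]) & set(s[0])
      return [
          {**a, **b}
          for a in r
          for b in s
          if all(a[k] == b[k] for k in shared)
      ]

  parents = [{"parent": "alice", "child": "bob"}]
  ages = [{"child": "bob", "age": 7}]
  print(natural_join(parents, ages))  # [{'parent': 'alice', 'child': 'bob', 'age': 7}]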

At 54 (36 years of professional experience) I find these rondos fascinating.


> I must admit my appetite for learning new technologies has lessened dramatically in the past decade;

I felt like that for a while, but I seem to be finding new challenges again. Lately I've been deep-diving on data pipelines and embedded systems. Sometimes I find problems that are easy enough to solve by brute force, but elegant solutions are not obvious at all. It's a lot of fun.

It could be that you're way ahead of me and I'll wind up feeling like that again.


> I see a lot of my fellow developers burying their heads in the sand, refusing to acknowledge the truth in front of their eyes, and it breaks my heart because a lot of us are scared, confused, or uncertain, and not enough of us are talking honestly about it.

Imagine if we had to suffer these posts, day in and day out, when React or Kubernetes or any other piece of technology got released. This kind of proselytizing is the very reason there is tribalism around AI.

I don't want to use it, just like I don't want to use many technologies that got released, while I have adopted others. Can we please move on, or do we have to suffer this kind of moaning until everybody has converted to the new religion?

Never in my 20 years in this career have I seen such maniacal obsession as over the past few years: the never-ending hype that has transformed this forum into a place I do not recognise, and this career into one I don't recognise, where people you used to respect [1] have gone into a psychosis and dream of ferrets, and where, if you dare to be skeptical about any of it, you are bombarded with "I used to dislike AI, but now I have seen the light, and if you haven't, I'm sorry for you. Please reconsider." stories like this one.

Jesus, live and let live. Stop trying to make AI a religion. It's posts like this one that create the sort of tribalism they rail against, turning it into a battle between the "enlightened few" and the silly Luddites.

1: https://news.ycombinator.com/item?id=46744397


The author of that post, Nolan, is a pretty interesting guy and deep in the web tech stack. He’s really one of the last people I’d call "tribal", especially since you mention React. This guy hand-writes his web components, files bug reports to browsers, writes his own memory-leak detection lib, and so on.

If such a guy is slowly dipping his toes into AI and comes to the conclusion he just posted, you should take a step back and consider your position.


I really don't care what authority he's arguing from. The "just try it" pitch here is fundamentally a tribalist argument: tribes don't want another tribe to exist that's viewed as threatening to them.

Trying a new technology seems like what engineers do (since they have to leverage technology to solve real problems, having more tools to choose from can be good). I'm surprised it rings as tribalist.

The impression I get from this post is that anyone who doesn't like it needs to try it more. It doesn't really feel like it leaves space for "yeah, I tried it, and I still don't want to use it".

I know what its capabilities are. If I wanted to manage a set of enthusiastic junior engineers, I'd work with interns, which I love doing because they learn and get better. (And I still wouldn't want to be the manager.) AIs don't, not from your feedback anyway; they sporadically get better from a new billion dollar training run, where "better" has no particular correlation with your feedback.


I think it's going to be important to track. It's going to change things.

I agree on your specific points about what you prefer, and that's fine. But as I said 15 years ago to some recent Berkeley grads I was working with: "You have no right to your current job. Roles change."

AI will get better and be useful for some things. I think it is today. What I'm saying is that you want to be in the group that knows how to use it, and you can't get there if you have no experience.


There is of course an option here, you can just completely ignore the suggestion and all of these posts.

Honestly that's what makes this all the more dangerous. He's trying to have his cake and eat it too: accept all of the hype and all of the propaganda, but then couch it in the rhetoric of "oh I'm so concerned I can remain in a sort of moderate & empathetic position and not fall prey to tribalism and flame wars."

There's no both-sides-ing of genAI. This is an issue akin to street narcotics, mass weapons of war, or forever chemicals. You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity. The OP is not a thoughtful moderate because that's not how any of this works.


> You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity.

I don't think this has yet been established. We'll have to wait and see how it turns out. My inclination is it'll turn out like most other technological advancements - short term pain for some industries, long term efficiency and comfort gain for humans.

Despite the anti-capitalist zeitgeist, more humans of today live like kings compared to a few hundred years ago, or even 100 years ago.

But you seem to have jumped to a conclusion that everyone agrees: AI is harmful.


You of course don't have to use AI. Your core point is correct: the world around you is changing quickly, and in unpredictable ways. And that's why it's dangerous to ignore: if you've developed a way of working that fit the world of 10 years ago, there's a risk it won't play the same way in the world of 2030. So this is the time frame in which to prepare for whatever that change will be.

For some people, that means picking up the tool and trying to figure out what it's good for (if anything) and how it works.


I don't think you understand how much things are about to change in a relatively short time. A lot of people are rightfully confused and concerned.

Many people are seeing this as an existential moment requiring careful navigation and planning, not just another language or browser or text editor war.


This is exactly my position. Landscape-changing technology is impossible to get away from, because it follows you. It's like a local business owner in 1998 telling me they didn't care about the stupid "internet" thing, and then the internet blew away their business within 10 years. Similar story with the PC: folks didn't get the option to just "opt out" of a digital office because they liked typewriters and paper. Cell phones were this way also, and while many people post about how they hate their phones and need to quit using them so much, pretty much everyone admits you can't live in society without one because they have pervaded so many interactions.

So that's how I think AI will be seen in 20 years: like the PC, the internet, and mobile phones. Tech that shapes society, for better or worse.


100%. Even if models stopped advancing today, there's already enough utility there; it just needs to be constrained by traditional software. It's not going away; it's going to change our interfaces completely, change how services interface with each other and how they're designed, and change the pace at which software evolves.

This is a tipping point, and most anti-AI advocates don't understand that the other software developers who keep telling them to reevaluate their positions are often just trying to make sure no one is left behind.

