Hacker News | Jemaclus's comments

This is a pretty wild take. "They don't know what DevOps or SRE is" is not the flex the author thinks it is. That's just ignorance of the consequences of shipping.

There's a huge load-bearing assumption in it, as well, which is that AI agent-generated code is correct and bug-free. The tight loop only works if the agent reliably produces working, correct, production-ready code. If that were the case, the loop might be intent -> build -> observe. But in reality, it's likely intent -> build -> realize the AI hallucinated an API that doesn't exist -> fix -> discover it broke something else -> fix -> realize the architecture doesn't fit a constraint you thought you had -> start over.

Not to mention that SDLC things like, I dunno, PR reviews and requirements planning aren't some arcane ceremony designed to waste everyone's time. The ceremonies themselves might be bloated, but the underlying functions serve a purpose. Generating 500 PRs isn't a flex either. Volume is not a feature. If your system produces more changes than you can verify, you don't have a review problem, you have a quality problem.

There's some truth buried in here, but the overarching post is so wildly disconnected from real software development that I'm having a hard time following along.

Greenfield prototypes? Sure, maybe there's a case for that. But the minute you hit any novel or complex system with more than a couple of engineers, this falls apart pretty fast.


> the loop might be intent -> build -> observe. But in reality, it's likely intent -> build -> realize the AI hallucinated an API that doesn't exist -> fix -> discover it broke something else -> fix -> realize the architecture doesn't fit a constraint you thought you had -> start over.

what people are betting on is that the latter reality will give way to the promised land of intent -> build -> observe. the prize for that is self-evidently enormous.


Sure, but the author is presenting it in the present tense ("I don't know anyone who writes code anymore") which isn't grounded in fact or reality, but wishful thinking about the future masquerading as current reality.

A more interesting take on this, in my mind, would be "The SDLC has shifted away from engineers and toward product managers," but that's not the author's argument either. They're arguing that it's _dead_, which is clearly not the case.


how can i structure my income to avoid income tax? asking for a friend.

A popular technique is to be compensated in options. Don't exercise the options, but take out loans against them.

This is incredibly tech-specific advice; most workers make a small fraction of $1M a year.

Physicians don't get options, attorneys don't get options. This is a fairy tale answer not grounded in reality.


So what is your answer then?

I don't need an answer to point out that your response is relevant to probably 3 or 4 people every year who:

    - live in Washington State; AND
    - are compensated at least in part in options; AND
    - are compensated in excess of $1M a year; AND
    - are compensated far enough in excess of $1M a year that they are willing to spend time and money lowering that tax liability
But the answer is "you can't, at least not legally" for everyone except those few people.

Legend of the Red Dragon (LoRD), Solar Realms Elite and Barren Realms Elite, and Tradewars were the best.

When I want to learn a new programming language, I always try to recreate Tradewars in it. I know Tradewars like the back of my hand, so it lets me focus on the nuances of the language while I build. Such a fun project. The only things I never quite figured out were the economics mechanics (mine technically work, but they're a bit more predictable than TW2002's are in practice) and the Big Bang algorithm (I came up with my own, and it's fine, but it doesn't have quite the same feel to it).

Less often, I'll try to recreate SRE/BRE, which, again, is very fun but hard to reverse engineer. Amit (the creator) lost the source code years ago, but wrote up some notes here: http://www-cs-students.stanford.edu/~amitp/Articles/SRE-Desi...

Funny, I just googled SRE/BRE to find the notes, and my last comment about it on HN was one of the top Google results... It's truly a lost art!


I loved BRE and LoRD so much. I would dial into like five BBSes to play my turns every day.

My first close friend group in high school actually came from BBS meetups more than from school buds. When we found a crossover, it was really weird!


Same!


I think about fidonet all the time. It was a magical concept.


Not affiliated, just a fan, but if this is a topic you're interested in, I highly recommend Michael Livingston's "1066: A Guide to the Battles and Campaigns."

https://www.michaellivingston.com/non-fiction/1066-a-guide-t...


"Better than JSON" is a pretty bold claim, and even though the article makes some great cases, the author is making some trade-offs that I wouldn't make, based on my 20+ year career and experience. The author makes a statement at the beginning: "I find it surprising that JSON is so omnipresent when there are far more efficient alternatives."

We might disagree on what "efficient" means. OP is focusing on computer efficiency, whereas, as you'll see, I tend to optimize for human efficiency (and, let's be clear, JSON is efficient _enough_ for 99% of computer cases).

I think the "human readable" part is a pro often overlooked by hardcore protobuf fans. One of my fundamental philosophies of engineering has historically been "clarity over cleverness." Perhaps the corollary to this is "...and simplicity over complexity." And I think protobuf, generally speaking, falls into the cleverness part, and certainly into the complexity part (with regards to dependencies).

JSON, on the other hand, is ubiquitous, human readable (clear), and simple (little-to-no dependencies).

I've found in my career that there's tremendous value in not needing to execute code to see what a payload contains. I've seen a lot of engineers (including myself, once upon a time!) take shortcuts like using bitwise values and protobufs and things like that to make things faster or to be clever or whatever. And then I've seen those same engineers, or perhaps their successors, find great difficulty in navigating years-old protobufs, when a JSON payload is immediately clear and understandable to any human, technical or not, upon a glance.

I write MUDs for fun, and one of the things older MUD codebases do is use bit flags to compress a lot of information into a tiny integer. To know what conditions a player has (hunger, thirst, cursed, etc.), you do some bit manipulation and you wind up with something like 31 that represents the player being thirsty (1), hungry (2), cursed (4), with haste (8), and with shield (16). Which is great, if you're optimizing for integer compression, but it's really bad when you want a human to look at it. You have to do a bunch of math to sort of de-compress that integer into something meaningful for humans.
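A minimal sketch of that decoding step (the flag names and values follow the example above; the helper itself is illustrative, not from any real MUD codebase):

```python
# Bit flags matching the example: 31 = thirsty + hungry + cursed + haste + shield.
THIRSTY = 1 << 0   # 1
HUNGRY  = 1 << 1   # 2
CURSED  = 1 << 2   # 4
HASTE   = 1 << 3   # 8
SHIELD  = 1 << 4   # 16

FLAG_NAMES = {
    THIRSTY: "thirsty",
    HUNGRY: "hungry",
    CURSED: "cursed",
    HASTE: "haste",
    SHIELD: "shield",
}

def decode(conditions: int) -> list[str]:
    """De-compress the packed integer into human-readable condition names."""
    return [name for flag, name in FLAG_NAMES.items() if conditions & flag]

print(decode(31))  # ['thirsty', 'hungry', 'cursed', 'haste', 'shield']
```

Compact on the wire, but you need this code (or mental math) to read the payload, whereas a JSON list of condition names is legible at a glance.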

Similarly with protobuf, I find that it usually optimizes for the wrong thing. To be clear, one of my other fundamental philosophies about engineering is that performance is king and that you should try to make things fast, but there are certainly diminishing returns, especially in codebases where humans interact frequently with the data. Protobufs make things fast at a cost, and that cost is typically clarity and human readability. Versioning also creates more friction. I've seen teams spend an inordinate amount of effort trying to ensure that both the producer and consumer are using the same versions.

This is not to say that protobufs are useless. They're great for enforcing API contracts at the code level, and they provide those speed improvements OP mentions. There are certain high-throughput use-cases where this complexity and relative opaqueness is not only an acceptable trade-off, but the right one to make. But I've found that it's not particularly common, and people reaching for protobufs are often optimizing for the wrong things. Again, clarity over cleverness and simplicity over complexity.

I know one of the arguments is "it's better for situations where you control both sides," but if you're in any kind of team with more than a couple of engineers, this stops being true. Even if your internal API is controlled by "us," that "us" can sometimes span 100+ engineers, and you might as well consider it a public API.

I'm not a protobuf hater; I just think that the vast majority of engineers could go through their careers without ever touching protobufs, never miss them, never need them, and never find themselves in a situation where eking out that extra performance is truly worth the hassle.


If you want human-readable, there are text representations of protobuf for use at rest (checked-in config files, etc.) while still being more efficient over the wire.
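For instance, protobuf's text format keeps field names and nesting visible; this sketch assumes a hypothetical `Player` message, not any real schema:

```protobuf
# player.textproto -- protobuf text format for a hypothetical Player message.
# Readable and diffable at rest; the binary wire format carries the same data.
name: "jemaclus"
conditions {
  hungry: true
  thirsty: true
}
hit_points: 42
```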

In terms of human effort, a strongly typed schema rather than one where you have to sanity check everything saves far more time in the long run.


Great writing, thanks. There are, of course, two sides, as always. I think that, especially for larger teams and large projects, protobuf in conjunction with gRPC can play nicely with the backwards-compatibility features, which make it very hard to break things.


Yes to all of this.

Also the “us” is ever-changing in a large enough system. There are always people joining and leaving the team. Always, many people are approximately new, and JSON lets them discover more easily.


IIRC, AvatarMUD (avatar.outland.org) has 20,000+ rooms in it. It's been a long time since I played, but it's absolutely massive!


Yes, I remembered that some MUDs touted many more rooms. I've found Aarchon (15K) [1] and SlothMUD (23K) [2]. And that's just the number of rooms; the other numbers (8K mobs, 7K items, 12K NPCs) are massive as well.

But big MUDs usually have "builder" teams, so the comparison is unfair. Even so, these numbers have hardly been matched by game studios; WoW or Runescape come to mind. And then there's Dwarf Fortress, which reaches the infinite in some categories thanks to procedural generation.

[1] https://www.aarchonmud.com/arc/features

[2] https://www.mudportal.com/listings/by-genre/hack-slash/item/...


Yah, Aardwolf (the one I played in ancient times) apparently has 35,000 rooms.

Collaborative building over decades adds up!


I'm also deaf, and I took 14 years of speech therapy. I grew up in Alabama. The only way you would know I'm from the South is because of the pin-pen merger[1]. Otherwise, you'd think I grew up in the American Midwest, due to how my speech therapy went. Almost nobody picks up on it, unless they are linguists who already know about the pin-pen merger.

[1] https://www.acelinguist.com/2020/01/the-pin-pen-merger.html


I’m aware of the merger, but I literally can’t hear a difference between the words. I certainly pronounce them the same way.

I also think merry-marry-Mary are all pronounced identically. The only way I can conceive of a difference between them is to think of an exaggerated Long Island accent, which, yeah, I guess is what makes it an accent.


That's exactly what the pin-pen merger is! As you know, it's not limited to pin/pen, and hearing ability (in my case, profound hearing loss) is not related to the ability to hear the difference. I don't understand the linguistics, but my very bad understanding is that there's actual brain chemistry here that means that you _can't_ hear the difference because you never learned it, never spoke it, and you pronounce them the same.

My partner is from the PNW and she pronounces "egg" as "ayg" (like "ayyyy-g"), but when I say "egg" she can't hear the difference between what I'm saying and what she says. And she has perfect hearing. But she CAN hear the difference between "pin" and "pen," and she gets upset when I say them the same way. lol

But yeah, that's one of the things that makes accents accents. It's not just the sounds that come out of our mouths but the way we hear things, too. Kinda crazy. :)


When I was listening to some of the samples on the page you linked (pronunciation of “when”), it really seemed to me like the difference they were highlighting was how much the “h” was pronounced. Even knowing what I was listening for, it was very much like my brain was just refusing to recognize the vowel sound distinction. So I think you must be right about it being a matter of basic brain chemistry.

In the example of the reverse pen/pin merger (HMS Pinafore) on that page, I couldn’t hear “penafore” to save my life. Fascinating stuff.

I used to think of the movie “Fargo” and think “haha comical upper midwestern accents.” And then at some point I realized that the characters in “No Country for Old Men” probably must sound similarly ridiculous to anyone whose grandparents and great grandparents didn’t all speak with a deep, rural West Texas accent - which mine did, so watching the movie it just seemed completely natural for the place and time at a deeply subconscious level.


They are the same phoneme for me in US Eastern suburbia; the only difference is a subtle shift in how long you drag them out. "Merry" is faster than "marry," which is sometimes but not always faster than "Mary." Most UK accents seem to drag the proper name out an additional beat, and for some of them there's a slight pitch shift that sounds like "ma-ery," at its most extreme in Ireland (this is one early shibboleth by which I recognized Irish people before I really picked up on the other parts of the accent).


As someone with a German accent, to me the difference between merry and marry is the same as between German e (in this case ɛ in IPA) and ä (æ in IPA). Those two sounds are extremely close, but not quite the same. According to the Oxford dictionary, that holds in British English, while it shows the same pronunciation (ɛ) for both in American English.


This is WILD. I love it. Congrats on shipping!


Thank you! Shipping for the first time was definitely nerve-wracking. Really appreciate the positive feedback!


You should install it, because it's exactly what you just described.

Edit: From a UI perspective, it's exactly what you described. There's a dropdown where you select the LLM, and there's a ChatGPT-style chatbox. You just docker-up and go to town.

Maybe I don't understand the rest of the request, but I can't imagine software where a webpage just magically has LLMs available in the browser with no installation?


It doesn't seem exactly like what they are describing. The end-user interface is what they are describing but it sounds like they want the actual LLM to run in the browser (perhaps via webgpu compute shaders). Open WebUI seems to rely on some external executor like ollama/llama.cpp, which naturally can still be self-hosted but they are not executing INSIDE the browser.


Does that even exist? It's basically what they described but with some additional installation? Once you install it, you can select the LLM on disk and run it? That's what they asked for.

Maybe I'm misunderstanding something.


Apparently it does, though I'm learning about it for the first time in this thread also. Personally, I just run llama.cpp locally in docker-compose with anythingllm for the UI but I can see the appeal of having it all just run in the browser.

  https://github.com/mlc-ai/web-llm
  https://github.com/ngxson/wllama


Oh, interesting. Well, TIL.


> You should install it, because it's exactly what you just described.

Not OP, but it really isn't what they're looking for. Needing to install stuff vs. simply going to a web page are two very different things.

