p.s.
Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting. But we made an exception in this case. Please don't draw conclusions from that since we'll probably get less series-ey, not more, after this! Better to bundle into one longer article.
This series is seriously the best thing I have read about AI. Thank you thank you thank you for doing so much hard thinking and taking the time to write it all up. It's a monumental work and extremely valuable.
The next time someone asks me where I think AI is going, I'll just point them at this series.
I have read every post in the series and really appreciated it.
I've had a tremendous amount of respect for you since I first encountered the Jepsen analyses, but your breakdown of the likely impacts of LLMs and ML may impress me more.
You've articulated very well several concerns of mine that I haven't seen anyone else mention, and highlighted other issues I had not previously recognized.
Thank you for publishing this now, when it could still have some influence, rather than polishing and researching and refining until it was thoroughly rigorous and too late to be relevant.
> Normally we downweight subsequent articles in a series because avoiding repetition of any kind is the main thing that keeps HN interesting
This is in jest, right? Yesterday HN featured a paper from 1983... I regularly see 20-year-old articles reposted, and you've even got a section in your code of conduct arguing that you aren't Reddit, despite the apparent nature of this site...
The overall topic is the same, even in the hypothetical sequence you mention. Keep in mind that even if an article series is strictly partitioned into distinct parts, the discussion threads mostly won't be - all the different aspects will blend together, which means the threads will be more like "the same soup over and over" than "one about metallurgy, one about design, etc."
(Edit: I just noticed that strbean already made this point in the sibling comment!)
Also: usually the splitting into a series is somewhat artificial. In the worst cases, people try to make the segments be like TV episodes with cliffhangers, to push you to the next bit. That's a poor fit for HN. But even when they don't, to get the full "meal" you still have to go through all the parts. Few people do that, and the threads as a whole never do. This makes it less interesting and satisfying.
But there can be exceptions, and (ironically?) featuring an occasional exception mixes things up and so reduces repetitiveness! The trouble is that once people see one exception, they immediately expect/want others, pushing things back into a repetitive sequence and making the site less interesting again. It's a bit like telling the same joke twice in a row—the interest is all in the first telling.
Guess: there is likely some repetition in articles in a series, but there is a ton in the discussion here, and that is what HN wants to avoid. Discussion on a link that bundles together the parts of a series helps avoid excessive rehashing in the comment sections.
If you keep this up, we're going to have to ban you. I don't want to ban you, so if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
(other mod here) - not your bad! our complexity :) - usually it works exactly as you described, but when the post is older than a few days we have to do it the other way, by spawning a new post. The reasons for this are mostly technical and boring.
[2020] and wow, what a title. It looks like someone was trying to decide between "How Wake-On-LAN works" and "How does Wake-On-LAN work" and "How do Wake-On-LANs work" and just picked a random combination of words from those choices.
This sort of thing is quite common among non-native speakers. The fact that you can say "how does X work" and "how X works" but not "how does X works" isn't particularly obvious, and it's easy to mix up.
This is one of those slippery-slope things where Grammarly did "just" grammar, then slowly got into tone and perception and brand-voice suggestions, and now seems to more or less just want to shave everything down to be as bland as possible.
All you have to do is prompt your AI with a writing sample. I generally give it something I wrote from my blog. It still doesn't write like I do, and it seems to take more than that to get rid of the em-dashes, but it at least kicks it out of "default LLM" mode and is generally an improvement.
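For concreteness, here's a minimal sketch of that "seed it with a writing sample" approach, assuming the OpenAI Python client; the file path, model name, and instructions are placeholders, not anyone's actual setup:

    # A sketch of the "give it a writing sample" trick, using the
    # OpenAI Python client. Model name and file path are placeholders;
    # any chat-style LLM API works the same way.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical: a post of yours that captures your voice.
    with open("my_blog_post.txt") as f:
        writing_sample = f.read()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Match the voice, rhythm, and vocabulary of the "
                    "sample below. Avoid em-dashes and stock phrasing.\n\n"
                    "SAMPLE:\n" + writing_sample
                ),
            },
            {
                "role": "user",
                "content": "Draft an intro paragraph for my next post.",
            },
        ],
    )
    print(response.choices[0].message.content)

Putting the sample in the system message rather than the user turn tends to keep the style constraint in force across the whole exchange, though as noted above, a single sample usually isn't enough to fully escape the default voice.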
I tried using an LLM to help me write some stuff, and it simply didn't sound like I'd written it - or rather, it did, but in a kind of otherworldly way.
The only way I can describe it is like when I was playing with LPC-10 codecs (the 2400 bps codec used in the Speak & Spell and other such '80s talking things). It didn't sound like me; it sounded like a Speak & Spell with my accent, if that makes sense.
No? Okay. If not, I could probably record another clip to show you, if you want.