The duct tape people are generally drowning under all the work they're doing, and they'd be fine to keep doing other productive stuff.
There's always going to be duct tape work around, and you don't want the people who can actually fix widgets to wind up running around with their hair on fire applying duct tape just to keep things running, with no time left to fix widgets.
And when there are too many duct tape jobs going around, the widget fixers may take a look at it all and decide they don't want to get stuck applying duct tape once all the duct tape appliers are fired, so they just skip to some other job.
More than half the SCOTUS is corrupt and bought off, and the Republican Party in congress is just rubber-stamping what Trump wants. I don't have a lot of faith in the word "unconstitutional" anymore.
It's really kind of weird how many people think it's just fine for services to take over people's accounts.
Of course, they can literally do whatever they like, it is their platform.
But it would be nice if everyone considered what it would be like for a platform to just arbitrarily nuke their account one way or another.
There's probably a lot of "well they wouldn't do that, I don't have a valuable named account, and I'm a user in good standing" thinking, but in reality they can do it for whatever reason they like and there are no actual guardrails--so anyone's account is equally at risk if the platform decides to act.
Books and newspapers have had editors for centuries. It is just code review for the written word.
[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]
I don't personally use AI/LLMs for any informal writing here or on reddit, etc. But I think it is pretty weird to be overly concerned about people, particularly ESL speakers, who use tools to clean up their writing. The only thing I really care about is when someone posts LLM-regurgitated information on topics they personally don't know anything about. If the information is coming from the human and the machine is just tweaking style and tone to make it better received and fix the bugs in it, then I don't understand why you're telling me I need to care, or why you're gatekeeping it. It's also unlikely to be very detectable, and this thread seems to serve only a performative use for people to get offended about it.
Other tools to clean up writing are allowed. They did not tell you that you must care. You told them they must not. The submission's purpose was to tell you and others that LLM-generated tone is not more acceptable.
> HN has always been a spirit-of-the-law place, and—contrary to the "technically correct is the best correct" mentality that many of us share—we consciously resist the temptation to make them too precise.
The problem with “spirit-of-the-law” is that having rules be subject to discretion is a pretty clear avenue for discrimination and abuse. Not as big of a deal for an Internet forum as it would be for, say, a country's legal code and the enforcement thereof, but the lack of a clear standard for a rule makes that rule hard to follow and harder to enforce impartially.
The typical problem with trying to create clear standards with no spirit of the law is that the 1st, 2nd, etc. iterations of the standards never match the original intentions, at least when dealing with anything nuanced. It can get to the point where it takes more time and effort to follow the clear standards than to think through each case fresh. The rules can also eat up time and effort to maintain, and distract from the original purpose.
"Don't post generated comments or AI-edited comments."
What about non-native speakers? Can they not use translation software like Google Translate anymore?
"Don't post generated comments or AI-edited comments, except for translating to English"
What about cases of disabilities?
"Don't post generated comments or AI-edited comments, except for translating to English and when used as assistive technologies."
Some translation tools and assistive technologies are still going to cause the same issues we have right now, so maybe limit which technologies can be used:
"Don't post generated comments or AI-edited comments, except for translating to English and when used as assistive technologies. Technologies x, y, z are not allowed; a, b, and similar can be used for translation; c and d as assistive technologies."
But we do not want to spend time/effort on filtering technologies and/or people into the above categories.
In the long run we will likely come up with technologies that most everyone is satisfied with for the different use cases: spelling and grammar, assistive, maybe even tone, and others.
In the meantime we cannot let the perfect be the enemy of the good. If there are clear standards that achieve the goals, great; if not, we have to do something until everything shakes out.
Nobody is going to stop using Grammarly extensions to post to HN, and nobody is going to be able to detect their usage.
This thread just lets a certain kind of people put on their best condescending hall-monitor voice and lecture other people about how they should behave.
And the rule is arguably less useful than speed limits and will be broken about as often (at least speed limits have a very real link to physical safety via kinetic energy).
> This thread just lets a certain kind of people put on their best condescending hall-monitor voice and lecture other people about how they should behave.
I think it is, at least mostly, about the blatant cases that are often already downvoted and flagged, and about making that official.
> And the rule is arguably less useful than speed limits and will be broken about as often (at least speed limits have a very real link to physical safety via kinetic energy).
I often see the rules in:
https://news.ycombinator.com/newsguidelines.html
broken, mostly in small ways, but I still think we are better off with them or something similar rather than having nothing.
> I think it is, at least mostly, about the blatant cases that are often already downvoted and flagged, and about making that official.
Which raises the question of why an official guideline is necessary in the first place. Obvious LLM slop being downvoted into oblivion is itself a good enough measure, without needing to create extra rules by which to hang the innocent.
I'm not sure. How common is it to review the outsourced development team's code? My guess is that there is rarely any review. They usually ship the whole software and are responsible for it.
Yeah, if you turn it all into buttons and settings in the actual settings menus, someone else is going to post a long rant about how the settings menus have a million confusing options that nobody uses...
Mine also isn't anywhere nearly as confusing as his by default, so this smells like a power-user-has-power-user-problems-and-solutions rant...
They intentionally made the menu longer to look worse by selecting some text first. So it is showing four sets of contextual actions: For the Link, the Image, the Selection, and the Page.
Also a few of the menu items are new since the latest ESR (the AI stuff in particular), so you won't see them if you are running v140.
What is more likely to happen though is that it doesn't take multiple $10B of datacenter and capital to build out models--and the performance against LLM benchmarks starts to max out to the point where throwing more capital at it doesn't make enough of a difference to matter.
Once the costs shrink below $1B then Apple could start building their own models with the $139B in cash and marketable securities that they have--while everyone else has burned through $100B trying to be first.
Of course the problem with this strategy right now is that Siri really, really sucks. They do need to come up with some product improvements now so that they don't get completely lapped.
And they will most likely also be the last to benefit from hypothetical efficiency gains because they haven't been building up expertise (by burning billions) yet.
Being able to greenfield something new is a tempting pitch to use to poach employees.
And first to market often doesn't win, or else WebVan would still be doing grocery deliveries. We tend to overstate the first-mover advantages because we more easily remember the cases where that turned into lasting dominance while forgetting all the companies that died to first-mover disadvantages.
This is Meta claiming in their internal communications that they plan on doing it while people are distracted with other concerns.
It isn't really "rhetoric", they're talking like they believe this actually happens, this is strategy.
And I tend to agree with them that things like attention and political capital are ultimately finite resources.
I've found that the "we can do two things" and "we can walk and chew bubblegum" line of argument to be simplistic and just wrong (and pretty incredibly patronizing). I think the world works exactly the way Meta thinks that it does here.
It might blow up and turn into a Streisand effect, but more often than not this kind of strategy works.
Much like how people think they can multitask and talk on the phone and drive at the same time and every scientific measure of it shows that they really can't.
> I've found that the "we can do two things" and "we can walk and chew bubblegum" line of argument to be simplistic and just wrong
It's painfully obvious to me society cannot do two things at once. You focus on one shared goal as a culture or everything falls apart very rapidly - as we are seeing today. It's why a common external "enemy" (e.g competitor, nation state, culture, whatever) has historically been so important.
The shared goal can be complex in nature, which requires many disciplines to come together to achieve it via a series of many parallel activities that might look like they are all doing something random, but it's all in the service of that singular shared goal.
This holds true from my experience at the national level all the way down to small organizations.
Yeah, I still remember when I flipped from Linux to Mac at home. In my case it was a long time ago, when I got a 4k monitor and couldn't scale the display text/icons, so I couldn't read shit on it, and setting up a multi-monitor configuration on Linux with different display resolutions was completely impossible. It took a few button clicks on the Mac.

Digging into the issue on the Linux side, there was some developer just yelling into the issue tracker that people couldn't see 4k resolution, so there was no point in buying that hardware and everyone purchasing 4k monitors was making a mistake. I'm sure it has long since been fixed, but that's the social problem waiting there. It won't be that exact issue, but there'll be something else like it...