That’s why you need filler words that contribute little to the sentence’s meaning but give the model a chance to compute/think. This is part of why humans do the same when speaking.
The LLM has no accessible state beyond its own output tokens; each pass generates a single token and does not otherwise communicate with subsequent passes. Therefore all information calculated in a pass must be encoded into the entropy of the output token. If the only output of a thinking pass is a dumb filler word with hardly any entropy, then all the thinking for that filler word is forgotten and cannot be reconstructed.
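One rough way to see the "token-only state" claim is a toy decode loop. The model below is a made-up stand-in, not a real LLM, but it shows the structural point: each pass is a pure function of the token history, so whatever it computes internally is gone unless it shows up in the emitted token.

```python
# Toy illustration (assumed/simplified, not a real LLM): an autoregressive
# decoder is a pure function of its token history. Nothing computed inside
# one pass survives to the next pass except the single token it emits.

def toy_model(tokens: list[int]) -> int:
    # Pretend this line does billions of FLOPs of "thinking"...
    scratch = sum(tokens) * 31 + len(tokens)
    # ...but all of it gets collapsed into one output token.
    return scratch % 50

def generate(prompt: list[int], n: int) -> list[int]:
    tokens = list(prompt)
    for _ in range(n):
        nxt = toy_model(tokens)   # each pass sees ONLY the token list
        tokens.append(nxt)        # the appended token is the sole carried state
    return tokens
```

(Real transformers do reuse a KV cache across passes, but that cache is itself recomputable from the tokens, so the argument about tokens being the only durable channel still holds in spirit.)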
Do you have any evidence at all of this? I know how LLMs are trained and this makes no sense to me. Otherwise you'd just put filler words in every input.
e.g. instead of "The square root of 256 is" you'd enter "errr The er square um root errr of 256 errr is" and it would miraculously get better? The model can't differentiate between words you entered and words it generated itself...
It's why it starts with "You're absolutely right!" It's not to flatter the user. It's a cheap way to steer the response into a space where it's actually utilizing the correction.
Something they don’t seem to mention in the article: Does greater model “enjoyment” of a task correspond to higher benchmark performance? E.g. if you steer it to enjoy solving difficult programming tasks, does it produce better solutions?
Pretty easy to test, I’d imagine, on a local LLM that exposes internals.
I’d suspect that the signals for enjoyment being injected in would lead towards not necessarily better but “different” solutions.
Right now I’m thinking of it in terms of increasing the chances that the LLM will decide to invest further effort in any given task.
Performance enhancement through emotional steering definitely seems in the cards, but it might show up mostly through reducing emotionally-induced error categories rather than generic “higher benchmark performance”.
If someone came along and pissed you off while you were working, you’d react differently than if someone came along and encouraged you while you were working, right?
If you think training a sparse autoencoder to extract concept vectors that are usable as steering injections into a modern LLM is pretty easy, you should probably go work for Anthropic's mech interp team ;)
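For anyone curious what a "steering injection" looks like mechanically, here's a minimal, hand-wavy sketch. All names are invented for illustration; real setups (like Anthropic's) derive the concept vector from model internals, e.g. via a sparse autoencoder, rather than hand-picking one, and add it to a specific layer's activations during the forward pass.

```python
# Hypothetical sketch of activation steering (names invented for illustration).

def steer(hidden: list[float], concept: list[float], alpha: float) -> list[float]:
    """Add a scaled concept vector to one layer's hidden activations."""
    return [h + alpha * c for h, c in zip(hidden, concept)]

# One cheap way to get a concept direction without an SAE: the difference of
# mean activations between "enjoyment" prompts and neutral prompts
# (a contrastive direction).
def contrastive_direction(pos: list[list[float]],
                          neg: list[list[float]]) -> list[float]:
    dim = len(pos[0])
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    return [mean(pos, j) - mean(neg, j) for j in range(dim)]
```

The hard part is everything this sketch skips: training the SAE, finding which feature actually corresponds to "enjoyment", and picking a layer and scale `alpha` that steer behavior without wrecking coherence.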
Anti-AI articles like this seem to be the new "Doing my part to resist big tech: Why I'm switching back from Chrome to Firefox" genre that popped up on HN for a decade or so. If it makes you feel better, great, but don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society.
This genre has always been very prevalent on HN. Move from cloud to on-premise. Move away from US-based services. Move away from Gmail. Move away from Github.
While it's relevant to the particular submission, this type of comment is thrown around way too often.
Me: boycotting some company's product due to bad practices (slavery, etc.).
Response: You know your boycotting isn't going to change anything, right?
Me: Yes, and...? I'm not trying to change the world.
I don't use FF as some form of protest. It's just a browser I like more.
I'm not anti-AI the way much of HN is, but let's pretend I am. If I ban AI generated content on my site[1], I'm not trying to change the world. Just controlling my site.
Getting more to your sentiment: The world/Internet is a vast place. If even 1000 people think like me, it's more than enough. For a number of years I had valuable online interactions via BBSs with a population < 1000. As long as I get 1000 people, let the rest of the world burn!
It's like the constant "Emacs is dying" threads we used to have on HN, because the percentage of SO users using it kept dropping. When in reality, the absolute numbers kept increasing. Who cares if the world has moved on to VS Code? Emacs as an ecosystem was/is thriving!
[1] Assuming I live in a fantasy world where I can classify content accurately...
I think much of current day neuroticism can be attributed to insincere comparisons of genuinely normal trade offs (like AI) to slavery in the past.
I think people are so afraid to do a hecking racism that they start comparing any normal thing to racism. I also think there’s an incentive here: by comparing to racism they potentially gain some social status points like - I’m more morally superior to you because I didn’t do a hecking racism like you.
But it can backfire, like with your comment. People are catching on to how ridiculous this comparison is.
> don't kid yourself that your actions will make any difference whatsoever to the overall trajectory of AI adoption in IT or society
How large does a group have to be (absolute number or percentage of population) for you to change your mind on this?
Serious question - genuinely curious. My answer is about 10% provided they are organised in some way. 5% if you're particularly good at collective action.
Plus many of these articles seem maximized to attract attention on social media, which is its own machine.
Posting your most provocative and strong opinions in reaction to the latest controversy-of-the-week is what fuels the internet and culture more than anything these days. The attention economy demands hot takes mixed with preaching about every new thing.
Responsive accordions are actually solved using CSS nowadays, but plenty of other things aren't, and the web has definitely needed an API or library like this for a long, long time. So it's great that we now have it.
Building something like this was certainly possible before, but it was a lot of effort. What's changed is simple: AI. It seems clear this library was mostly built in Cursor using an agent. That's not a criticism; it's a perfect use of AI to build something that we couldn't before.
> it's a perfect use of AI to build something that we couldn't before.
There's no reason why it couldn't have been built before. This is something that probably should exist as standard functionality, like what the Canvas API already includes. It's pretty basic functionality that every text renderer would include already at a lower level.
Doesn't seem relevant here. TurboQuant isn't a domain-specific technique like the Bitter Lesson is talking about; it's a general optimisation for transformers that leverages computation more effectively.
Sure thing, here's a report from the Greater London Authority tracking the history of air quality in the city since the "Great Smog" event of 1952, which caused an estimated 4,000 deaths.
The main takeaway is that yes, urban air quality (including fine particulate matter) has improved massively over time, but most of it had little to do with road traffic, as for decades it wasn't a significant contributor to the overall mix. The important change was the move away from burning solid fuels like coal for household heating and in power stations within cities, to using gas and electricity with larger, out-of-town power stations.
As other sources have declined, road traffic has indeed become the largest contribution to urban air pollution, but even here there has been progress. Fine particulate emissions have continued to decline as car manufacturers have adapted to more stringent regulation (cheating scandals notwithstanding). A bigger problem now is higher non-exhaust emissions caused by larger and heavier vehicles. This is something else that will need to be solved via regulation. Other policies like Low Traffic Neighbourhoods can also help to restrict the worst pollution to major roads and away from where most people live.
Urban air quality is never going to be as good as the countryside's, but it's wrong to believe that no progress has been made and that it has simply been a switch in the type of pollution.
Which cities are you referring to? Some cities have policies that discourage gas and diesel cars, and plans to outlaw them by 2030, but I'm not aware of any that have banned them outright yet.
Would you say someone suffering from locked-in syndrome is of a different order of intelligence due to their no longer having a fully embodied experience?
Not parent, but I would say their experience, even though severely impaired in many areas, is still infinitely more embodied than any human artifact is or even conceivably could be, simply because of the millions of years of embodied evolution that have shaped them into who they are, and because of the unimpaired embodiment of most of the cells that make up their organism.
"We’re investing in trade schools and scholarships to recruit technicians for vehicle repair as well as our factories."
"But this is a society problem. The one that bothers me the most is cultural. We, as a culture, think that everyone has to go to an Ivy League school to be valuable in our society, yet we all know that our parents and grandparents made our country wonderful because of these kinds of jobs. There’s incredible dignity in emergency services, and people can have wonderful careers. But our society doesn’t celebrate those people like they do the latest AI engineer."
Of course it is: if you tout investments, that means you previously didn't invest. Whereas any competent business would be looking at demand, head counts, and ages, and doing some quick math.
And, e.g.:
> There’s incredible dignity in emergency services, and people can have wonderful careers.
Not really. Ask anyone who does it; you'll hear minimum wage or not much above, and endless transports of obese patients, i.e. huge risks to the joint health of the people stuck moving them.
And of course, dignity ain't cash. The whole thing is an extended whinge that rounds to "I don't want to pay more."
Plus the implicit idea that society is responsible for preparing employees for Ford, not Ford.
Yeah, why would we as a society want high-paying jobs, where we're not forced to take 40 minutes' pay for hours of work? Clearly it's the celebration that's missing.
That's the cultural issue that's talked about, which also caused the explosion of people trying to get into tech whether they actually liked the work or not.
Because the choice is/was: make $20/hr busting my ass with body-breaking work and barely scrape by, or get a CS degree and live comfortably, because no other career offers the pay required.
The cultural issue is - why aren't other careers paid as well? (Aka, why don't we value them). Someone risking bodily injury in a trade arguably should be paid more than most desk jobs, but they aren't.
Much like discussion here on HN about how we need an IC promotion path that doesn't lead to management, society needs equal opportunities for high paying careers across a variety of fields, not just white collar or tech work.
It depends on `image`, which in turn depends on a number of crates to handle different file types. If you disable all `image` features, it only has something like 5 dependencies left.
It is also important to note that this is not specific to Zed. As someone else has mentioned, it is a cultural problem. I picked Zed as an example because that is what I compiled last time, but it is definitely not limited to Zed. There are many Rust projects that pull in over 1000 dependencies and do much less than Zed.